Sunday, October 2, 2022

Comparison of Nuclear Accidents versus Nuclear Bombs: Part 2--Nuclear Weapons

Types of Hazards from Nuclear Weapons

I got a lot of information from this old (but still very informative) publication on the detailed effects of nuclear weapons: The Effects of Nuclear Weapons

I will divide the hazards of a nuclear weapon into three parts:

  1. Initial explosion. For a nuclear weapon, the initial detonation can be thought of as a very large conventional explosion “plus”. That is, it has all the characteristics of a regular explosion at a very large scale (heat, shock wave, shrapnel, etc.) plus a super powerful burst of thermal, gamma and neutron radiation. Nuclear weapons are far hotter than conventional weapons. This initial radiation burst has two practical consequences:
    1. A significant number of people at a certain distance from ground zero may survive the initial blast, but be partially “cooked” by the heat and radiation. This causes internal damage to victims of a unique kind.
    2. Nuclear weapons detonated in or above a city will cause everything flammable within a certain radius to burst into flames. Following this, gale-force winds will rush towards the center of the explosion to replace the air displaced upwards by the fireball / mushroom cloud. This will usually cause massive firestorms, which act as a secondary source of death and destruction after the initial blast.
  2. Near-term elevated environmental external radiation caused by the initial fallout. This is the “afterglow” of the nuclear explosion. Due to the intense radiation (via some mechanisms I will get into shortly), the immediate vicinity of a nuclear explosion will experience elevated radiation levels, which can cause significant health issues for anyone who is simply present in the area and exposed to them.
  3. Long-term radioactive pollution, in the form of various radioactive by-products that can be absorbed into the body and cause various types of damage via internal radiation.

In order to do an apples-to-apples comparison with a nuclear meltdown or a dirty bomb, we would like to be able to specify how these three hazards scale relative to each other and to the known power of a specific weapon. That is, given a specific kiloton rating of a nuclear weapon, on what scale would we expect to see each of these types of hazards?

This turns out to be a very difficult question to answer in general terms. Figuring out the initial blast radius and expected damage from a nuclear explosion is not too difficult, but the other two hazards depend on much less easy-to-determine factors. I’m going to explain the factors here, but be aware that this will lead to a very wide range of possible conclusions depending on the circumstances.

Blast Radius

The number of people killed by an explosion is going to be directly related to the total area of the explosion. The total area affected by the initial explosion of a nuclear weapon is directly related to the power of the weapon, but it is not linearly proportional to that power. That is, a nuclear weapon that is 10 times as powerful as another one will not have a blast radius that is 10 times as large, nor even an affected area that is 10 times as large.

The reason for this is easy to see with a little bit of visual imagination. An explosion is a 3-D event; the energy from the explosion begins in the center and radiates out in all directions. In an idealized world, the affected space of the explosion is a sphere, and the affected area on the ground will be (roughly) the shadow of this sphere. This means that while the volume of a nuclear explosion is going to be directly proportional to the power of the bomb, the radius will be (roughly) proportional to the cube root of the power, and the affected area to the cube root of the power, squared.

In practical terms, there are some modifications to this because of reflection effects–the sphere gets squashed and spread out, as it were. The real-world factor between bomb yield and radius is actually the 2.5th root rather than the cube root. This means, for example, that if you increase the power of a nuclear bomb by 15 times, the affected area will only be about 8.5 times as large. Likewise, a 100 fold increase in power will yield only a 40 fold increase in affected area, a 1000 fold increase in power will yield only a 250 fold increase in affected area, and so forth.
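
Here is a minimal sketch of this scaling rule in Python (only the 2.5th-root exponent comes from the discussion above; the yield multiples are arbitrary examples). It lands close to the ballpark figures just quoted:

    # Scaling of blast radius and affected area with bomb yield,
    # using the rule-of-thumb 2.5th-root relationship described above.

    def blast_scaling(yield_ratio):
        """Given a yield increase factor, return (radius factor, area factor)."""
        radius_factor = yield_ratio ** (1 / 2.5)   # radius ~ yield^(1/2.5)
        area_factor = radius_factor ** 2           # area ~ radius^2
        return radius_factor, area_factor

    for y in (15, 100, 1000):
        r, a = blast_scaling(y)
        print(f"{y}x the yield -> {r:.1f}x the radius, {a:.1f}x the area")

    # Output:
    # 15x the yield -> 3.0x the radius, 8.7x the area
    # 100x the yield -> 6.3x the radius, 39.8x the area
    # 1000x the yield -> 15.8x the radius, 251.2x the area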

Initial Fallout

For this section of the post, you should recall what has already been said about types of radiation in this blog post: “Key Points You Need to Know When Talking About Energy: Radiation”--namely, that neutron radiation has the capability of making other things radioactive. Neutrons released from the explosion slam into whatever they randomly hit. Sometimes, this will cause the neutron to become absorbed by the atom with which it collides. When this happens, that atom is now an unstable form of the same element: a radioactive isotope. Matter thus subjected to intense neutron radiation is said to have “induced activity”. For such material, it is then only a matter of time before its unstable atoms break down, re-emitting the absorbed neutron or otherwise decaying and giving off radiation.

After a nuclear explosion, matter that is highly radiated by the intense neutron radiation of the nuclear fireball will be given this “induced activity”, and will subsequently immediately begin releasing decay radiation–a sort of radiation “echo” of the initial explosion.

It is important to understand that this radioactive material is of a different kind (for the most part) than the radioactive by-products of the fission reaction itself. The initial blast of fission is driven by high-neutron-count isotopes of heavy actinides: plutonium, uranium, etc. These materials have a very long radioactive life, because their atoms are composed of such large, unstable clumps of protons and neutrons. When an atom of plutonium splits, the several by-products it splits into are themselves still large and unstable; they will end up going through a long series of decays (emitting radiation all the while) before finally reaching a stable state.

On the other hand, if a silicon or manganese atom absorbs a neutron, it will become radioactive only for as long as it takes to re-emit that neutron (or otherwise decay). Induced radiation is therefore much more short-lived than heavy-isotope, fission by-product radiation.

Magnitude of the initial fallout: variable

What is the rough magnitude of this initial fallout, compared to the yield of the particular nuclear weapon? This depends on how much material is irradiated, and what the strength of that radiation is. The strength of the radiation is directly proportional to the yield of the bomb, so this part of the factor will scale with the size of the explosion in general. However, the amount of material that is irradiated depends very highly on where the bomb explodes. A surface or very near surface detonation will cause massive amounts of dirt, dust, and debris to be sucked up into the mushroom cloud and hence into the core of the irradiation in the nuclear fireball. An aerial detonation, on the other hand, will gather up a much smaller amount of material and thereafter release a much smaller amount of particulate fallout.

The difference between an aerial and a ground detonation is very large, changing the amount of initial fallout dramatically. And here, unfortunately, I have found a large amount of uncertainty, due to how nuclear testing has taken place. For the most part, true ground detonations have been avoided: tests have instead been detonated in the air atop a metal tower, underground, or underwater. Data on true ground explosions is therefore not really available. (ref. The Effects of Nuclear Weapons, section 9).

I think we can get a bit of a handle on the variability by looking at the type of variation that happens within a single fallout zone. Because of the effects of wind and terrain, fallout doesn’t settle evenly across a whole area: discrete “pockets” with up to 10 times the average amount of radiation have been observed where wind and terrain patterns collected the dust and debris. In a hand-wavey way, then, I think this kind of justifies assuming a variation in total initial fallout between air bursts vs. ground blasts of up to 10x.

Area affected by the initial fallout: a rough estimation

There is a useful rough approximation for the area expected to be impacted by fallout of a nuclear surface blast, available here: The Effects of Nuclear Weapons, p. 9.93. To use the diagram and table provided there, you map the “rads/hr” map in the diagram to the location in question, scaling the width of the plume to the actual wind speed of your example (the baseline is given for 15 mph). What I want to point out is that the length and width of the resulting elliptical shape are both given in terms of the yield of the bomb, raised to approximately the 1/2 power. The total affected area is therefore going to be roughly linearly proportional to the yield of the bomb. Keep in mind, however, that whatever rough estimate you get from this method is only roughly predictive of the total fallout, given the variability in the amount of material that ends up irradiated by the bomb based off of how it is detonated.
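
To make the contrast with the blast scaling concrete, here is a small sketch; the exponents are the only inputs taken from the text, and the yield ratios are arbitrary:

    # Blast area scales sub-linearly with yield (previous section),
    # while the fallout footprint scales roughly linearly, since plume
    # length and width each scale as roughly yield^0.5.

    def blast_area_factor(yield_ratio):
        return yield_ratio ** (2 / 2.5)                  # area ~ yield^0.8

    def fallout_area_factor(yield_ratio):
        return (yield_ratio ** 0.5) * (yield_ratio ** 0.5)  # ~ yield^1.0

    for y in (10, 100, 1000):
        print(f"{y}x yield: blast area x{blast_area_factor(y):.0f}, "
              f"fallout area x{fallout_area_factor(y):.0f}")

    # Output:
    # 10x yield: blast area x6, fallout area x10
    # 100x yield: blast area x40, fallout area x100
    # 1000x yield: blast area x251, fallout area x1000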

Durability of the initial fallout: predictable

Acknowledging the wide variability in total fallout, can we nevertheless estimate how long the direct radiation from the afterglow of a nuclear explosion is likely to be problematic? In fact we can, for two reasons. First, while the amount of radioactive material generated by the explosion is proportional to the power of the bomb, the area over which the material is dispersed is–as we just noticed–also roughly proportional to the power of the bomb. This means that the total area affected by this initial fallout is going to vary according to the power of the explosion, but the intensity of the resulting immediate fallout radiation is going to be rather consistent even between blasts of very different power.

Second, the time it takes for this radiation to dissipate is controlled by exponential decay. The rule of thumb (for the first six months after an explosion) is that for any given radiation intensity in an area of immediate fallout, a seven-fold increase of time interval will cause the amount of radiation to decrease by ten-fold. After six months, the falloff happens even more rapidly.

This exponential decay means that even fairly large variations in the initial intensity of the radiation will reduce to very similar small amounts in very similar time periods. Supposing you did have a full 10 times as much initial radiation from a particular type of nuclear explosion–well, all you would need to do is wait a week and the levels from the more potent blast will be down 1/10th and be merely equal to the amount the cleaner blast produced one day ago. Once you get beyond six months past the time of the initial fallout, there will be very little practical difference between residual radiation of this shorter-lived type.
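
The usual way to express this rule of thumb as a formula is a power-law decay in time, roughly proportional to t^(-1.2); note that 7^1.2 is about 10.3, which is where the seven-for-ten trade comes from. A minimal sketch, with an arbitrary starting dose rate:

    # The "7:10 rule" for early fallout: dose rate falls off roughly as
    # t^(-1.2), so a 7x increase in time gives about a 10x decrease.

    def dose_rate(r1, t_hours):
        """Dose rate at time t, given the rate r1 measured at 1 hour."""
        return r1 * t_hours ** -1.2

    r1 = 1000.0  # dose rate at H+1 hour, arbitrary units
    for t in (1, 7, 49, 168):
        print(f"t = {t} h -> {dose_rate(r1, t):.1f}")

    # Output (each 7x step in time divides the rate by ~10):
    # t = 1 h -> 1000.0
    # t = 7 h -> 96.8
    # t = 49 h -> 9.4
    # t = 168 h -> 2.1

This matches the claim above: a blast ten times dirtier at day one needs about seven times longer to reach any given level.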

In conclusion, I think we can say that the initial fallout from a nuclear weapon will present a very varied initial amount of radioactive fallout, but that the effects are going to be very similar in the near to mid-term: evacuate the area affected, and come back when the levels have become safe again, which will be a matter of some weeks or months.

Long term fallout effects

While the majority of the radioactive by-products of a nuclear explosion decay very rapidly after the initial period of danger (on the time scale of hours, days, or weeks for the most part), some radioactive elements present more medium-term and long-term hazards. The initial load of radioactivity from fallout is strong enough to cause health problems even as external sources of radiation–enough radiation can penetrate your skin to cause real health problems in the immediate aftermath of a nuclear explosion.

Once this radiation has died down to a power level at which your skin is an adequate protection against most of the damage it may cause, the concern then shifts to those radioactive materials still present which may be ingested or inhaled and then persist in the body. Now, most of the radioactive materials in fallout can be ingested or inhaled; however, what is particularly damaging are those materials that have some specific use in living bodies, or which chemically mimic elements that have such specific uses.

The isotopes that are particularly problematic here are iodine-131, strontium-90, and cesium-137. Iodine is used by the body and, when ingested, tends to accumulate in the thyroid gland. Strontium is in the same column of the periodic table of elements as calcium, and is therefore absorbed into your bones in the same way if you ingest it. Likewise, cesium is chemically similar to potassium and will be absorbed by your body in the same way. All of these elements can also be absorbed by animals and then passed back into the food supply, the most worrisome pathway being milk from cows that have ingested contaminated feed.

Amounts of radioactive material which would not be dangerous in the open environment can still cause significant health issues if collected directly into your vulnerable tissues and not eliminated rapidly. Each of these problematic materials has been shown to cause cancers of various types.

Of these three materials, iodine is more of a short-to-medium term danger. It has a half-life of about 8 days, so it decays away into irrelevance fairly quickly.  In terms of practical considerations, iodine-131 really should be counted more as a problem of the late "Initial Fallout" stage, rather than a true long-term effect.

Cesium and strontium, on the other hand, both have a half-life of about 30 years. They need to be considered a risk, therefore, for a much longer time.

Quantity of Cesium and Strontium

What quantity of cesium and strontium can be expected, based on the power of the nuclear bomb exploded? For fission bombs, this is not too difficult to ascertain. As a rough estimate, a fission bomb will generate about 125 pounds of radioactive isotopes per megaton of yield (see The Effects of Nuclear Weapons, p.9.12). About 6.3% of that will be cesium-137 (about 7.9 pounds) and about 4.5% will be strontium-90 (about 5.6 pounds). (Ref. “Fission products by element”)
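
As a minimal sketch of this arithmetic (the constants are just the figures cited above):

    # Back-of-envelope fission product inventory, per the figures cited
    # above: ~125 lb of fission products per megaton of fission yield,
    # of which ~6.3% is cesium-137 and ~4.5% is strontium-90.

    POUNDS_PER_MEGATON = 125.0
    CS137_FRACTION = 0.063
    SR90_FRACTION = 0.045

    def fission_products(yield_mt):
        total = POUNDS_PER_MEGATON * yield_mt
        return total * CS137_FRACTION, total * SR90_FRACTION

    cs, sr = fission_products(1.0)
    print(f"1 Mt fission: {cs:.1f} lb Cs-137, {sr:.1f} lb Sr-90")
    # -> 1 Mt fission: 7.9 lb Cs-137, 5.6 lb Sr-90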

However, most tactical weapons nowadays are not pure fission bombs, and here, unfortunately, we are forced to admit a lot of uncertainty. In this case, the amount of cesium and strontium produced is determined to a large extent by the specific type of explosion that is generated, and hence by the design of the bomb. Producing a nuclear explosion is not actually a simple procedure, and a modern bomb typically involves a carefully designed sequence of stages rather than a single blast. In a typical H-bomb, an initial fission explosion triggers the fusion stage, and the fast neutrons from the fusion reaction then flash-fission a depleted uranium shell around it; it is mostly this final fission stage that generates the cesium and strontium.

So to really know how much cesium and strontium you can expect from such a bomb, you need to know what percentage of the explosive energy of the bomb comes from fission as opposed to fusion, and in order to know *that*, you need specific details about the design of the bomb.

Unsurprisingly, I have been unable to find the required detailed schematics for constructing an H-bomb online that would enable me to nail down specifics for this question.

In general, what I have read indicates that fusion bombs tend to be cleaner than fission bombs, but not as much as you might think or want. I would suggest, as a quick-and-dirty rule of thumb, that we just estimate the fallout products of a fusion bomb to be roughly half of what an equivalent yield fission bomb would be--"The Effects of Nuclear Weapons" makes just this estimate at one point. But I have to admit that this is just a very rough estimate.

Effects of Cesium and Strontium

I should note something important here: it appears that the health effects of these long-term nuclear products have been negligible, so far as we can tell from those few instances of real-life fallout impacting real people that we have observed. Now, this is a very small set of instances--basically, Hiroshima and Nagasaki, the Marshall Islands fallout exposure incident, and Chernobyl, and I think that's it.  Almost all of the *demonstrated* adverse health effects from radioactive fallout in these cases have come from iodine-131, not from the cesium or strontium. (see The Effects of Nuclear Weapons, section 9).

So we don't really have enough data to *know* how much cesium and strontium pollution from a nuclear weapon or nuclear accident is enough to cause significant adverse health effects in the long-term.


Summary

The effects of the detonation of a nuclear weapon can then be summarized as follows:

  1. Short term, a massive radius of destruction, proportional to the size of the bomb.
  2. Medium term, a large area of dangerous fallout, which will render some areas unsafe for a period of time, possibly for months but not for years. Iodine-131 is the biggest issue once a few weeks have elapsed from the time of the explosion.
  3. Long term, a certain amount of dangerous materials (primarily cesium-137 and strontium-90) deposited in the environment: about 8 pounds of cesium-137 per megaton of fission explosion, and maybe 4ish pounds per megaton of H-bomb explosion, and about 5.5 and 2.75 pounds of strontium-90, respectively. These amounts, however, do not seem to entail a lot of health consequences that we know about, given our admittedly limited experience.
This gives us a frame of reference with which to compare the consequences of a nuclear meltdown, which will be the next entry in this series.






Tuesday, May 24, 2022

Comparison of Nuclear Accidents versus Nuclear Bombs: Part 1

This is the first in a series of posts comparing the negative consequences that follow from a nuclear meltdown from a nuclear power plant to those following from the use of a tactical nuclear weapon.

The motivation behind this planned blog post came from Putin seizing various nuclear power plants in Ukraine.  Russia has been using its ownership of nuclear weapons as insurance against the major powers engaging its forces directly.  Since no one wants an all-out nuclear war, no nuclear-armed power in the West wants to engage directly with another nuclear-armed nation, out of fear of what would happen.

If Putin were to use a nuclear weapon *first*, this would instantly change that calculation.  But then the question was raised: if Putin is willing to shell nuclear power plants and maybe even engineer a nuclear meltdown or a dirty bomb and blame the Ukrainians (a speculated course of action), would that "count" as the use of a nuclear weapon, and therefore trigger a recalculation which would justify the direct involvement of NATO forces on the battlefield in Ukraine?

This comparison between a nuclear meltdown (potentially engineered) and the use of a tactical nuclear weapon is directly relevant to this question.

This post is going to clear up a few initial issues and explain why some available comparisons are inadequate.  I will then follow up with a post on estimating the consequences of nuclear weapons, followed by a similar post on nuclear meltdowns.  Finally, I will draw some tentative conclusions--but you should be warned, this is a difficult comparison to make and my conclusions will be very tentative.  In the end, I hope to have provided at least a framework for thinking about this problem, if not any other useful conclusion.

But let's get right to clarifying some initial points:

Tactical vs. Strategic Nuclear Weapons

So the first point to clear up is this term, "tactical nuclear weapon", which is a phrase that many people don't understand right away. In this phrase, the word "tactical" refers primarily to the intended purpose of the weapon, but also secondarily to the power of the weapon.

The opposite of "tactical" in this context is "strategic".  A "strategic" nuclear weapon is a large missile aimed at major population centers of your opponent's nation, and the goal of such a weapon is to be an existential threat to your enemy.  That is, by having such nuclear weapons, you have, as an option, the ability to threaten the end of your enemy's existence itself, if the existence of your own State is threatened.  This is the origin of the phrase "Mutual Assured Destruction", and it is the essential point of strategic nuclear weapons.

Tactical nuclear weapons, on the other hand, are designed to be used on the battlefield in order to achieve specific military objectives as part of a specific military campaign.  The classic example of such a purpose is the theoretical use of a nuclear weapon by NATO as a contingency to stop a mass blitzkrieg of Russian tanks from storming the Fulda Gap and taking Europe in a sudden surprise military operation.  [Note: this was back before we realized that a few shoulder fired missiles and some farm equipment can apparently do the job just as well.]  Such a use of nuclear weapons--for a specific battlefield need--would be a tactical use.

Because of the much more limited intended scope of application, tactical nuclear weapons tend to also be considerably smaller than strategic nuclear weapons (though strategic nuclear bombs can also just be the same or similar bombs, delivered in clusters in order to have a larger total yield as well as being harder to defend against).  It is important to realize that this size difference can be immense.  Nuclear bombs have been developed that range in size all the way from a 10 ton bomb to a 50 megaton bomb--yes, the largest nuclear weapon ever detonated is a full 5 million times larger than the smallest one ever.  This means there is going to be a necessarily large range of possible consequences to the use of a nuclear weapon: it matters a lot how big that weapon is.

Next point of clarification:

"Radiation released" is a poor metric of the seriousness of consequence

If you look at the Wikipedia entry comparing Chernobyl and other "radioactivity releases", you will see that Chernobyl released about 400 times the amount of radioactive material into the atmosphere as the nuclear bomb dropped on Hiroshima.  So were the negative consequences of Chernobyl about 400 times worse than those of Hiroshima?  Well . . .

If you compare the loss of human life between Chernobyl and Hiroshima, on the other hand, you will see that the number of people who died immediately at Chernobyl was 28, followed by something like 14-23 more people over the next 10 years due to the radiation exposure.  Long-term increases in cancer rates due to Chernobyl are very hard to determine--there have been some wild estimates out there based off of "Linear, No-Threshold" calculations, but I don't find these worth considering.  Only regional increases in thyroid cancer, mostly in children, can be clearly linked to radiation released at Chernobyl; this accounts for about 4000 cases of a cancer with a low mortality rate, resulting in 9 deaths.  Plausible estimates for the number of people we would expect to have somewhat shortened lifespans because of high levels of radiation exposure are in the range of 4000 people--but the average decrease in lifespan is pretty low.

At Hiroshima, on the other hand, between 90,000 and 140,000 people died immediately or due to the immediate aftereffects of the bomb.  Long-term increases in cancer account for maybe about 200 known deaths and 1700 extra cases of cancer.  There have been some studies that show an average decrease in lifespan among the thousands of other people exposed to radiation from the bombing, but average decrease is again, pretty low: "a few months".

So on the whole, while Chernobyl released "400 times" the radiation that Hiroshima did, the final death toll it produced was something in the area of 50-100 people (depending on how you count the small decreased lifespan of a large number of people), whereas the death toll of Hiroshima was roughly 1000 times higher.

Let's make another comparison!  Fukushima was another nuclear accident which released a large amount of radioactive material into the environment; approximately 1/10th the amount that Chernobyl released.  If you could compare these accidents apples-to-apples on the amount of radiation released, then you would expect around 5-10 deaths to have resulted.  But, in fact, no deaths--and in fact no adverse health effects at all--are currently attributed to the radiation released by the accident, and none are expected (though a number of deaths have been attributed to the evacuation done out of fear of the released radiation).

Here's yet another comparison point.  In the '50s and '60s, many nuclear tests were carried out by the nuclear powers.  A lot of the radioactive fallout of these tests was propelled into the upper atmosphere and diffused across the entire world.  In total, nuclear testing is thought to have released between 100 and 1000 times as much radioactive material into the atmosphere as Chernobyl did.  If you could compare the consequences of a nuclear event based on radiation released, you would therefore expect the number of people that have died as a result of this released radiation to be something like 5,000 to 100,000.  But, in fact, the best calculations we have done on this show that there has been zero health effect on the general population as a result of this testing, at all, ever.

So the bottom line here is that "radiation released" is a stupid metric.  It does not correlate meaningfully with negative consequences in the historical examples.  It should never be used, because it gives the illusion of objective grounds for comparison but in fact doesn't tell you anything useful by itself.

First Conclusion

Therefore, the first conclusion of this series is that if you want to reasonably compare the seriousness of a nuclear meltdown vs. the use of a nuclear weapon, you need a better framework of comparison than just "radiation released".  We will begin to build this framework in the next post in this series.




Saturday, March 12, 2022

Key Points You Need to Know When Talking About Energy: Half-Life, Linear No-Threshold

In this installment, I want to discuss some aspects of nuclear pollution that are misunderstood.  There are two topics in particular that need to be understood to be able to properly evaluate the severity of a particular release of radioactive material.


Half-life

Definition

As I said earlier, nuclear fission happens at the atomic level because certain configurations of nuclei are more unstable than others and naturally break apart over time.  This is a random process at the atomic level, but is governed by a strong law of averages, so that once you get up to the macro level of particles that make any impact on us, the rate of decay is a well known constant depending on the material.

Half-life is the time after which any given atom of a certain isotope is 50% likely to have undergone radioactive decay.  It's called half-life because if you start with a certain amount of a radioactive isotope, after this time half of it will be left, the other half having decayed into other elements.

Because the radiation emitted by a particular isotope is caused by this same decay, the half-life of a radioactive material is also the half-life of the radiation it emits.  A radioactive material will therefore become less radioactive over time as it decays.  The formula for figuring out how much less radiation will be emitted by an isotope after a certain amount of time is fairly simple.  First, calculate how many half-life intervals will have passed in that time; then raise the fraction 1/2 to that power.  For example, if some material has a half-life of 1 year and 3 years have elapsed, then the total radiation emitted by that material will be (1/2)^3, or 1/8th, of the initial radiation.
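
As a minimal sketch of this formula in Python:

    # Fraction of the initial radiation still being emitted after some
    # elapsed time, for a material with the given half-life:

    def fraction_remaining(elapsed, half_life):
        return 0.5 ** (elapsed / half_life)

    # The example above: a half-life of 1 year, 3 years elapsed.
    print(fraction_remaining(3, 1))   # -> 0.125, i.e. 1/8th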

A practical example

In more practical terms, you might want to know how long it will be before radiation of a certain material will fall to safe levels.  I will use the Greek letter lambda, λ, as the symbol for half-life (physics texts usually write the half-life as t½ and reserve λ for the decay constant, but I'll stick with λ here).  If T is the total time elapsed, then the formula would be:

Safe Level = Initial Level * (0.5)^(T/λ)

Solving for the total time yields this formula:

Safe Level / Initial Level = (0.5)^(T/λ)

Log0.5(Safe Level / Initial Level) = T/λ

T = (Log0.5(Safe Level / Initial level)) * λ

The safety implication of this is that the length of time a particular isotope is problematic depends greatly on the half-life of this material--it is directly proportional.  And the half-life of different materials emitted by a nuclear incident varies *incredibly*.  Let's illustrate this with some real-world examples.

The Three Mile Island incident emitted most of its radioactive material in the form of radioactive Xenon, resulting in a dose on the order of 14 μSv (that's "micro Sievert") to people over a large area (estimated to affect about 2 million people).  The average daily dose of radiation the ordinary person gets just from regular background radiation is about 8.5 μSv, so this was definitely a slightly higher level of radiation than is normal.  For how long, though, were those people exposed to higher levels of radiation?

Let's assume for the sake of argument that the 14 μSv figure was the daily exposure (it wasn't, by the way, but let's go with that for now).  Let's say we wanted to know how long it took that 14 μSv to drop down to 0.1 μSv--this would make the level of increased radiation insignificant compared to average daily radiation.  Radioactive Xenon has a half life of about 12 days, so plugging this into the formula, we would get:

T = Log0.5(0.1 / 14) * 12

T = 85.5

This means that in 85.5 days, the radiation levels from Xenon released by the accident would be below a level that would cause us concern.

At Chernobyl, on the other hand, Caesium-137 was released in great quantities.  Caesium-137 has a half-life of 30 years.  If Three Mile Island had released the same amount of radioactive material, but in the form of Caesium-137 instead, the same 140-fold decrease would still have required about 7.1 half-lives--but at 30 years per half-life, that works out to roughly 78,000 days instead of 85.5: about 214 years instead of under three months.
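
Here is the same calculation as a small Python function, checking both numbers (the function just implements the solved-for-T formula above):

    import math

    # T = log_0.5(Safe Level / Initial Level) * half-life
    def time_to_level(initial, safe, half_life):
        return math.log(safe / initial, 0.5) * half_life

    # Three Mile Island xenon example: 14 uSv down to 0.1 uSv, 12-day half-life.
    print(time_to_level(14, 0.1, 12))           # -> ~85.55 days

    # The same release as Caesium-137: a 30-year half-life, in days.
    print(time_to_level(14, 0.1, 30 * 365.25))  # -> ~78,000 days (~214 years)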

Reminder about material types

This would be a good time to recollect that radiation impinging upon the body from outside doesn't do anywhere near the harm that radiation emitted by materials absorbed into your body does.  Therefore the level of danger presented by a particular radioactive material is determined by the amount of radiation it emits, its half-life, and how and to what extent that material is absorbed by the body.

As a practical example, at Chernobyl, three elements of particular concern were released:
  • Iodine-131.  It has a short half-life (only 8 days), but it collects in the thyroid when absorbed by the body and is not easily removed.  It can do permanent damage to non-regenerating tissue of the thyroid gland.
  • Strontium-90 has a long half-life (29 years), and can lead to leukemia in high doses.
  • Caesium-137 has a half-life of 30 years, and can harm the liver and spleen.
And to apply the half-life principles again: it's been 36 years since the accident, meaning that the amount of Iodine-131 emitted by the accident has halved 1644 times in the interim.  This means it has completely vanished--1.27 x 10^-495 is the actual multiplier, which is close enough to zero as makes no difference, since this number would imply that far less than a single atom of Iodine-131 is still left. 

The levels of Strontium and Caesium radiation, on the other hand, will have decreased only to about half of what they were on the day of the accident: a welcome decrease, but not nearly enough so that we can stop worrying about it.
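
If you try to check the iodine figure naively, 0.5 raised to the 1644th power underflows ordinary floating point straight to zero, so it's easier to work in logarithms; a quick sketch:

    import math

    halvings = 36 * 365.25 / 8           # -> ~1643.6 half-lives of Iodine-131
    print(halvings * math.log10(0.5))    # -> ~-494.8, so the remaining
                                         #    fraction is on the order of 10^-495

    # Strontium-90 and Caesium-137 after the same 36 years:
    print(0.5 ** (36 / 29))              # -> ~0.42 of the Sr-90 remains
    print(0.5 ** (36 / 30))              # -> ~0.44 of the Cs-137 remains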

The Linear No-Threshold Model (LNT)

The problem of evaluating "widely dispersed but thinly spread" harm.

Given that some radioactive waste has a long half-life, we have to be concerned that it can spread around the world and impact a large number of people before it decays into safety.  For example, a lot of people were concerned about the amount of Caesium-137 that was released into the open ocean from the Fukushima incident.

As material disperses, however, it also thins out considerably.  So we have to be concerned about many people getting tiny amounts of exposure to radioactive material.  How big of a problem is this?

The honest scientific answer is, surprisingly, we don't really know.  This is one of those areas in which laymen often are surprised at the lack of definitive answers coming from the scientific community, because it seems like a simple and obvious question and it doesn't seem as if the answer should be too difficult to find out.  But this is simply not the case.

Why don't we know?


This lack of knowledge is perhaps not so surprising if you think through the practicalities of how science works.  Scientific knowledge is primarily advanced through experimentation.  If you have a question you want answered, you design an experiment that replicates the conditions you want answers about, plus a control that is just like your experiment but without those conditions, all with the goal of comparing the results against the control and seeing what the test conditions did.

This fundamentally makes coming to a scientific understanding of how harmful certain things are to humans very difficult to do, because you are not ethically allowed to perform an experiment in which you expect any harm to come to your test subjects.

You can try to do experiments on lab animals, but it is never a safe bet to extrapolate how things affect a lab rat onto how those same things will affect humans.  Experiments on lab animals can only be a preliminary for experiments on humans--this we know from long experience.  The saying among experimental health science folks is that mice lie and monkeys exaggerate.

So what do scientists do, when trying to ascertain the health risks of low-dose radiation?  This is where the term "Linear No-Threshold" (abbreviated LNT) comes in, and it's controversial.

How we fudge an answer anyway.


What has been done is to take known instances of radiation exposure and put them on a graph, with the amount of exposure on the horizontal axis and the imputed harm (in terms of likelihood of death) on the vertical axis.  Since scientists can't ethically create these circumstances, we have to rely on outcomes of known nuclear accidents for the data.  This is a pretty limited set of events; you can get a pretty complete summary of them all here: Nuclear and Radiation Accidents and Incidents.

Then, we plot the outcomes of these events for the people exposed, based on how much radiation they got.  You normalize severe illness against deaths in some fashion: for example, you try to guess how many years were taken off the total likely lifespan of someone who got cancer and died some years after exposure, and then convert that to some fraction of a death.  This is not an exact science!  There are several places where you need to insert some common sense rules-of-thumb.

Then, after you have put all these instances of known exposure levels and outcomes on a graph, you draw the best straight line through these data points that terminates at the "zero exposure, zero danger" point at the origin.

The straight-line nature of the projection is why this model for radiation harm is called "Linear", and the fact that the projected harm only goes to zero when radiation goes to zero is the "No-Threshold" part.
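
As a sketch of the construction, here is a through-origin straight-line fit in Python; the data points are invented purely for illustration and come from no real study:

    # Fit a straight line through the origin to (dose, harm) points,
    # LNT-style.  For a through-origin least-squares fit the slope is
    # sum(x*y) / sum(x*x).

    doses = [0.5, 1.0, 2.0, 4.0]      # exposures, arbitrary units
    harms = [0.02, 0.05, 0.09, 0.21]  # imputed harm, fractions of a death

    slope = sum(d * h for d, h in zip(doses, harms)) / sum(d * d for d in doses)
    print(f"projected harm per unit dose: {slope:.4f}")    # -> 0.0508

    # The "no threshold" part: any nonzero dose projects nonzero harm.
    print(f"harm projected at a trace dose of 0.001: {slope * 0.001:.2e}")
    # -> 5.08e-05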

This model is quite controversial, and in fact, almost certainly wrong.  I don't think anyone serious believes that this model accurately conveys the actual amount of harm that low levels of radiation causes.  From what I've heard, even people who champion this model of guessing at the harm caused by low levels of excess radiation invoke the "precautionary principle" in order to do so, meaning that they think we should assume the maximum possible harm coming from some situation if it is an unknown.

There are multiple academic papers out there arguing against the LNT model, one of which I will link to here: It Is Time to Move Beyond the Linear No-Threshold Theory for Low-Dose Radiation Protection.  I am going to add on some of my own reasoning against the LNT model here:

  1. Nothing in nature that we know of acts in this way.  For every dangerous material that we know of, there is always *some* threshold at which it becomes harmless.  Arsenic, for example, is a very deadly poison.  It is also present in every single glass of water you drink, without exception--in trace amounts. The saying in the medical world is, "the dose makes the poison".  A low enough dosage doesn't mean "just a little bit poisonous", it means "not poisonous at all".

    Indeed, there are all sorts of things which, if graphed on such a chart as the one above, would be roughly U-shaped.  Vitamin D, for example, is poisonous at high doses, and in high enough doses can kill you pretty quickly.  But at a certain level, it becomes actually beneficial for the human body, meaning on a chart such as the one above, it would curve below zero on the "harm" scale.  Then if Vitamin D levels get too low, the fact that you are missing out on Vitamin D would curve the "harm" back up into the positive range. 

    U-shaped curves are much more common in nature when it comes to the right amount of something to have.  Consequently, for very low levels of radiation, it is more likely that the harm produced is either literally zero or else actually negative.

  2. The LNT is abused by people to exaggerate the impact of nuclear accidents.  I have seen this done with Chernobyl, Three Mile Island, and Fukushima.  If you oppose nuclear energy and you want to exaggerate the negative impacts of these accidents, you can take advantage of the fact that modern radiation detection is incredibly sensitive.  We can detect trace radiation from even vanishingly small particles of matter, even down to the individual atoms.

    Consequently, it is a certainty that some amount of technically detectable radioactive material from at least Chernobyl and Fukushima (I'm not sure about Three Mile Island, given the shorter half-lives of the released materials) has gotten into every human on the planet.  The wind constantly blows and the seas constantly move, so eventually these things find their way literally everywhere on the planet.

    What some people have done, therefore, is take that extremely low level of radiation from the world-wide dispersion of these events, then look up the projected harm from the LNT graph of radiation effects.  This will be a very low number, but they will then multiply it by 6 billion people in order to get a total death toll from Chernobyl or Fukushima.  In both cases, if you do this, you end up with a number significantly larger than the official death tolls of either event.  These numbers are not justified; they wildly overstate the probable impact, as the sketch after this list illustrates.
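
Here is that move in arithmetic form; the per-person risk below is an invented placeholder, not a number from any actual LNT table:

    # Multiply a speculative, tiny per-person risk by the whole planet:
    lnt_projected_risk = 1e-6        # hypothetical "deaths per person" at a
                                     # trace, worldwide-dispersion dose
    world_population = 6_000_000_000

    print(lnt_projected_risk * world_population)   # -> 6000.0 "deaths"

    # The result is driven entirely by the population factor: if the true
    # risk at trace doses is zero, the correct answer is zero, not 6000.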

Conclusion

Not all nuclear accidents are the same.  If you want to understand the severity of a nuclear event, you need to know more details than just "there was a meltdown" or "nuclear materials were released".  You need to know what materials were released, and you need to know in what way they were dispersed.  You also need to be aware that the severity of total harm to humanity from some of these terrible accidents has been greatly exaggerated, and that the Linear No-Threshold model is largely to blame for that. 





Tuesday, March 8, 2022

Key Points You Need to Know When Talking About Energy: Neutron Radiation, Fissile Fuel, and Fusion

I want to build off of the previous post by discussing some key elements of nuclear energy related to one type of radiation: neutron radiation.  Neutron radiation is the key type of radiation that makes nuclear energy possible, and it's good to know how.  I'm then going to discuss two ways in which this understanding impacts the safety of nuclear power plants, and I will end with a brief discursion into some problems with theoretical future nuclear fusion plants.

Fissile Fuel

What it is

I said earlier that neutron radiation is able to penetrate into materials because it ignores the electron cloud around atoms; it only stops if it hits an atom's nucleus, and when it does it can either bounce off, be absorbed into the nucleus (thus changing the atom into a heavier, usually unstable isotope of the same element), or split the nucleus up--nuclear fission.  I also said that neutron radiation is created by the breakup of nuclei into components, which releases a mix of nuclear output including energetic neutrons.

So you can see that neutron radiation both causes fission and is caused by fission.  This is the key fact that makes nuclear energy possible, because if you balance things just right, you can create a self sustaining reaction, where your fuel is undergoing fission continuously, as some atoms break apart, releasing neutrons which cause neighboring atoms to break apart, and so on and so forth.

Conceptually, creating a self-sustaining nuclear fission reaction is very simple.  All you have to do is find enough naturally occurring unstable material that spontaneously decays at a rapid enough rate and bring it physically together.  The physical proximity of unstable material in a dense enough arrangement is enough to fire off a self-sustaining reaction.

How it's made

Originally

Getting this material is difficult because it doesn't exist in dense enough arrangements in nature for a self-sustaining reaction to occur--which should be obvious, if you think about it, because if such a reaction *did* occur in nature, it would burn itself out in short order and cease to exist.

In order to get enough material that can cause a self-sustaining reaction, you have to take advantage of the fact that the fissile isotope of Uranium (U-235) weighs slightly less than the far more common isotope (U-238)--because it has fewer neutrons per atom.  So, theoretically, it's quite simple to separate the two: convert mined Uranium into a gas and put it in a high-speed centrifuge.  If you spin it fast enough and for long enough, the heavier U-238 will migrate to the outside, and the slightly lighter, fissile U-235 can be drawn off from the center.

In practice, this is much more easily said than done, and the engineering know-how in order to do this sort of refinement properly is a critical "controlled" secret that we try to keep from being common knowledge.  So if you hear about negotiations with Iran or some other state that has ambitions to become a nuclear power and you hear about "centrifuges", this is what is being discussed.

An important distinction to keep in mind here is that the density of neutron active material that is required to run a nuclear power plant is not as great as the density required to get fuel that will explode in a nuclear bomb.  Getting the explosive chain reaction is a step above the difficulty of getting a self-sustaining chain reaction.  However, the technological step you need to get weapons-grade material is *not that high* above the step you need to get power-plant grade material.

In a power plant

Another way in which fissile fuel can be manufactured, however, is in a nuclear power plant (if designed just right).  Because these things generate constant neutron radiation and because atoms hit with a neutron sometimes absorb the neutron rather than splitting, nuclear power plants can be configured in such a way as to generate unstable isotopes that can be used for nuclear fuel (or nuclear weapons).  Such things are called "breeder plants", and they do have to be specially designed to work in this way: you can't just take any old nuclear power plant, roast your Uranium over the nuclear fire, and get weapons-grade Plutonium out.

However, it must be said that from a distance, it is difficult to tell the difference between a breeder plant that is creating more fuel for nuclear power and one that is creating material suitable for a nuclear bomb.  Hence, there is a lot of concern when a state with nuclear ambitions says they are just looking to get into nuclear power plants.  I'll be getting into this in more detail later, but for now you have to realize that this is a concern.

Some safety considerations based on the nature of fissile fuel

Reactor core composition and types of core failures

Nuclear cores are made so that their effective neutron density--that is, how much fissile material is being exposed to neutrons at any given moment and hence how much fission is happening at that moment--is controllable.  This happens in several key ways (warning: painful oversimplification follows!):
  1. The physical configuration of the core.  Most nuclear cores have control rods, which are made of a material that strongly absorbs neutrons (boron and cadmium are common choices) and thereby impedes the chain reaction.  In a typical configuration, the rods are arranged over the core, which has holes to receive the rods.  If the rods are completely removed, the core has enough reactive material density to "go critical" and have a self-sustaining reaction.  If the rods are inserted, however, the reaction slows down.  If they are inserted all the way, the reaction is no longer self-sustaining and will die off.  You can control the power output of the reactor core by controlling the height of the control rods.
  2. Liquid coolant / moderator.  In addition to the control rods, most reactor designs also have a liquid coolant constantly flowing around the core.  This serves, at the minimum, the purpose of cooling down the core so that it doesn't physically melt, as well as carrying away the heat to be turned into electricity.  Depending on the design, this liquid will often also serve as the "moderator": the material that slows neutrons down to the speeds at which they most readily cause further fission, and without which the reaction cannot sustain itself.

Every core meltdown, therefore, depends on something disturbing the necessary balance between reactive material density, the neutron-absorbing and moderating materials, and the coolant flowing through the core.  So, for example, if for some reason you have a failure in the mechanism that inserts the control rods into the core, the reaction can be stuck in "on" mode and start heating up out of control.

Active vs. Passive

When evaluating the possible outcomes of a failure with a nuclear core, you have to look at the possible ways in which the mechanisms that maintain this balance can fail.  The key distinction here is: are these mechanisms active (meaning some process needs to happen in order for the safety response to be applied) or passive (meaning the safety response is triggered automatically, by physics, when needed)?

Passive is always to be preferred to active, when possible.

Uncontrolled Reaction Spike vs. Decay Heat Meltdown

Different reactor types have different safety profiles because of having different active vs. passive safety mechanisms protecting against different things.  Most people lump together all "meltdown" events they hear about into the same category, but there are massive differences.

The nuclear power plant in Chernobyl was of a type called "graphite moderated", and it did not really have any passive safety mechanisms at all.  In particular, there was not a good way to keep the core from undergoing an uncontrolled cascading reaction without the rods in specific configurations.  There was a portion of the rods' range of motion which actually spiked the reactivity of the core, because the graphite tips of the rods displaced cooling water that had been absorbing neutrons.  This was known, and procedures for avoiding this spike had been figured out at another plant, but the procedures had not been transferred to Chernobyl.  This failure, compounded with others, led to an uncontrolled reaction spike, or (simply put) an explosion.  The nuclear reaction got so out of control at a particular spot in the core that it actually blew the reactor apart.

Other reactor designs do not have this same instability.  Pressurized Water Reactors (PWR) and Boiling Water Reactors (BWR) both have a passive safety feature that prevents this sort of extreme power excursion.  For these reactors, the water is both coolant and moderator, and is essential to the operation of the core.  If the temperature spikes, steam bubbles form in the water immediately; steam is a far less effective moderator than liquid water, so fewer neutrons are slowed to fission-friendly speeds and the reaction dampens itself.  This happens via physics and not human procedures.  Consequently, these reactor designs are quite unlikely to explode due to some core malfunction.
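
As a toy illustration of that feedback loop (a cartoon, not reactor physics--every coefficient here is made up):

    # Power rises -> steam voids form -> moderation drops -> reactivity
    # falls -> power levels off instead of running away.

    power = 1.0                       # relative power, 1.0 = steady state
    excess_reactivity = 0.10          # imagine a disturbance adds this

    for step in range(6):
        voids = 0.3 * (power - 1.0)               # hotter -> more voids
        reactivity = excess_reactivity - voids    # voids subtract reactivity
        power *= 1.0 + reactivity
        print(f"step {step}: power = {power:.3f}")

    # Output: 1.100, 1.177, 1.232, 1.270, 1.294, 1.309, ... levelling off
    # rather than growing exponentially.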

On the other hand, this does not remove the possibility of a meltdown.  The reason for this is that even after the self-sustaining nuclear reaction has been halted by the insertion of control rods, the nuclear core is still extremely hot.  This is called the decay heat of the reactor--the heat it generates as it gradually cools off.  If you want to prevent the core from melting down, you must find a way for it to deal with *this* heat even after the core has been "shut down".  In older reactors, even after a core has been shut down it must be *actively* cooled for a substantial time before the reactor is cool enough to be safe.

If a reactor loses cooling for long enough, even if it is "shut down", it can melt and begin to damage the containment vessel in which it is held.  This is how radioactive material was released both at the PWR Three Mile Island power plant and in the BWR Fukushima disaster.  In the case of Three Mile Island, coolant was lost for too long (due to a comedy of errors and accidents), leading to the reactor automatically "tripping" and shutting down the core.  Coolant was *still* not being supplied after the automatic shutdown, however, so the core partially melted inside its pressure vessel, leading to hydrogen explosions and the release of small amounts of radioactive gases.

In the case of Fukushima, the reactors were shut down on purpose as standard practice because of the earthquake.  However, when the tsunami hit, all of the generators powering the coolant pumps became submerged and stopped functioning.  The shut-down reactors therefore melted, and one of them even damaged its containment vessel sufficiently to cause some leakage of radioactive material dissolved in water.

The difference between Chernobyl and the two other accidents was thus the difference between a core explosion and a core meltdown.  This made a dramatic difference in the amount, type, and dispersion of radioactive material released.

The similarity in all three of those incidents, however, was that in each case it was active systems that failed to maintain reactor safety: active moderation of the core in the case of Chernobyl and active cooling of a shutdown core in the case of Three Mile Island and Fukushima.  Consequently, the focus of nuclear reactor design for a while has been to design systems that are passively safe throughout.  I will go into more detail on this in a future post.

For the sake of current events, we should know that the nuclear power plant recently seized by the Russians in Zaporizhzhia is a PWR plant, and consequently has passive protection against runaway reactor scenarios.  Therefore, this plant is unlikely to experience an explosive core failure of the type that Chernobyl did, even in the case of complete loss of coolant due to equipment destruction.  However, a meltdown and some leakage of radiation is certainly possible.

Spent fuel rods

One more safety consideration presents itself based on what I have just discussed.  That is, even after reactor fuel is no longer being used to perpetuate a fission reaction, it is still quite radioactive with the remaining "decay heat".  This implies that after fuel can no longer be used in a reactor, it cannot be instantly disposed of.  In fact, it takes about three years before spent fuel can cool down sufficiently to be transported for disposal.

How nuclear plants deal with this is something called "Spent Fuel Pools".  The spent fuel is just stored in big pools on-site for long enough for the decay heat to die down.

The safety consideration here is that most functional nuclear power plants have radioactive material just sitting around on-site, cooling off in pools.  The problem is made worse by the fact that the original intention for all of this spent fuel was to transport it to long-term storage facilities.  However, for many decades now, people have actively resisted the creation of these facilities in their own backyards.  Constant legal battles and political unpopularity have dramatically limited the construction of suitable long-term storage for spent fuel.  This means that many nuclear power plants are storing spent fuel on-site for years and years longer than anyone originally anticipated.

The safety implications are fairly large.  There isn't a lot of potential for this spent fuel to accidentally leak, though that was a concern at Fukushima because of the tsunami.  However, the potential for misuse by terrorists or bad state actors is quite high.  This material is perfect for the creation of a "dirty bomb", which is just radioactive material that you pack with conventional explosives in order to create radioactive fallout without the nuclear explosion.

Again for the sake of current events, the Zaporizhzhia nuclear power plant has six cooling pools with hundreds of tons of spent fuel of varying degrees of radioactivity, now currently controlled by the invading Russian army.

A quick discursion on fusion power

This is a bit out-of-the-way, but since we are talking about neutron radiation and nuclear safety, I'm going to take the opportunity here to briefly discuss nuclear fusion.  There has been some news recently about new milestones achieved on the road to energy-positive fusion reactions that *might* cause some people to become hopeful that fusion is coming as a clean alternative to nuclear fission before too long.

I am here to dash those hopes now.

Long-term, I hope that fusion does eventually come through.  However, recent small milestones notwithstanding, there are more barriers for fusion to overcome than most people realize.  One key barrier has to do with neutron radiation.

Fusion is called "clean" with respect to fission because it does not rely on actinides--the heavy, radioactive metals that are the fuel for nuclear fission.  Instead it fuses isotopes of lighter elements into heavier elements.  Although this process does not use or create the same nasty materials that fission uses and/or creates, it does create neutrons: a lot of neutrons.  In fact, a fusion power plant will produce more neutron radiation, and higher energy neutron radiation than a comparable fission power plant.

As we learned earlier, this *will* make things exposed to it also radioactive.  But we also learned that this type of radioactivity is temporary and not of the same long-term danger as fission by-products.  So is the neutron radiation produced by a fusion plant a problem?  Yes, for at least two reasons.

Cost

A fission reaction is comparatively simple to create--all you need is a lump of the right material at the right density, and the reaction will create itself.  Consequently, you have a lot of options for how you can design a fission core and still have it work--you can shape it all sorts of ways, dunk it in all sorts of moderating fluids, etc.

Fusion, on the other hand, is incredibly difficult to achieve.  To achieve fusion, you need incredibly precise conditions in an incredibly controlled magnetic field and hard vacuum--no moderating liquids allowed in there!  The world's leading fusion reactor, ITER, is by some counts the most expensive scientific instrument ever created.  Should it ever achieve constant operation, all of that expensive equipment would fairly quickly fry from the radiation that it is creating.  (That's why it's not ITER, but its planned successor, that might one day run continuously.)

The problem of continuous operation of fusion has not *at all* been solved--really, fusion scientists would just be happy to be in a position to worry about that problem, since only tiny bursts of power output have yet been achieved.  This is a huge problem, and all proposals to deal with it so far have been enormously complex and expensive.  Getting fusion *practical* might be a further 30-year problem after we manage to make it merely energy positive.

Proliferation


The other issue with plants that generate neutron radiation is that neutron radiation is what is necessary to breed nuclear fuel.  Since fusion plants will generate neutron radiation in abundance, does this mean they could also be used to create weapons-grade Plutonium out of mined Uranium?  

Yes.  It absolutely does.  Fusion plants will be able to work as breeder plants *very* well.

This means that although fusion plants, unlike fission plants, will not need any of the materials that go into nuclear weapons, they nonetheless represent a nuclear weapons proliferation challenge.

The bottom line for fusion is that you should pin no hopes on it being a viable part of the world's energy mix any time soon, and even if and when it does become viable, we will still need to worry about it enabling nuclear weaponry.

Friday, March 4, 2022

Key Points You Need to Know When Talking About Energy: Radiation

Nuclear power plants are part of our current mix of energy sources.  Some people want nuclear power to go away; some people want it to increase.  The critical factor in nuclear power is safety, so it is important to understand its dangers.  I'd first like to go over some fundamental facts that are sometimes poorly understood.

Radiation vs. Radioactivity

I think the most fundamental misunderstanding a lot of people have is this distinction, so let's clarify it first.

Radiation

Radiation is any sort of energy that comes out from a source and travels through space.  "Radiation" is a very generic term and covers several different types of physical phenomena.  You can sub-divide radiation based on two primary things: *what* is radiating out from the source carrying this energy and *how the radiation interacts with materials with which it comes into contact*.

What

The things that can be radiated out are:
  1. Photons.  Photon radiation is just light, really, except that we reserve the term "light" for photon radiation that happens to be of a frequency that is visible to our eyes.  There's no fundamental difference, though, between photon radiation of all sorts, except for its frequency.  Photon radiation is called, in ascending order of frequency: radio, microwave, infrared, visible light, ultraviolet, x-ray, gamma.  These are all fundamentally the same thing, though the interaction of photon radiation with various materials can vary greatly based on the frequency.

    Another thing to be aware of is that the energy carried per photon goes up as the frequency goes up.  That is, a gamma-ray photon is very much more energetic than a visible-light photon.  (The short calculation after this list puts rough numbers on this.)

  2. Subatomic particles.  These types of radiation are collectively called "particulate" radiation.  Different types of subatomic particles have different behaviors as radiation and are called different things:
    1. Alpha radiation: positively charged particles.  This is normally in the form of two protons and two neutrons bound together--a helium nucleus.
    2. Beta radiation: negatively charged particles--electrons.  (Some decays instead emit positrons, the positively charged antiparticles of electrons.)
    3. Neutron radiation: neutral particles.  These are usually neutrons, though technically other neutral particles such as neutrinos would count here also.
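
To put rough numbers on the frequency-energy relationship (energy per photon = Planck's constant times frequency, E = h x f), here is a minimal Python sketch; the frequencies are representative order-of-magnitude values:

    # Photon energy E = h * f, converted to electron-volts for readability.
    PLANCK = 6.626e-34        # Joule-seconds
    EV = 1.602e-19            # Joules per electron-volt

    examples = {
        "radio (100 MHz)":  1e8,
        "green light":      5.6e14,
        "x-ray":            3e18,
        "gamma ray":        3e20,
    }

    for name, freq_hz in examples.items():
        energy_ev = PLANCK * freq_hz / EV
        print(f"{name:16s} {energy_ev:12.3g} eV per photon")

A gamma-ray photon at these frequencies carries hundreds of thousands of times the energy of a visible-light photon.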

Interaction

There are two main considerations here:
  1. Ionization.  The primary way radiation is distinguished here is to separate it into "ionizing" and "non-ionizing" radiation.  An "ion" is an atom with a net electrical charge--for our purposes here, usually an atom missing one or more electrons.  "Ionizing" therefore means "able to knock electrons off of atoms and molecules".  Whether or not radiation can do this is a key distinction, because all chemical interactions involve relationships of atoms to each other at the layer of the electron shell.  Radiation that can create ions can therefore cause chemical reactions at a cellular level and therefore damage tissue chemically.

    In contrast, getting too much infrared, microwave or visible light can damage you by conveying too much heat.  It cooks your tissue, rather than chemically changing it.  This can be just as damaging--depending on the amount of heat we're talking about--but it is a different *type* of damage.

    In the spectrum of photon radiation, x-rays and gamma rays are ionizing, but lower frequencies are usually not.  Alpha and Beta radiation are definitely ionizing (being charged particle radiation, they definitely interact with the electron layer of atoms).  Neutron radiation is also considered "ionizing radiation", but it actually is only so in an indirect way, which I will discuss a bit later.

  2. Penetration.  The different types of radiation penetrate into a body very differently.  Because Alpha and Beta radiation are both charged particle radiation, they interact very quickly with physical objects and cannot penetrate very far into anything.  Beta radiation penetrates a bit more deeply than Alpha radiation (it can penetrate a little into the skin), but not very far, and both are stopped by even normal clothing.

    On the other hand, Neutron radiation passes right through most normal material.  Neutrons do not react to electrons at all and pass right through the electron layers of atoms as if they weren't there--and the electron cloud is by far the bulk of the volume of an atom, the nucleus being only a tiny mass in the center.  Neutrons passing through your body, then, only interact with you at all if they happen to collide with an atom nucleus as they are travelling.

    This can cause damage, however, and if there is enough neutron radiation, it can definitely be fatal.  This is the type of radiation that must be stopped by thick layers of shielding.  Counterintuitively, dense metals are not the best tool for this: a neutron sheds its energy fastest when it collides with a nucleus of roughly its own mass, so hydrogen-rich materials like water, concrete, or polyethylene make the best neutron shields.  (Lead, with its very large, high-atomic-number nucleus, is instead the classic shield against gamma rays.)

    For photon radiation, the higher the frequency of the light (in general) the more it is able to penetrate into and through normal objects--this is why we use x-rays to look inside of things.  Gamma rays are more penetrating still.
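
For a rough feel for gamma shielding, attenuation follows a simple halving rule: each "half-value layer" of material cuts the intensity in half.  A minimal Python sketch, using an illustrative half-value thickness of about 0.6 cm of lead for ~662 keV gamma rays (treat that figure as approximate):

    # Exponential attenuation expressed via half-value layers:
    #   transmitted fraction = 0.5 ** (thickness / half_value_layer)
    HVL_LEAD_CM = 0.6  # approximate half-value layer of lead, ~662 keV gammas

    for thickness_cm in (0.6, 1.2, 3.0, 6.0):
        fraction = 0.5 ** (thickness_cm / HVL_LEAD_CM)
        print(f"{thickness_cm:4.1f} cm of lead -> {fraction:.4f} of gammas transmitted")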

Radioactive decay and Radioactivity

Now here let's deal with the primary point on which people are confused about radiation.  Radiation is energy radiating out from a source.  "Radioactivity" means some property of a material that causes it to release radiation, and this usually results from radioactive decay.  Usually--but not always!  By these definitions, an x-ray machine is technically "radioactive" when it is turned on but not when it is turned off.  The same, technically, for a light-bulb, which would be "radioactive" when on but not (much) when off.  Be aware that there is some fuzziness to the usage of this term, because it's *normally* used just for things that are radioactive because of radioactive decay.

Ok, so now: what's radioactive decay?

Radioactive Decay

Atoms are composed of a nucleus surrounded by an electron cloud.  A nucleus is a clump of protons and neutrons that naturally want to fly apart, but are nevertheless bound together by the incredibly strong (and appropriately named) "strong force".  Nuclei are inherently unstable, and nearly all of them will eventually break apart on their own.  *How* unstable a particular nucleus is depends on its size and structure.  Generally speaking, the larger the nucleus, the less stable it is.  This is not a simple relationship, however, and there are lots of exceptions and peaks and valleys of instability as you look across the table of elements.  In general, though, it is true that larger elements tend to break apart into smaller elements.

Perhaps more important than the overall size of the nucleus, though, is its composition--specifically, the ratio of neutrons to protons, which greatly influences how stable a particular nucleus is.  The number of neutrons in a nucleus determines its isotope: two atoms with the same number of protons but a different number of neutrons are different isotopes of the same element--chemically the same (because the electron layer is the same), but different at the nuclear level, and often of very different stability.

Radioactive decay is what happens when a nucleus breaks apart.  An atom of a heavier element will transform into one or more atoms of lighter elements, at the same time releasing energy in the form of a mix of Alpha, Beta, Gamma and Neutron radiation.  Which elements are formed by the decay, and what the exact mix of radiation is, depends on the isotope that is decaying and the form of the breakup--normally, even for a given isotope, there are multiple "failure modes" of the nucleus and therefore multiple possible by-products of its breakup.  These happen at predictable percentages, though, so we can characterize the total radiation by-product of a given isotope as the weighted average of these different nuclear "failure modes".
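
To make that "weighted average of failure modes" idea concrete, here is a minimal Python sketch.  The decay channels and percentages below are invented for illustration--they are not real nuclear data:

    # Toy model: an isotope with several decay channels ("failure modes"),
    # each occurring with a fixed probability and emitting a characteristic
    # mix of radiation.  All numbers here are invented for illustration.
    decay_modes = [
        # (probability, emissions per decay)
        (0.70, {"alpha": 1, "gamma": 1}),
        (0.25, {"beta": 1, "gamma": 2}),
        (0.05, {"neutron": 1}),
    ]

    # Expected emissions per decay = probability-weighted sum over channels.
    expected = {}
    for prob, emissions in decay_modes:
        for kind, count in emissions.items():
            expected[kind] = expected.get(kind, 0.0) + prob * count

    for kind, count in sorted(expected.items()):
        print(f"{kind:8s} {count:.2f} emitted per decay, on average")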

All elements have unstable isotopes, and virtually all materials contain at least traces of them, so essentially everything is at least faintly radioactive.  However, some elements and isotopes decay much more frequently than others, and therefore we typically reserve the word "radioactive" for elements and isotopes that decay rapidly and therefore emit radiation at a rate that concerns us.

But do keep in mind that this is a distinction of degree and not of kind.  Radioactivity is an inherent property of ordinary matter.  Potassium, for example, is high up on the scale of radioactivity among very ordinary elements (a small fraction of natural potassium is the unstable isotope potassium-40), and because bananas have high levels of potassium you will get a reaction out of a Geiger counter if you hold it up next to a banana.

Nuclear technology, including both nuclear power plants and nuclear weapons, makes use of materials with very, very high rates of radioactive decay, and therefore necessarily involves materials which are constantly emitting high levels of radiation.

How radioactivity and radiation are harmful in different ways

So now we are equipped to understand the fundamental ways in which radioactive material is harmful in a different way from simple radiation.  Radiation, we saw, can be harmful in that it damages cells on a chemical level by ionization.  If you get a large dose of radiation, this can cause a lot of damage to your body, possibly permanent and possibly lethal.  However, such damage is a one-time event and it does not perpetuate in your body.

The situation is different if, for some reason, you inhale or ingest the dust of some radioactive material.  We said that Alpha and Beta radiation don't penetrate far into the body, being mostly stopped by even just skin.  However, if you happen to inhale or ingest particles of some radioactive element, those particles will be continuously decaying and producing continuous Alpha and Beta radiation *inside your body*.  This causes a lot more damage, because the internal tissues of the body are not built to withstand radiation the way the skin is.

Furthermore, the most radioactive materials are, as I said, the heavy elements--they're called "the actinides" on the periodic table.  A major health problem with these heavy metals is that *the body does not have a good way of eliminating them*.  This is why lead poisoning is a problem, by the way, even though lead is pretty inert chemically: the body doesn't have a good way of eliminating lead from the system, and so any lead dust you ingest or inhale tends to slosh around inside your system forever and clog things up.

So this is why the real long-term horror of a nuclear weapon or a catastrophic nuclear power plant failure is not the explosion, but the fallout.  It is the spread of radioactive dust that gets into everything and cannot be easily purged once it finds its way into your body.

How radioactivity "spreads"

This leads to the final confusion between radiation and radioactivity I want to cover, which is how radioactivity spreads.  I think everyone is dimly aware that radioactivity "catches" somehow--that if you bring something into contact with a source of radiation, for some reason the thing "infected" with the radiation will itself become radioactive.

This is true for both radioactive things *and* for a very specific type of radiation, but for very different reasons.

For radioactivity, the way radioactivity spreads is that radioactive particles get into things.  So if you have some plutonium dust, and it gets on something, that item will have plutonium dust on it.  It will therefore emit radiation because of the plutonium dust and will therefore "be" radioactive.  If you are able to wash all of the plutonium dust off of this thing, then it will no longer be radioactive--the radioactivity is strictly contained in the plutonium dust. 

The difficulty is with being able to separate out the radioactive particles from the non-radioactive ones; there are always ways to do this, but many of them are not feasible at a large scale.  If you can't get the radioactive particles out, then you are stuck with the radiation for as long as the half-life of the radioactive material dictates.

Radiation itself does not cause other things to be radioactive, with the sole exception of Neutron radiation.  We said before that neutron radiation interacts with materials by impacting into the nucleus of the atom, because it doesn't interact with the electron cloud.  When it does this, it either bounces off, splits up the atom there and then, *or* merges with the nucleus, causing the atom to become a new isotope of the original element.

This last scenario is why neutron radiation causes things to become radioactive.  When neutron radiation is absorbed by an exposed material, the isotope that is created is usually not stable at all.  The absorption will only "take" for a little while, and then the element will decay in a bang of Alpha, Beta and Gamma radiation.  And *this* is why neutron radiation is considered ionizing radiation, because the primary by-product of the nuclear decay that it causes is other forms of ionizing radiation.

There is an important distinction to understand here, though.  The nuclear material that comprises a nuclear bomb or a nuclear power plant is almost all actinide--those heavy metals that release tons of radiation and may take millennia to decay to the point of safety.  If some ordinary material is made radioactive by neutron radiation, it becomes temporarily radioactive, but it does *not* become an actinide.  This is why something that is radioactive because of exposure to neutron radiation is usually so for a matter of days or weeks, not millennia.  Radioactive material of this kind poses a radically different (and lesser) threat than other types of nuclear waste.
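
To see how different "days or weeks" is from "millennia," here is a minimal half-life sketch in Python.  Sodium-24 (half-life of roughly 15 hours) is a classic neutron-activation product, and plutonium-239 has a half-life of roughly 24,000 years; those are standard reference figures, but treat the sketch itself as purely illustrative:

    # Fraction of a radioactive substance remaining after time t:
    #   remaining = 0.5 ** (t / half_life)
    HOUR = 1.0
    YEAR = 24.0 * 365.25  # hours per year

    samples = {
        "sodium-24 (activation product)": 15 * HOUR,      # ~15-hour half-life
        "plutonium-239 (actinide)":       24_000 * YEAR,  # ~24,000-year half-life
    }

    one_week = 7 * 24 * HOUR
    for name, half_life in samples.items():
        remaining = 0.5 ** (one_week / half_life)
        print(f"{name:32s} {remaining:.6f} of sample remaining after one week")

After one week, almost nothing is left of the activation product, while the plutonium is effectively untouched--and will stay that way for thousands of years.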

And remember: this type of radioactivity "spread" is specific to neutron radiation *only*.  You can expose an object to x-rays or gamma radiation for as long as you want and you will *never* make that object radioactive, no matter how much damage you cause otherwise.

Wednesday, March 2, 2022

Key Points You Need to Know When Talking About Energy: Energy Density

One of the main uses of hydrocarbon fuels is transportation.  If we want to eliminate the burning of these fuels as a way of generating energy, we can't just replace hydrocarbon burning power plants with other power plants; we would also need to replace hydrocarbon burning vehicles with other vehicles.

A key reason this is more difficult than it might seem is energy density.

Energy Density


Energy density is, simply, the amount of energy that can be extracted from a given unit of material.  You can talk about either energy density per volume or energy density per weight, but energy density per weight is more frequently used.

Energy density can also be reported in different units.  Batteries are typically described in terms of Watt-hours per kilogram, whereas things like gasoline are described in terms of Joules per kilogram.  Both are units of energy and are directly convertible (1 Watt-hour = 3600 Joules), but Watt-hours are conventionally used for electricity and Joules for heat.

Some reference energy densities

Batteries

The most energy dense batteries out there are Lithium Ion batteries, which range from 100 to 265 Wh/kg.  Larger batteries tend to be less energy dense, as the larger the battery array is, the more weight is taken up by non-energy storing components like compartmentalization and wiring.  Batteries in current Tesla cars are about 165 Wh/kg.  This works out to be about 0.6 Megajoules per kilogram (MJ/kg).
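
As a quick sanity check on that conversion (1 Wh = 3600 J, so Wh/kg x 3600 / 1,000,000 gives MJ/kg), here is a minimal Python sketch using the battery figures above:

    # Convert battery energy density from Wh/kg to MJ/kg: 1 Wh = 3600 J.
    def wh_per_kg_to_mj_per_kg(wh_per_kg):
        return wh_per_kg * 3600 / 1_000_000

    for wh in (100, 165, 265):
        print(f"{wh} Wh/kg = {wh_per_kg_to_mj_per_kg(wh):.2f} MJ/kg")

The 165 Wh/kg Tesla figure comes out to 0.59 MJ/kg--the "about 0.6" used below.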

Hydrocarbon (aka "fossil fuels")

Carbon-based fuels of all kinds, whether fossil in origin or not, are highly efficient at storing energy.  The range runs from wood (not strictly a hydrocarbon, but a useful baseline) at about 16 MJ/kg all the way up to natural gas at 55 MJ/kg.  Standard gasoline is near the top at about 46 MJ/kg.

Nuclear

Nuclear fuels are in another league entirely in terms of raw energy density.  Uranium-235 has an energy density of 3,900,000 MJ/kg.

These values immediately prompt certain questions:

-How are electric cars able to be even a little bit competitive with gas-powered cars?

Gasoline is about 77 times as energy dense as Li-Ion batteries.  How is it that electric cars are able to compete with that kind of energy density?  The glib answer is, "they're not, really".  And I think it is true that electric cars are still only barely commercially viable, apart from the "green cred" that they offer.  It is still rarely the case that someone who needs to buy a car, on purely ordinary economic considerations, would rationally choose an all-electric car.  But electric cars have been getting better and better, and the choice is not quite so lop-sided as it once was.  Here are some reasons the energy density issue is not as fatal for electric cars, at least of the ordinary passenger variety, as the raw numbers suggest:

  1. The high energy density for gasoline is for *heat* energy.  This energy must be converted to motion by the internal combustion engine (ICE), and that process is constrained by thermodynamic efficiency limitations.  In particular, it is theoretically impossible for the ordinary ICE to get more than about 37% efficiency in converting the heat energy of exploding gas to motion.  This is a hard-and-fast limit imposed by physics (Carnot Theorem).  In *practice* the efficiency is lower still, so the actual energy extracted for useful motion is about 20% or so for the average car.

    In contrast, electric motors are among the most efficient devices we have for converting energy to motion; Teslas operate at about 90% energy efficiency.  When you factor in the difference in energy conversion, the advantage of gasoline drops to just 17 times the energy density of Tesla's Li-Ion batteries.
  2. There is an additional crucial difference in how much machinery is required to convert gasoline to motion.  Gasoline-powered cars must carry around a massive steel engine to convert gas to motion, a fairly bulky fuel injection system, and a bulky exhaust / noise muffling system.  Furthermore, because an ICE produces usable power only within a fairly narrow band of engine speeds, a massive steel transmission is also required to deliver power across the wide range of road speeds without stalling out.  All of these things together make up a large proportion of the total weight of the vehicle.

    In contrast, electric vehicles are mechanically much simpler than ICE vehicles.  Tesla's system, for example, has a total of 17 moving parts, compared with about 200 for a normal ICE.  Electric motors are also naturally very "torque-y", able to operate over a large range of speeds without the need for a transmission with complex gearing.  The motors are also tiny compared to an ICE, and there is no need for fuel injection or exhaust.

    Therefore, the batteries in an electric vehicle are not just replacing the gas tank; they are effectively also replacing most of the internal machinery that is under the hood of a car.  This considerably boosts the effective energy density of an electric vehicle compared to a gas-powered car.  On a typical gas-powered car, the combined weight of the engine and transmission is close to 10x the weight of a full gas tank.  This therefore drops the energy density advantage of an ICE car to merely 1.7 times that of a Li-Ion powered car.  (The full chain of arithmetic is collected in the sketch below.)

    This is why it is possible nowadays to have electric cars with ranges comparable to gas vehicles.  It's still a bit of a stretch, but it is possible.
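
Putting the arithmetic of the two points above in one place, here is a minimal Python sketch.  All figures are the rough ones quoted in this post:

    # Rough arithmetic behind the 77x -> 17x -> 1.7x chain above.
    GASOLINE_MJ_PER_KG = 46.0
    BATTERY_MJ_PER_KG = 0.6      # ~165 Wh/kg Li-Ion pack
    ICE_EFFICIENCY = 0.20        # heat -> motion, typical real-world figure
    EV_EFFICIENCY = 0.90         # battery -> motion
    ENGINE_WEIGHT_FACTOR = 10    # engine + transmission ~ 10x a full tank's weight

    raw = GASOLINE_MJ_PER_KG / BATTERY_MJ_PER_KG
    useful = (GASOLINE_MJ_PER_KG * ICE_EFFICIENCY) / (BATTERY_MJ_PER_KG * EV_EFFICIENCY)
    effective = useful / ENGINE_WEIGHT_FACTOR

    print(f"raw density advantage:       {raw:.0f}x")       # ~77x
    print(f"after conversion efficiency: {useful:.0f}x")    # ~17x
    print(f"after machinery weight:      {effective:.1f}x") # ~1.7x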

However, there is an important point to realize here!  The fact that electric cars are able to play in the same ballpark as gas-powered cars is very much due to the relative weight of the fuel tank versus the engine and transmission components.  This factor is specific to the size and design of the typical passenger car; therefore this relatively even playing field does not necessarily apply to other vehicle types.  Airplanes, in particular, are much more sensitive to the weight of the fuel they carry--hence battery-powered airplanes are nowhere on the horizon of being practical.  I know less about railroads, but given that the weight of the engine on a train is insignificant compared to the loads being hauled, I doubt that battery-powered trains are viable either.  (Trains that draw power from overhead electric lines are a different matter, since they carry no batteries.)  Likewise, I have a lot of doubt about battery-powered water freight.

Bottom line: complete electrification of transportation is probably not feasible in the near future, even if we're able to transition all personal passenger cars to electric.

-How are non-nuclear power plants able to compete with nuclear power plants, given the absurd energy density of Uranium?

The answer to this is more complex, and I will get into it later when I talk about the economics of nuclear power.  For now, the simple answer is that non-nuclear power plants cannot compete with nuclear *in terms of fuel cost* (well, aside from solar and wind, for which the "fuel" is free).  Uranium is very costly to mine, but in actuality this cost is only a negligible part of the total cost of operating a nuclear power plant.  It turns out that the majority of the cost of running a nuclear power plant is the interest that you pay on the loans to build the plant.  More on this later, though.



Key Points You Need to Know When Talking About Energy: Alternating Current and Grid Instability

In this installment, I'll discuss a major element of grid instability that is not commonly understood, which is frequency instability.  Frequency is a much more critical aspect of the electric grid than most people realize.  To understand why, you need to understand roughly how AC power is transmitted.

AC Power transmission: No electricity is produced

One aspect in which the standard "water analogy" of electricity fails is that by this analogy, one would expect that power plants pump electrons into transmission wires, which then flow down to substations, are split up and then pour into homes when needed.  In fact, however, AC power generation results in a net-zero motion of electrons.  The alternating current does move electrons back and forth a bit, but there is no round trip of electrons moving around a circuit.  This is what the "alternating" part of "Alternating Current" means: charge flows one way very briefly (1/120th of a second in North America), but then flows right back the other direction, cancelling out the previous flow of charge.  The total electrical charge delivered from all power stations to the grid over time is exactly zero.

If power stations don't deliver electricity to the grid, how do they deliver power?  The answer is that they deliver cyclical changes to the electrical potential of the grid components.  You can think of this as a constant back-and-forth motion in the electrical field that can be harnessed by any electrical tool that keys off of this cyclical motion.  Incandescent light bulbs, for example, run simply off of resistance, which is electrical friction.  Just as, when you are attempting to start a fire by rubbing two sticks together, the heat that is generated doesn't care what direction the sticks are going, so too you can hook an incandescent light bulb straight up to either an AC power source or a DC power source and it will work exactly the same way.  On the other hand, things like computers and cell phones rely on actual directional electricity flow, because they make use of electrical logic gates, which do very much care about the direction of energy flow.  This is why such devices need an AC adaptor--that bulky "wall wart" to which the charging cord for the phone or laptop attaches; it is the electrical equivalent of a "worm gear," converting cyclical energy into one-directional energy.
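
You can check the "zero net charge, nonzero energy" point numerically.  A minimal Python sketch, using an idealized 60 Hz sine wave into a purely resistive load (the voltage and resistance values are arbitrary):

    import math

    # One 60 Hz AC cycle into a resistor.  The net charge transferred
    # integrates to ~zero; the energy delivered does not.
    FREQ = 60.0
    V_PEAK, R = 170.0, 10.0   # ~120 V RMS supply, 10 ohm load (illustrative)
    steps = 100_000
    dt = (1 / FREQ) / steps

    charge = energy = 0.0
    for k in range(steps):
        t = k * dt
        v = V_PEAK * math.sin(2 * math.pi * FREQ * t)
        i = v / R              # current through the resistor
        charge += i * dt       # integral of current = net charge moved
        energy += v * i * dt   # integral of power = energy delivered

    print(f"net charge over one cycle: {charge:+.3e} coulombs (~0)")
    print(f"energy over one cycle:     {energy:.2f} joules (> 0)")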

My current favorite explanation of this phenomenon is this YouTube video by Veritasium: The Big Misconception About Electricity.  His analogy of AC power transmission as being a chain inside a tube being pushed back and forth is an excellent way of understanding things on a basic level (though he himself qualifies this analogy pretty severely in the above video.)  It's a cool video and I recommend you watch the whole thing, but for efficiency's sake I have linked to the precise moment in the video where he begins to talk about AC power transmission, and you only need to watch about 2 minutes of the video from that point.  (The details about wire transmission versus electrical field transmission are interesting from a physics perspective, but not really relevant to energy policy.)

AC Power transmission: No electricity is stored

To the above point, I will also add that there is currently almost no utility-scale storage in the power grid.  This is no longer *strictly* true, as there are now (as mentioned before) "utility scale" battery storage plants in the world which do store energy and release it to the grid.  However, all such storage plants combined have a negligible impact on the grid.  They currently amount to less than a rounding error in the total energy scheme of things.

One interesting consequence of this is that as you look at anything around you that is consuming electrical energy, you can know that (since electrical power travels over the grid at about the speed of light) the power that is lighting that lightbulb or driving that computer monitor was--less than a millisecond ago!--a scalding hot bit of steam pushing a steam turbine, or a photon hitting a solar cell, or a puff of wind pushing a wind turbine.  Electricity delivery from power plant to home happens quasi-immediately--that is, at the speed of light.

Frequency synchronization

So AC power transmission happens (basically) immediately and also cyclically; in North America, the back-and-forth of alternating current on the grid happens at 60 Hertz (cycles per second).  These two facts together mean that all power plants putting energy onto the grid must be synchronized.  Every input into the grid *must* be at a frequency matched with the grid frequency.  As the electrical fields are going back and forth on the grid, if some power source attempted to add energy into the system but was pushing while everyone else was pulling, it would instead *remove* energy from the system: it would cancel out instead of adding.
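
The pushing-while-everyone-else-pulls point is easy to see with two sine waves.  A minimal Python sketch--pure waveform math only; as the next paragraph explains, the reality on physical equipment is far more violent:

    import math

    # Two equal AC sources superposed: in phase they reinforce;
    # 180 degrees out of phase they cancel.
    def combined_peak(phase_shift_rad, samples=1000):
        return max(
            abs(math.sin(2 * math.pi * k / samples) +
                math.sin(2 * math.pi * k / samples + phase_shift_rad))
            for k in range(samples)
        )

    for degrees in (0, 90, 180):
        peak = combined_peak(math.radians(degrees))
        print(f"phase shift {degrees:3d} degrees -> combined peak {peak:.2f}")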

Actually, "cancelling out" is not a good description of what would happen in such a case, since the actual result would be much more violent.  If you have ever driven a stick shift, you have probably at least once accidentally put the vehicle into reverse when you meant to put it into a forward gear.  Remember the gear grinding?  That's getting closer to what would happen if a power plant tried to put electricity onto a grid with the wrong frequency.  Except, that's not a violent enough image for what would happen.  There are *massive* amounts of energy involved here.  Let me come up with a better analogy:

Suppose you were to take a lawn mower, turn it on, and then keep it running while holding it upside down.  Then suppose you took another lawn mower, turned it on, and set it on top of the first--imagine the wreck and metallic carnage that would ensue.

Then imagine doing that, but instead of using lawn mowers, use two of those massive turbines that are in power plants.

Two turbines connected to the same grid but operating at different frequencies would result in enormous destruction.  Massive, expensive pieces of machinery connected to one or the other or both would tear themselves apart due to the conflicting magnetic forces that situation would create.  Explosions, sparks flying, massive drive shafts shearing apart . . . well, it would be bad, let's say that much.

This is why all utility-scale equipment has trip safeties built in.  We are familiar with *current* circuit breakers, which break the circuit in case too much electricity is flowing because of overload or a short circuit.  Utility equipment, on the other hand, also has *frequency* circuit breakers.  In the event that a generator detects that it is producing electricity at a frequency too far off the grid frequency, the equipment will automatically trip, removing itself from the grid.
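
The logic of such a frequency trip is, at its core, very simple.  A minimal Python sketch--real protective relays are far more sophisticated, and the 0.5 Hz threshold here is invented for illustration:

    # Illustrative under/over-frequency trip logic.  The trip band is
    # an invented figure, not taken from any real relay specification.
    NOMINAL_HZ = 60.0
    TRIP_BAND_HZ = 0.5

    def check_frequency(measured_hz):
        if abs(measured_hz - NOMINAL_HZ) > TRIP_BAND_HZ:
            return "TRIP: disconnect generator from grid"
        return "ok"

    for f in (60.0, 59.7, 59.4, 60.6):
        print(f"{f:5.1f} Hz -> {check_frequency(f)}")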

The relationship between frequency and power

There are a number of issues that can affect the frequency of a power plant, the most usual one being excessive load.  This is something bicyclists will be familiar with, actually.  If you are cycling along a flat road, pedaling at a specific rate, and then hit an uphill slope, you will find it very difficult to maintain the same speed.  The rate at which you push the pedals around will naturally slow down.  The same thing happens to power plants when they experience higher than normal load: when power output is too low to meet demand, frequency goes down.

While this is good from the standpoint of equipment safety, it does create an inherent danger in the grid of cascading failures.  Suppose, as was the case last year in Texas, that you have a systemic problem affecting power output in many power plants across the grid.  Supply cannot quite keep up with demand.  Consequently, some power plants start to have trouble keeping up with the required grid frequency.  Once some of these plants start removing themselves from the grid due to frequency issues, however, this puts even more burden on the remaining plants.  What can happen, then, is a cascading failure where, all of a sudden, all power plants across the grid trip, one after another.
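
A toy simulation makes the cascade mechanism vivid.  The plants, capacities, demand figure, and trip rule below are all invented for illustration:

    # Toy cascading-failure model: each online plant carries an equal share
    # of total demand; any plant pushed past its capacity trips offline,
    # shifting its share onto the remaining plants.
    capacities = [100, 100, 90, 80, 70]  # megawatts, per plant
    demand = 400                         # total megawatts

    online = list(capacities)
    while online:
        share = demand / len(online)
        survivors = [c for c in online if c >= share]
        if len(survivors) == len(online):
            print(f"stable: {len(online)} plants carrying {share:.0f} MW each")
            break
        print(f"{len(online) - len(survivors)} plant(s) tripped; {len(survivors)} remain")
        online = survivors
    else:
        print("total blackout: every plant has tripped")

With demand at 400 MW, the whole fleet collapses plant by plant; drop demand to 300 MW and the same fleet settles into a stable state--which is exactly what "shedding load" accomplishes.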

Most people are unaware of how close Texas came to this total failure last year.  (Here's a decent video talking about this: What Really Happened During the Texas Power Grid Outage?)  Texas was 4 minutes and 37 seconds away from triggering this sort of cascading failure.  If power managers had not managed demand by "shedding load" (i.e., turning off power to large segments of the grid), the entire grid in Texas would have gone black.

And "going black" is worse than you probably realize, again because of synchronization.  Turning power plants back on and getting a grid back up to fully operational is a massive task, because every power plant must be carefully brought back online *in sychronization* with every other power plant.  Such a "cold black start" in Texas has never happened, but is projected to take weeks or months to finish, during which time almost the entire State of Texas would have been without electricity.

Implications for the energy debates

Let's draw a few implications for energy policy from the above discussion:

  1. Variable power sources have inherent grid stability problems.  Because of the relationship between power balance and frequency, power sources which dramatically change their power output are inherently tricky to manage on the grid.  It's not just a question of whether you have the total raw power at any moment to keep up with your demand; you have to do this *and* at the same time keep all of your separate power sources properly magnetically coupled, lest everything come crashing down.  The greater the extent your grid relies on such fluctuating power sources, the more of a challenge this becomes.
  2. Some people have criticized Texas for not connecting to the wider grid, evidently thinking that the more power plants are linked to a system, the more secure the system will be.  That is not necessarily the case.  More power plants magnetically coupled does mean more total power, but it *also* means more plants that must be precisely aligned with each other.  It's been shown that more interconnections (beyond a certain limit) will tend to *de*-stabilize the grid, not increase reliability.

    This is why for some of the largest interconnections between grid areas, you will see high-voltage DC powerlines connecting grid to grid.  You convert from AC to DC at one end, pipe the electricity to the other end, and then convert back to AC.  This provides a power pipeline from one grid to the other without entangling the two systems with the same frequency requirements.

    However, such high-voltage DC power lines are a modern, specialized, high-tech, and extremely expensive solution.  Most such power lines are rather short and limited to high-density areas, because they are so expensive to create.  The most obvious consequence of this is that any infrastructure bill that doles out money to individual regions and tells them "improve your grid infrastructure" is not going to result in such power lines.  These things are always created as specific projects in order to connect grid-to-grid: no single region is going to be able to justify high voltage DC power lines *for the purpose of that region alone*.
  3. Frequency instability is a major reason why the supposed "Smart Grid" is still very much a pie-in-the-sky idea.  The idea that a power grid can be made stable and usable even with widely varying power inputs by having all of the nodes in the grid be intelligently switchable is nice--but completely beyond any existing grid right now.  The extent of "Smart Grid" technology available right now is really all about dynamic load shedding--smart meters that turn down your air conditioning on a hot summer day because the grid load is getting too high.  The ability of a Smart Grid to dynamically handle variable *power plant* output is only possible via the addition of massive, currently non-existent machinery: intelligently connected synchronizing relays, or something to that effect.

    Again, a key takeaway here is that no amount of *normal* funding from an infrastructure bill which is aimed at "repairing our crumbling infrastructure" (or whatever the rhetoric is) will allow for this sort of transformation of the power grid into something that can handle huge amounts of variable power supply.  What would be needed for such a thing is a radical transformation, not just a repair.