Saturday, March 12, 2022

Key Points You Need to Know When Talking About Energy: Half-Life, Linear No-Threshold

In this installment, I want to discuss some aspects of nuclear pollution that are widely misunderstood.  There are two topics in particular that you need to understand in order to properly evaluate the severity of any given release of radioactive material.


Half-life

Definition

As I said earlier, radioactive decay happens at the atomic level because certain configurations of nuclei are more unstable than others and naturally break apart over time.  This is a random process for any individual atom, but it is governed by a strong law of averages, so that once you get up to the macro level of material that makes any impact on us, the rate of decay is a well-known constant that depends on the material.

Half-life is the time after which any given atom of a certain isotope is 50% likely to have undergone radioactive decay.  It's called half-life because if you start with a certain amount of a radioactive isotope, after this much time half of it will be left, the other half having decayed into other elements.

Because the radiation emitted by a particular isotope is caused by this same decay, the half-life of a radioactive material is also the half-life of the radiation it emits.  A radioactive material will therefore become less radioactive over time as it decays.  The formula for figuring out how much less radiation will be emitted by an isotope after a certain amount of time is fairly simple.  First, calculate how many half-life intervals will have passed in that time, then raise the fraction 1/2 to that power.  For example, if some material has a half-life of 1 year and 3 years have elapsed, then the total radiation emitted by that material will be (1/2)^3, or 1/8th, of the initial radiation.
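Here is that calculation as a tiny Python sketch (the numbers are just the 1-year/3-year example from above):

# Fraction of the initial radiation still being emitted after `elapsed` time,
# given the isotope's half-life (both in the same units).
def fraction_remaining(elapsed, half_life):
    return 0.5 ** (elapsed / half_life)

print(fraction_remaining(3, 1))   # 0.125, i.e. 1/8th of the initial radiation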

A practical example

In more practical terms, you might want to know how long it will be before the radiation from a certain material falls to safe levels.  I'll use the Greek letter lambda, λ, as the symbol for the half-life (physicists usually reserve λ for the related decay constant, but the formulas below work with the half-life directly).  If T is the total time elapsed, then the formula would be:

Safe Level = Initial Level * (0.5)^(T/λ)

Solving for the total time yields this formula:

Safe Level / Initial Level = (0.5)^(T/λ)

Log0.5(Safe Level / Initial Level) = T/λ

T = (Log0.5(Safe Level / Initial level)) * λ
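If you prefer code to algebra, here is the same rearrangement as a small Python sketch (math.log takes an optional base argument, which gives us the base-0.5 logarithm directly):

import math

# Time needed for radiation to fall from initial_level to safe_level,
# given the isotope's half-life.  The result is in the same units as half_life.
def time_to_reach(safe_level, initial_level, half_life):
    return math.log(safe_level / initial_level, 0.5) * half_life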

The safety implication of this is that the length of time a particular isotope remains problematic depends directly on its half-life--the relationship is directly proportional.  And the half-life of the different materials emitted by a nuclear incident varies *incredibly*.  Let's illustrate this with some real-world examples.

The Three Mile Island incident emitted most of its radioactive material in the form of radioactive Xenon, to the tune of something like 14 μSv (that's "microsieverts") of average dose over a large area (estimated to affect about 2 million people).  The average daily dose of radiation an ordinary person gets just from regular background radiation is about 8.5 μSv, so this was a higher level of radiation than normal, but only slightly.  For how long, though, were those people exposed to higher levels of radiation?

Let's assume for the sake of argument that the 14 μSv figure was the daily exposure (it wasn't, by the way, but let's go with it for now).  Let's say we wanted to know how long it would take that 14 μSv to drop down to 0.1 μSv--at that point the increase would be insignificant compared to average daily background radiation.  Radioactive Xenon has a half-life of about 12 days, so plugging this into the formula, we get:

T = Log0.5(0.1 / 14) * 12

T = 85.5

This means that in 85.5 days, the radiation levels from Xenon released by the accident would be below a level that would cause us concern.

At Chernobyl, on the other hand, Caesium-137 was released in great quantities.  Caesium-137 has a half-life of 30 years.  If Three Mile Island had released the same amount of radiation, but in the form of Caesium-137 instead, the time needed for the same decrease in radiation levels would have been about 214 years, not 85 days--so, more than two centuries instead of two and a half months.
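Plugging both cases into the time_to_reach sketch from above (using the same illustrative 14 μSv and 0.1 μSv figures) gives the two numbers quoted here:

print(time_to_reach(0.1, 14, 12))   # ~85.5  (days, with Xenon's 12-day half-life)
print(time_to_reach(0.1, 14, 30))   # ~213.9 (years, with Caesium-137's 30-year half-life)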

Reminder about material types

This would be a good time to recall that radiation impinging upon the body from outside doesn't do anywhere near the harm that radiation emitted by materials absorbed into your body does.  Therefore the level of danger presented by a particular radioactive material is determined by the amount of radiation it emits, its half-life, and how and to what extent that material is absorbed by the body.

As a practical example, at Chernobyl, three elements of particular concern were released:
  • Iodine-131.  It has a short half-life (only 8 days), but it collects in the thyroid when absorbed by the body and is not easily removed.  It can do permanent damage to non-regenerating tissue of the thyroid gland.
  • Strontium-90 has a long half-life (29 years), and can lead to leukemia in high doses.
  • Caesium-137 has a half-life of 30 years, and can harm the liver and spleen.
And to apply the half-life principles again: it's been 36 years since the accident, meaning that the amount of Iodine-131 emitted by the accident has halved 1644 times in the interim.  This means it has completely vanished--1.27 x 10^-495 is the actual multiplier, which is close enough to zero as makes no difference, since this number would imply that far less than a single atom of Iodine-131 is still left. 

The levels of Strontium and Caesium radiation, on the other hand, will have decreased only to about half of what they were on the day of the accident: a welcome decrease, but not nearly enough so that we can stop worrying about it.
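You can check those numbers with the fraction_remaining sketch from earlier (Iodine-131's half-life given in days, the other two in years):

print(fraction_remaining(36 * 365.25, 8))   # Iodine-131: prints 0.0 -- the true value (~1e-495) is too small for a float to represent
print(fraction_remaining(36, 29))           # Strontium-90: ~0.42
print(fraction_remaining(36, 30))           # Caesium-137: ~0.44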

The Linear No-Threshold Model (LNT)

The problem of evaluating "widely dispersed but thinly spread" harm.

Given that some radioactive waste has a long half-life, we have to be concerned that it can spread around the world and affect a large number of people before it decays to safe levels.  For example, a lot of people were concerned about the amount of Caesium-137 that was released into the open ocean from the Fukushima incident.

As material disperses, however, it also thins out considerably.  So we have to be concerned about many people getting tiny amounts of exposure to radioactive material.  How big of a problem is this?

The honest scientific answer is, surprisingly, that we don't really know.  This is one of those areas where laymen are often surprised at the lack of definitive answers from the scientific community, because the question seems simple and obvious, and it doesn't seem as if the answer should be too difficult to find out.  But this is simply not the case.

Why don't we know?


This lack of knowledge is perhaps not so surprising if you think through the practicalities of how science works.  Scientific knowledge is primarily advanced through experimentation.  If you have a question you want answered, you design an experiment that replicates the conditions you want answers about, plus a control that is just like your experiment but without those conditions, all with the goal of comparing the results against the control and seeing what the test conditions did.

This fundamentally makes coming to a scientific understanding of how harmful certain things are to humans very difficult to do, because you are not ethically allowed to perform an experiment in which you expect any harm to come to your test subjects.

You can try to do experiments on lab animals, but it is never a safe bet to extrapolate how things affect a lab rat onto how those same things will affect humans.  Experiments on lab animals can only be a preliminary for experiments on humans--this we know from long experience.  The saying among experimental health science folks is that mice lie and monkeys exaggerate.

So what do scientists do, when trying to ascertain the health risks of low-dose radiation?  This is where the term "Linear No-Threshold" (abbreviated LNT) comes in, and it's controversial.

How we fudge an answer anyway.


What has been done is to take known instances of radiation exposure and put them on a graph, with the amount of exposure on the horizontal axis and the imputed harm (in terms of likelihood of death) on the vertical axis.  Since scientists can't ethically create these circumstances, we have to rely on outcomes of known nuclear accidents for the data.  This is a pretty limited set of events; you can get a pretty complete summary of them all here: Nuclear and Radiation Accidents and Incidents.

Then, we plot the outcomes of these events for the people exposed, based on how much radiation they got.  Deaths and severe illness have to be normalized against each other in some fashion: for example, you estimate how many years were taken off the likely lifespan of someone who got cancer and died some years after exposure, and then convert that into some fraction of a death.  This is not an exact science!  There are several places where you need to insert some common-sense rules of thumb.

Then, after you have put all these instances of known exposure levels and outcomes on a graph, you draw the best straight line through the data points, anchored at the "zero exposure, zero danger" point at the origin.

The straight-line nature of the projection is why this model for radiation harm is called "Linear", and the fact that the projected harm only goes to zero when radiation goes to zero is the "No-Threshold" part.
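To make the mechanics concrete, here is a minimal Python sketch of such a fit.  The dose/harm pairs are entirely made up for illustration; the only real content is the formula for a least-squares line constrained to pass through the origin (slope = sum of dose*harm divided by sum of dose squared):

# Hypothetical (dose, harm) data points -- illustrative only, not real accident data.
doses = [0.1, 0.5, 1.0, 2.0, 4.0]        # dose in sieverts
harms = [0.004, 0.03, 0.05, 0.11, 0.22]  # imputed deaths per person exposed

# Best-fit straight line forced through the origin: harm = slope * dose.
slope = sum(d * h for d, h in zip(doses, harms)) / sum(d * d for d in doses)

# The LNT extrapolation: even a vanishingly small dose maps to a non-zero projected harm.
print(slope * 0.001)   # projected harm for a 1 mSv dose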

This model is quite controversial, and in fact almost certainly wrong.  I don't think anyone serious believes that it accurately conveys the actual amount of harm that low levels of radiation cause.  From what I've heard, even people who champion this way of guessing at the harm caused by low levels of excess radiation invoke the "precautionary principle" to do so--meaning that when the harm from some situation is unknown, they think we should assume the maximum possible harm.

There are multiple academic papers out there arguing against the LNT model, one of which I will link to here: It Is Time to Move Beyond the Linear No-Threshold Theory for Low-Dose Radiation Protection.  I am going to add on some of my own reasoning against the LNT model here:

  1. Nothing in nature that we know of acts in this way.  For every dangerous material that we know of, there is always *some* threshold at which it becomes harmless.  Arsenic, for example, is a very deadly poison.  It is also present in every single glass of water you drink, without exception--in trace amounts. The saying in the medical world is, "the dose makes the poison".  A low enough dosage doesn't mean "just a little bit poisonous", it means "not poisonous at all".

    Indeed, there are all sorts of things which, if graphed on such a chart, would be roughly U-shaped.  Vitamin D, for example, is poisonous at high doses--high enough, and it can kill you pretty quickly.  But at moderate levels it is actually beneficial for the human body, meaning that on a chart such as the one above it would curve below zero on the "harm" scale.  Then, if Vitamin D levels get too low, the fact that you are missing out on it curves the "harm" back up into the positive range.

    U-shaped curves are much more common in nature when it comes to the right amount of something to have.  Consequently, for very low levels of radiation, it is more likely that the harm produced is either literally zero or else actually negative.

  2. The LNT model is abused by people who want to exaggerate the impact of nuclear accidents.  I have seen this done with Chernobyl, Three Mile Island, and Fukushima.  If you oppose nuclear energy and want to exaggerate the negative impacts of these accidents, you can take advantage of the fact that modern radiation detection is incredibly sensitive.  We can detect trace radiation from even vanishingly small quantities of matter, down to individual atoms.

    Consequently, it is a certainty that some amount of technically detectable radioactive material from at least Chernobyl and Fukushima (I'm not sure about Three Mile Island, given the shorter half-lives of the materials it released) has gotten into every human on the planet.  The wind constantly blows and the seas constantly move, so eventually these things find their way literally everywhere on the planet.

    What some people have done, therefore, is take that extremely low level of radiation from the worldwide dispersion of these events and look up the projected harm on the LNT graph of radiation effects.  This will be a very low number, but they will then multiply it by 6 billion people in order to get a total death toll from Chernobyl or Fukushima, as the sketch below illustrates.  In both cases, if you do this, you end up with a number significantly larger than the official death toll of either event.  These numbers are not justified; they wildly overstate the probable impact.
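Here is that (flawed) arithmetic spelled out as a Python sketch, with made-up numbers purely for illustration:

# Made-up numbers, purely to illustrate the flawed "collective dose" arithmetic.
tiny_dose_per_person = 0.000001       # Sv -- a hypothetical trace dose from worldwide dispersion
lnt_risk_per_sievert = 0.05           # deaths per person per sievert, a hypothetical slope read off an LNT-style line
world_population     = 6_000_000_000  # the round figure used above

projected_deaths = tiny_dose_per_person * lnt_risk_per_sievert * world_population
print(projected_deaths)   # 300.0 "deaths", conjured from doses far too small to measure any health effect from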

Conclusion

Not all nuclear accidents are the same.  If you want to understand the severity of a nuclear event, you need to know more details than just "there was a meltdown" or "nuclear materials were released".  You need to know what materials were released, and you need to know in what way they were dispersed.  You also need to be aware that the severity of total harm to humanity from some of these terrible accidents has been greatly exaggerated, and that the Linear No-Threshold model is largely to blame for that. 




