Related: Trade-Off Denial
https://www.mattball.org/2023/07/trade-off-denial.html
The LNT is not inconsistent with the data once you eliminate all data showing that the dose-response relationship is in fact supralinear. https://danielcorcos.substack.com/p/bca
Daniel,
Seems like I'm not the only one around here who is being repetitious. I would only point out that, if the acute dose-response curve is sub-linear, the shorter the repair period the better. If it is linear, the repair period is irrelevant. If it is supra-linear, the shorter the repair period, the more harm to the organism. But I guess that's OK if like Gofman, you deny the existence of radiation damage repair.
The fundamental point here is we must stop this myopic focus on single shot, acute doses, and look at the dose rate profile through time. When you combine this viewpoint with repair neither linear nor supra-linear make any sense.
I do not deny the existence of damage repair. However, the dose-response curve only starts to rise once the repair mechanisms have been overwhelmed, which means at radiation levels where DNA damage becomes irreversible. This implies a threshold, but at a much lower level than that of radiological examinations. The dose-response curve should not be the result of imagination but of observations, with a number of cancers that has been MEASURED for each dose. This is what Gofman attempted to do. It is also what I did for mammograms, where I found a rate of 4 cancers per 1000 screenings (corresponding to 2 + 2 mGy).
What is important here is the dose rate. At low dose rate (natural background), DNA damage is repaired.
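The threshold picture being described here can be written down as a simple piecewise function; note that the threshold and slope values below are arbitrary placeholders for illustration, not figures from any study:

```python
def threshold_response(dose_mgy, threshold_mgy=100.0, slope=1e-4):
    """Excess risk under a threshold model: zero while repair keeps up,
    rising only once the dose exceeds repair capacity.
    threshold_mgy and slope are ILLUSTRATIVE placeholders only."""
    return max(0.0, (dose_mgy - threshold_mgy) * slope)

# Below the (hypothetical) threshold the model predicts no excess risk;
# above it, risk grows with the dose that repair could not handle.
below = threshold_response(50.0)    # 0.0
above = threshold_response(500.0)   # (500 - 100) * 1e-4
```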
“ It is also what I did for mammographies, where I found a rate of 4 cancers per 1000 screenings” how were you able to determine that the cancers were a result of the screening?
First, there is a universal association between mammography screening and excess breast cancers (see for instance here https://jamanetwork.com/journals/jamainternalmedicine/fullarticle/2363025).
Second, I was able to show that most of the excess occurs 8 to 10 years after the mammograms ( https://www.biorxiv.org/content/10.1101/238527v1.full ) and from this to calculate the number of cancers per mammogram. We confirmed the results in all the countries that we studied (for instance, Corcos & Bleyer, NEJM, 2020).
A summary of the results here: https://youtu.be/f9lG96pydZ4
https://twitter.com/daniel_corcos/status/1671062209143644162
I pay little attention to studies that try to dissect the impact of tiny doses. The confounding factors (screening bias, selection bias, all sorts of environmental factors) overwhelm the signal. The result is he said/she said: Study A says radiation is bad, Study B says it isn't.
Table 1 focuses on people who have received hundreds of mSv and up. It is at Table 1 doses and dose rates that we can clearly see the impact of radiation.
The problem is that what we are talking about is the impact of tiny doses. And read the studies, see the presentations: there is no bias. These are before/after studies at the population level all showing the same results: an excess of cancers occurring 10 years after screening is implemented in various places and at various times. And that's exactly what I was saying, it's only if you discard all the information showing that the dose-response is supralinear that you can conclude otherwise.
Ok, I read a bit of the first study. The summary is well supported by the rest of the data. "Conclusions and Relevance: When analyzed at the county level, the clearest result of mammography screening is the diagnosis of additional small cancers. Furthermore, there is no concomitant decline in the detection of larger cancers, which might explain the absence of any significant difference in the overall rate of death from the disease. Together, these findings suggest widespread overdiagnosis."
So this does not show excess cancers from the radiological effects of mammography. It shows that existing small cancers are detected more often. Mortality rates remain the same.
Your article is working to prove your hypothesis that X-ray screening causes excess breast cancers rather than overdiagnosis. In your Figure 3 I see that the number of BC for the 65 to 69 age group rose from 300 to 375 after screening was implemented. But in the period before screening was implemented, it rose from 250 to 300. The trend curve of the increase had already started rising before screening. It seems there is some other environmental effect that corresponds to the same time period as the screening but starts a few years before the screening. The increase in the number of cancers in your chart is much higher than the number of cancers you claim are caused by screening: 75 vs 4. I agree with Jack's assessment below. The confounding factors overwhelm the signal.
What you need to see is Figure 2. It shows that there was a >40% increase in breast cancer incidence in the 60-64 age group with the same screening intensity between 1996 and 2005. The fact that the incidence of breast cancer began to increase in the 65-69 age group before the introduction of screening after age 65 is due to the fact that these women had mammograms between 50 and 64 years old.
Interesting. Does low-threshold supra-linear response theory predict any cancers from living in the Fukushima exclusion zone?
I understand a study found virtually all of the evacuation at the time to be net harmful, which I assume was based on LNT.
It is difficult to predict the effect of chronic exposure to a radioactive molecule from data on acute X-ray exposure.
A full mammogram is a dose of about 0.5 mSv. I think Daniel is claiming an excess risk of 4.0e-3 from an acute dose of 0.5 mSv. The LNT excess risk from 0.5 mSv is 2.4e-5. SNT says 4.7e-8 from 0.5 mSv acute, although the SNT excess risk depends on the background dose: the higher the base dose, the higher the excess risk. Supra-linear excess risk also depends on the base dose: the higher the base dose, the lower the excess risk.
Daniel has conceded radiation repair, but he has not told us his repair period nor his dose-response curve. For the sake of argument, let's assume he accepts a repair period of a day and that his curve goes as the square root of the acute dose. (Daniel, feel free to fill in your own numbers.) Then, to match his 0.5 mSv risk number, the model is risk = 0.00566 * d**0.5.
The Japanese are trying to remediate Fukushima to reduce the additional dose everywhere to 1 mSv/y, or 0.00274 mSv/d. So according to my square-root model, the excess risk is 0.000296 per day, or 0.108 per year. The area around Fukushima is extremely dangerous according to this model. Unfortunately, everywhere else is too. It looks like we are all doomed.
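The arithmetic above can be checked in a few lines; the square-root curve and the one-day repair period are, as stated, assumptions made purely for the sake of argument:

```python
import math

def sqrt_model_risk(dose_msv, k):
    """Excess risk under the assumed square-root model: risk = k * sqrt(d)."""
    return k * math.sqrt(dose_msv)

# Calibrate k so a 0.5 mSv mammogram gives the claimed 4.0e-3 excess risk:
k = 4.0e-3 / math.sqrt(0.5)          # ~0.00566

# Fukushima remediation target: 1 mSv/y extra, i.e. ~0.00274 mSv per day.
daily_dose_msv = 1.0 / 365.0
daily_risk = sqrt_model_risk(daily_dose_msv, k)   # ~0.000296 per day
annual_risk = daily_risk * 365                    # ~0.108 per year
```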
You cannot use the Sv to do a dose-response curve. The dose should be in Gy. Using Sv would mean that you already know the response. The effect depends on many things, including the type of tissue, and should be measured rather than modeled.
Daniel,
This is a silly deflection, but nice try.
We have a great deal of data to tell us how to convert Gy to Sv depending on the particle and its energy. In both the mammogram and Fukushima cases, the conversion is rather simple: the factor is 1.0. So feel free to substitute Gy for Sv in the above comment.
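For reference, the Gy-to-Sv step being argued about is just multiplication by a radiation weighting factor; the wR values below are the published ICRP Publication 103 figures, and wR = 1 for photons is why the factor is 1.0 for X-rays and gamma:

```python
# Equivalent dose (Sv) = radiation weighting factor wR x absorbed dose (Gy).
# wR values from ICRP Publication 103.
ICRP_WR = {
    "photons": 1.0,    # X-rays and gamma: 1 Gy -> 1 Sv
    "electrons": 1.0,
    "protons": 2.0,
    "alpha": 20.0,
}

def equivalent_dose_sv(absorbed_gy, particle="photons"):
    return ICRP_WR[particle] * absorbed_gy

# A 0.5 mGy mammogram X-ray dose is 0.5 mSv; the same absorbed dose
# from alpha particles would count as 10 mSv equivalent.
```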
But what you are really saying is your data and your (unspecified) model cannot be used in nuclear power plant release scenarios, which is what this substack is about. Not sure what you are adding to this conversation.
You have a great deal of data telling you the risk of cancer from 1 mGy for each organ? So, show me that data.
"The sievert is a unit intended to represent the stochastic health risk of ionizing radiation, which is defined as the probability of causing radiation-induced cancer and genetic damage."
I think you are telling me I can't use the ICRP RBE recommendations or the ICRP organ weighting factors. So I can't model the harm associated with a given release. I have no way of coming up with reasonable radiation regulation. The counsel of despair.
In fact, we have an immense amount of data on populations that have received large doses at both high and low dose rates. Much of this is summarized in Table 1. We can come up with a model that matches the response, and it certainly is not LNT, nor a fortiori is it supra-linear. End of story.
Bingo, Jack. The 21st century seems to be a kind of Twilight Zone in this regard.
Especially re: energy/environmental/economic policy.....
Nothing like an unfalsifiable argument that changes over time. That is what passes for “science” now in almost every field.
The NRC already has the answer, and their jobs depend on that answer. Therefore they can point to any tiny data gap in your model even though theirs is wholly inconsistent from the start.
Thank you for this excellent piece. I have one question. You don’t mention the possibility of health benefits from radiation hormesis which seem to be backed up by a significant amount of evidence. I was wondering why: (1) You don’t believe in it, (2) you don’t have an opinion, (3) you believe in it but would rather not go into the debate for tactical reasons, or (4) others ?
Hyper,
(3) Hormesis is real. We can see it clearly when a quick priming dose is followed by a much larger challenge dose. But does it apply to a nuclear power plant release, which is usually a step jump in dose rate followed by a slow decline? I have never seen any compelling evidence that it does. And if you can't sell me, you are not going to sell the public. A-release-is-good-for-you is a tough argument to make. And it's unnecessary. All we need is a dose-response curve whose slope goes to zero at zero dose. In my view, bringing hormesis into the discussion is counter-productive.
The way to go after LNT is to focus on its nonsensical claim that exposure period is irrelevant.
Do not make it a debate about hormesis.
Jack is right that you don't need hormesis to refute LNT, but once it's measured, LNT becomes even more absurd.
Concerning hormesis, and the denial of it that underpins LNT, watch the interview with Edward Calabrese, a PhD toxicologist, at
http://hps.org/hpspublications/historylnt/episodeguide.html
Hormesis was bleeding obvious in the decades-long Life Span Study of 200,000 hibakusha, survivors of the Hiroshima and Nagasaki blasts, and in more than 3 million mice in the six-decade "Mouse House" studies at Oak Ridge National Laboratory.
Mike Conley and Tim Maloney will soon release a book entitled "The LNT Story" that goes into even more detail.