
Thyroid cancer around Fukushima


Enthalpy


Hello everybody and everyone...

 

A December 2014 report on thyroid cancer found 86 cases among 298,577 young people under 18 in Fukushima prefecture.

 

Lacking a reference report for Japanese people, I compared it with a French study (pages 31-32):

http://www.invs.sante.fr/fr/content/download/7760/52112/version/1/file/bilan_cancer_thyroide.pdf

which found 224 cases among 11 million French children under 14. Extrapolated proportionally up to age 18, that gives 26ppm, or 8 expected cases among 298,577 people.

 

That discrepancy would be very significant, but

  • The diagnostic criteria for cancer may differ
  • The populations differ genetically and culturally

If I compare instead with a June 2013 study in Fukushima prefecture, it found 12 cases among 174,000 young people.

 

This seems a very significant increase. Even taking the previous proportion 3 sigma above 12/174,000, i.e. 160ppm, which extrapolates to 48 expected cases in the bigger December 2014 population, the 86 observed cases are 5.5 sigma over. Ouch.
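A quick numerical check of this reasoning, using only the figures quoted in this thread. This is a sketch under a normal approximation to Poisson counts; the intermediate values come out somewhat lower than the 160ppm and 5.5 sigma above (the exact numbers depend on where the 3-sigma cutoff is placed), but the qualitative conclusion is the same:

```python
from math import sqrt

# Figures quoted in this thread
n_2013, pop_2013 = 12, 174_000    # June 2013 screening
n_2014, pop_2014 = 86, 298_577    # December 2014 screening

# 3-sigma upper bound on the June 2013 rate (normal approximation to Poisson)
rate_hi = (n_2013 + 3 * sqrt(n_2013)) / pop_2013
expected = rate_hi * pop_2014     # cases expected at that upper-bound rate
z = (n_2014 - expected) / sqrt(expected)

print(f"upper-bound rate: {rate_hi * 1e6:.0f} ppm")   # ~129 ppm
print(f"expected cases:   {expected:.0f}")            # ~38
print(f"excess: {z:.1f} sigma")                       # ~7.7 sigma
```

Either way the excess comes out at several sigma; the approximation only shifts where between 5 and 8 it lands.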

 

Unless criteria have changed in the meantime. But it's the same population, the same context.

 

I fear the early cases detected were indeed the beginning of a big peak. The argument by Japanese officials against radioactivity as the cause was that such effects take longer to appear - but by that argument, the worst is still coming.

 

The good side is that thyroid cancer is usually cured well. However, it is also an indicator for other cancers that are harder to attribute to radioactivity.

 

Finding the data on the Internet is difficult, as the topic arouses passion on both sides: a follow-up report came only two months after December 2014, just to conclude that the increase over two months wasn't significant - you guessed it.


Changes over short time frames are not usually very helpful, especially for diseases that take time to establish. One would need a much longer time frame to establish any connection. I believe screenings were offered starting around 2011, and one would have to model the values until now to see any aberrations from the norm. Two-point comparisons easily lead to false positives (to put it carefully).

Another aspect is that it does not seem clear whether the same sample population was used (i.e. whether the studies were independent). If these were follow-up analyses, it is obvious that the number will increase over time.

A final issue is that there is no good reference value to look at, as the high-resolution screening was only done after the incident and only in that prefecture. Obviously the detection rate would be higher than average population estimates (especially as thyroid cancer can be quite asymptomatic). Ideally, a larger data set obtained from a larger population would be needed to establish whether the values in Fukushima really deviate from the norm. I.e. you would need to establish confidence intervals for the norm to identify outlying trends.

As it stands, one would need to find more data to actually figure out what is going on (or conversely, the presented data in OP does not allow any conclusions).

 

The only study I actually found is a 30-month evaluation, but I would need a bit more time to read the methodology in detail. The approach is different, though: instead of concentrating on the rare event (i.e. cancer), they looked at the correlation between physiological factors, such as thyroid volumes and related blood values, and iodine-131 deposition at the residence (doi: 10.1371/journal.pone.0113804).

 

Looking at small effects in a population is actually extremely tricky, as there are always a lot of confounding factors. And unfortunately, the honest answer is that we need more data (and possibly a longer time frame) and a decent study design to find answers.

Edited by CharonY

Thanks for your input!

 

Changes over very few months, as the update to the latest study attempted to measure, do indeed tend to be inconclusive. They found 1 more definite case and 8 probable ones over exactly the same sample; limiting to the certain case, the variation has little statistical significance - as expected.

 

However, comparing the Dec 2014 study with the June 2013 one, the increase is extremely significant. Since both samples are limited to 18 years of age, the number of cancers in an equilibrium situation would not need to increase. Yet the observed increase is big (69ppm to 288ppm, from an extrapolated 21 expected cases to 86 observed) and statistically strong.
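The quoted rates can be verified with a minimal sketch, assuming (as this post does, and as is disputed elsewhere in the thread) that the two screenings are independent Poisson samples:

```python
from math import sqrt

rate_2013 = 12 / 174_000        # ~69 ppm
rate_2014 = 86 / 298_577        # ~288 ppm
expected = rate_2013 * 298_577  # cases expected in 2014 at the 2013 rate
z = (86 - expected) / sqrt(expected)

print(f"{rate_2013 * 1e6:.0f} ppm -> {rate_2014 * 1e6:.0f} ppm")  # 69 -> 288
print(f"expected {expected:.0f}, excess {z:.1f} sigma")           # 21, 14.4
```

The z value treats 86 as a Poisson count with mean 21; any resampling overlap between the two screenings would invalidate it.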

 

Small effects are difficult to observe, sure. For instance, the cancer increase in Corsica after Chernobyl was just a few cases more, which bad luck suffices to explain. But here we have a number of cancers (86 cases!) big enough to make the measure accurate.

 

There is no reference for the young Japanese population before the accident, or at least this is what I read. I wouldn't like to extrapolate the French study (predicting 8 cases instead of 86 observed), which may have completely different cancer criteria. That's why I prefer to compare the two vast studies made after the accident.


The likelihood of detection over a time series depends a lot on the properties of the group (even without external input). I would need precise data on that, but assuming that the same people are screened over time, the numbers are expected to increase, as the likelihood of an incident is cumulative for each person. If you want to assume an equilibrium due to the age bracket, you would have to know the age distribution in each case. In either case, the samples are clearly not independent (as likely the same individuals are being retested a year later).

 

I am not sure about the Corsica data, but to my knowledge the Chernobyl-related events were only reliably detected in a longer time series. While the effect started after around 4 years, the change was only reliably detectable when embedded in a longer time series. Note that the two numbers are not a study but just data collection. A study would have to explain the contribution of population change between the two time points (did people move out, were they the same children, now older, what about gender, how many were newborn, etc.).

 

One issue is of course the sampling, as in Fukushima many more tests are being conducted. Ideally, the same frequency data should have existed before the incident. Failing that, a crude way of normalization would be to perform similar screenings in another area with the same characteristics (minus the meltdown, obviously) and perform the same analysis. The important bit is to monitor the incidence growth over time (and again, you cannot properly estimate the distribution from two points) and then look whether there are significant differences (while controlling for potential differences in population structure).

 

It is always a dangerous thing to try to extract conclusions from a very limited data set. As they are continuously collecting, one could start doing a proper time-series analysis. The major limiting factor is the lack of an appropriate reference set, though.

 

Aside from that, thyroid specialists are a bit wary of the screening efforts, as they fear overdiagnosis and overtreatment. This is mostly because in many cases thyroid cancer does not progress, and many people die with it (often undiagnosed) rather than from it.

Edited by CharonY

Not necessarily. Thyroid cancer is one of those types where mortality is fairly low, even without treatment. What is recommended is that the tumors are monitored and only treated when necessary. Especially tiny nodules are recommended not to be biopsied, unless there are other factors that put the patient into a high-risk category.

Diagnosing more typically does not lead to a better prognosis in these cases. I have to say that I am not familiar with the data on young adults and how much that may affect overall outcome. But even in childhood cases of thyroid cancer, extensive monitoring is typically recommended first, afaik.


As both groups limit the age to 18, the number of detections would not increase over time in a stable situation. The probability of being affected increases with age, but the older individuals exit the observed group at 18, and young unaffected children enter the group.

 

The observed group is huge: hundreds of thousands. And the number of cancers is big as well, almost a hundred - enough for rock-solid statistics.

 

Because there was no study before the catastrophe, I don't compare the situations before and after. Just the two studies after the catastrophe show an obvious, quick and alarming rise in the number of cancers. That they are already visible can only mean that the peak will be worse.

 

Around Chernobyl the statistics for thyroid cancer are clear. They show a direct correlation with the radioiodine dose, and the number of cases made it obvious rather quickly.


Either you are misunderstanding me or I am misunderstanding you. However, you do not have independent variables here. The same children are getting screened a year later, and cancer risk is cumulative. Also, you would need a reference set to compare against. If you looked into any population with that methodology, you would see an increase over time, which may or may not reach stable values, depending on whether children leave or enter the area and how many are born vs. leaving the set at age 18. It does not mean that these factors affect the outcome, but any analysis would at minimum have to normalize for them.

 

You can also look at the problem as a time series, which is probably more appropriate given the structure of the data. What you describe is an increase from 12 to 86, which looks massive. But first of all it is only two data points (so just drawing a line through them is tricky), and you do not have a reference set. I.e. you would have to look at a different prefecture or area with the same screening methodology and a comparable composition of youths. Then, you would have to look at least at what the yearly growth rate would be.

 

And to reiterate: in Chernobyl it took a minimum of 4 years to see deviation from the norm, but a bit longer until it was certain that it was significant (aside from the immediate deaths from acute doses). It was by far not immediately noticeable. The first report I am aware of that mentioned something outside the norm was a short communication in Nature in 1992 (i.e. six years after the incident), but while it collected data, it was also lacking in statistical soundness. So while the numbers had started rising, the data (the sampling methodology was a bit lacking too, but it was more of a quick-and-dirty survey at that time) was still inconclusive. It took a while after that to put these values into statistical perspective.

 

One of the reasons is that, depending on sampling methodology and population structure, fluctuations are expected. Just a single year that is higher does not tell you much. But if the trend in a time series continues, it will start sticking out.

 

You are proposing a statistical design that is unsound, I am afraid. However, the whole time-series set is being published, and I am certain that people are looking into it. Does the number look high in isolation? Yes. But at this point it is barely above gut feeling, and that is why sound statistical models are needed to test it. The proposed design is not useful for that, though. The trouble with time series is that you need quite a few points. Population size ensures that your data within a time point is not total crap, but you need more time points to figure out trends and associations.

 

In the study for which I provided the DOI, so far no significant correlations were found (including thyroid issues in dependence on radiation), but I have not read the full report yet. I am currently not certain whether there are others out there, but most epidemiologists (well, those that I know, in any case) agree that it is too early to make any kind of assessment with any level of certainty at this point. But the good thing is that data collection is apparently better, so we may have studies out earlier than in the Chernobyl case.

Edited by CharonY

It is a population limited to 18 years in both samples. So without the consequences of a disaster, samples taken at two different times would show the same proportion - whether some people pertain to both samples or not.

 

This is not a matter of same population or not. In the healthy situation, that is, the steady state, sick people reaching 18 exit the sample, healthy younger people enter the sample, and the observation is constant up to the statistical fluctuations.

 

Here we see a big increase that is statistically extremely significant. It does not need an unaffected population as reference to see an increase.


 

It is a population limited to 18 years in both samples. So without the consequences of a disaster, samples taken at two different times would show the same proportion - whether some people pertain to both samples or not.

 

Let me try to explain why that is not the case. Assume that you have cancer cases at the first time point among children below 18. Assume further that one year later you measure the same cohort. Unless all your cases were 18 at the first measurement, you will count them again and add new cases to the set, as cancer risk is cumulative with age. Your assumption of equilibrium (which does not matter anyway with that set) is flawed, as obviously only one group leaves the set: those who were 18 at the first measurement. Every younger group stays.

 

Unless you account for that (and other factors, such as births, etc.) you cannot blindly use a trend built on it.

Again, the data set as described is not independent.
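To illustrate the concern with a toy model: if the reported figure is a running total of every case the screening program has detected so far, it grows between rounds even when the underlying hazard never changes. Every number below is a hypothetical assumption, not taken from the actual survey reports.

```python
# Toy model of cumulative screening totals (all rates hypothetical)
r = 1.0e-6            # assumed constant incidence per person-year
round1_pop = 174_000  # children screened in the first round
round2_pop = 298_577  # larger population covered by the second round

# First round detects prevalent cases: ~9 years average exposure
# for a uniform 0-18 age spread
round1 = round1_pop * r * 9

# The second-round running total adds new cases among retained children
# (~1.5 years between rounds) plus prevalent cases among newly screened ones
new_in_interval = round1_pop * r * 1.5
newly_screened = (round2_pop - round1_pop) * r * 9
round2_total = round1 + new_in_interval + newly_screened

print(round2_total > round1)  # True: the total grows at constant hazard
```

The point of the sketch is only the inequality, not the magnitudes: a cumulative count and a cross-sectional prevalence behave differently between rounds.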

 

Once I have more time I could walk you through the statistical errors you are committing, if you are really interested.


Maybe a simpler example will explain better why a prevalence doesn't increase over time in an equilibrium situation.

 

Let's take a village with a population constant over generations. Someone studies the group of inhabitants aged 18 or less. His study topic is just the proportion of people over 10 in this group.

 

Obviously, someone attaining 10 will not regress under 10 - just like someone diagnosed with thyroid cancer will not improve spontaneously. Nevertheless, apart from fluctuations, the study made 5 years later, or 20 years later, or 100 years later, will find the same proportion of young people above 10 in the 0-18 group: approximately 8/18.

 

This is because people near 18 leave the study group, just like around Fukushima, and new people enter the group, and this compensates the evolution that lets individuals pass from the <10 subgroup to the >10 one. Whether the successive study groups overlap (a few years apart) or not (two generations apart) doesn't change the result.
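The village example can be written as a deterministic sketch, assuming one equal-sized cohort born per year: a survey in any year finds the same proportion of over-10s among the 0-18 group, whether or not the surveyed groups overlap.

```python
# Deterministic sketch: one equal-sized cohort is born each year and
# everyone leaves the study group after age 18 (both are assumptions)
def over_10_fraction(survey_year, births_per_year=1_000):
    """Proportion of over-10s among the 0-18 group in a given survey year."""
    ages = [survey_year - birth_year
            for birth_year in range(survey_year - 18, survey_year + 1)
            for _ in range(births_per_year)]
    return sum(age > 10 for age in ages) / len(ages)

# Surveys 5, 20 or 100 years apart all find the same proportion, about 8/19,
# whether or not the successive study groups overlap
for year in (2000, 2005, 2020, 2100):
    print(year, over_10_fraction(year))
```

Note this only models a steady state with a fixed age structure; it says nothing about whether the real screening rounds satisfy those assumptions.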

 

Exactly the same way, the proportion of people under 18 affected by a thyroid cancer does not increase naturally over time. The observed increase needs a new cause: the radioiodine pollution from the nuclear catastrophe at Fukushima Dai-ichi.

 

I hope this is now clear enough to everybody.

 

---------------------------------------------------------

 

Thyroid cancer isn't a nice thing either. It is generally well cured, but by removing the thyroid altogether, which implies treatment for the rest of one's life. Also, this study finds that more aggressive cancers result from the same exposure:

http://www.ucsf.edu/news/2014/10/120011/radiation-exposure-linked-aggressive-thyroid-cancers

 

---------------------------------------------------------

 

"No additional thyroid cancers observed in the first 4 years around Chernobyl" is a lie. The numbers did increase - they were just too small to be convincing, 2/yr becoming 4/yr and 5/yr.

 

While such small numbers do not prove a definite increase, they can even less exclude an increase. The experience of Chernobyl certainly cannot serve to dismiss a relationship between the early increase around Fukushima and the radioactive pollution.

 

---------------------------------------------------------

 

"No statistics in Japan before the catastrophy" was an other lie. It was surprising enough, and here comes the answer, paper there citing IARC itself:

http://www.scielo.br/scielo.php?pid=s0004-27302007000500012&script=sci_arttext

http://www.scielo.br/img/revistas/abem/v51n5/a11tab1f.gif

 

Parkin DM, Kramárová E, Draper GJ, Masuyer E, Michaelis J, Neglia J, et al. (eds). International incidence of childhood cancer. IARC Scientific Publication No 144. Lyon: IARC Press, 1999.

 


 

It gives incidences (new cases per year) for various countries including Japan, where it is rather low: 1.1ppm/yr between 10-14yr and 0.1ppm/yr under 10yr. Differences between the countries aren't huge either, so we could extrapolate to 14-18yr using detailed data from another country if necessary.

 

Converting the incidences into a prevalence, assuming an equal distribution of people among the ages 0-18:

1.1ppm*(1+2+3+4+5+6+7+8+9)+0.1ppm*(10+11+12+13+14+15+16+17+18+19)=64ppm

so the pre-catastrophe statistics would predict 19 cases among 298,577 children around Fukushima, not the observed 86.
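For reference, a short script reproducing this arithmetic exactly as written (the weights 1+2+...+9 and 10+...+19 read as the years a case arising at a given age remains in the under-19 group); the same figures also give the "15 standard deviations" quoted later in the thread:

```python
from math import sqrt

# The post's incidence-to-prevalence conversion, as written
prevalence_ppm = 1.1 * sum(range(1, 10)) + 0.1 * sum(range(10, 20))
expected = prevalence_ppm * 1e-6 * 298_577
z = (86 - expected) / sqrt(expected)

print(f"{prevalence_ppm:.0f} ppm -> {expected:.0f} expected cases")  # 64 ppm -> 19
print(f"86 observed is {z:.0f} sigma above")                         # ~15 sigma
```

This only reproduces the post's own calculation; it does not address the objections about sample independence raised elsewhere in the thread.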

Edited by Enthalpy

 

Obviously, someone attaining 10 will not regress under 10 - just like someone diagnosed with thyroid cancer will not improve spontaneously. Nevertheless, apart from fluctuations, the study made 5 years later, or 20 years later, or 100 years later, will find the same proportion of young people above 10 in the 0-18 group: approximately 8/18.

 

You are missing the point that you are comparing data merely a year apart: how much resampling is being done in that group? Unless you can answer that, your assertions are simply wrong. But as I do not see any interest in discussing the issue, I will assume that the discussion will lead nowhere. Likewise for the blatant misunderstanding of the Chernobyl cases.

 

 

While such small numbers do not prove a definite increase, they can even less exclude an increase. The experience of Chernobyl certainly cannot serve to dismiss a relationship between the early increase around Fukushima and the radioactive pollution.

 

This nicely sums up your basic misunderstanding of what statistical analyses are for. It is clear that you have no real interest in learning how a proper analysis could be done, and your use of spurious associations (and that is generous, considering that there is no statistic whatsoever) indicates that you are just looking for someone to support your assertions.

 

I guess this makes the thread useless until more papers are published (such as Watanobe et al., PLoS One 2014 Dec 4;9(12), or Shibuya et al., Lancet 383:9932, p. 1883-1884, who reiterate some of the points I made; or Tronke and Mykola, Thyroid 2014, 24:10, p. 1547-1548, with a very preliminary assessment in comparison with Chernobyl; but seriously, why conduct a proper study when a few numbers and gut feeling can work as well?)

 

Also, to whom are you attributing the quotes that you considered to be lies?

Edited by CharonY

They've done a comparison of thyroid anomalies with children far away from Fukushima, and found that there is no indication the Fukushima cases were caused by the accident.

 

The three tested prefectures, far from Fukushima, were Nagasaki, Aomori and Yamanashi. While the percentage of Fukushima children with detectable nodules/cysts was 41.2%, the combined percentage found in the other three prefectures was 56.6%! Further, while 0.6% of the Fukushima children with the anomalies were considered worthy of further testing, the other three prefectures had a rate of over 1%.

 

http://www.hiroshimasyndrome.com/fukushima-child-thyroid-issue.html

 

 

Also, there's this

 

If you're going to screen that many children, you're going to find more cases than you normally [would], because you're looking for something. I suspect if you took the same number of children in Montana and did the same [screening], you'd probably find a similar ratio.

 

http://news.nationalgeographic.com/news/2014/03/140313-fukushima-nuclear-accident-cancer-cluster-thyroid-chernobyl/

 

IOW, they're screening everyone, not just getting statistics from people sick enough to get medical attention.


I cited it rather than just doing the calculation for two reasons.

First, I'm lazy.

Second, the wiki page explains why it is very difficult to provide a meaningful comparison.

 

Be very wary of drawing any strong conclusions from the data sets.


I cited it rather than just doing the calculation for two reasons.

First, I'm lazy.

Second, the wiki page explains why it is very difficult to provide a meaningful comparison.

 

Be very wary of drawing any strong conclusions from the data sets.

 

 

Any mention of radiation often brings on panic and hyperbole; the data, while not reassuring, is a rough comparison...


One of the extremely tricky bits is of course to correlate the total radiation, absorbed radiation and biological damage (which also has to take source into account). That is why we have all these different measures (becquerel/sievert/gray), which can confuse matters a lot.

 

That being said, Hiroshima was estimated at 8-11 YBq; Fukushima (according to Steinhauser et al., Science of The Total Environment 470-471, 2014, p. 800-817) at 520 PBq. But again, that alone does not allow assessment of biological effects; many other parameters (including timing, localization and spread of release) would severely affect the actual radiation damage.

 

Radiation damage is simply not easy to assess at all and any short-term conclusions would have to be taken with a grain of salt until some larger longitudinal studies are available.

Edited by CharonY

 

You are missing the point that you are comparing data merely a year apart, how much resampling is being done in that group? Unless you can answer that your assertions are simply wrong. But as I do not see any interest in discussing the issue I will assume that the discussion will lead nowhere. Likewise, the blatant misunderstanding of the Chernobyl cases. [...]

 

Your blunder of claiming that prevalence always increases shows you don't understand statistics.

Suggesting that you do and I don't can only worsen the impression you make on readers.

May I politely suggest that you use more modest language?

They've done a comparison of thyroid anomalies with children far away from Fukushima, and found that there is no indication the Fukushima cases were caused by the accident.

 

Yes, it is one of the studies I checked, and its opinion radically differs.

 

But on the other hand, the number of thyroid cancers among children around Fukushima does increase quickly and is way above the pre-catastrophe figure in Japan, and way above the figures in other countries.

 

So which one is right? Statistical noise can't explain the differences. At least, are all studies in good faith?

[...] Hiroshima was estimated to be 8-11YBq; Fukushima (according to Steinhauser et al Science of The Total Environment 470–471, 2014, P 800–817) 520 PBq. [...]

 

The YBq at Hiroshima include all radioelements, especially those with extremely short half-lives which, acting only very near the explosion center, add no damage to people. It is something that the late director of the plant mitigated: his decisions delayed the explosion of the reactors, so the sub-second to day-lived radioelements had already disappeared.

 

Comparing the radiocaesiums, Fukushima has released 100 to 5,000 times (the lower figure convinces me more) more radioactivity than Hiroshima. Logically enough, since a few kg of uranium react in a bomb, while a reactor accumulates products from hundreds of kg of uranium.


Yes, it is one of the studies I checked, and its opinion radically differs.

 

It'd be nice if you linked to these studies, so that everyone could check.

 

But on the other hand, the number of thyroid cancers among children around Fukushima does increase quickly and is way above the pre-catastrophy figure in Japan, and way above the figures in other countries.

 

So which one is right? Statistical noise can't explain the differences. At least, are all studies in good faith?

Not noise, per se, but you may be comparing apples to oranges. You can't compare Fukushima to anywhere else, because the sampling isn't being done in the same way. And within Fukushima, if the samples overlap, then larger positive numbers later are to be expected.


In addition to what swansont said, the issue is that we are dealing with an incomplete dataset. I.e. we have partially dependent (resampled) as well as independent data. A proper analysis would have to take that into account. Note that so far no approach has been suggested, much less provided.

But if the OP had proposed a statistical test, it would have been easy to demonstrate why its assumptions are violated - unless, of course, it were a proper method for incomplete sets, such as the one proposed by Choi and Stablein (1982), although one would need more information on the dataset, e.g. how many repeat samples the set contained.

 

A comparison with a set outside of Fukushima would only make sense if the same proportion of the population had been screened (also a point in the Lancet paper). I also echo the request for publications. I believe I actually saw one, but IIRC it was a rather limited data set, and now I am not even sure whether it was a proper paper at all (as I cannot find it).

Edited by CharonY

The pre-catastrophe data shows very little difference between Japan in general and Osaka (and far less difference between countries than expected). This gives me confidence that it applies to Fukushima as well - unless, of course, the power plant had already polluted the prefecture before.

 

The partial overlap of the samples is not a concern, because both samples begin at 0 years and end at 18 years. In the village example, you get the same proportion whether you repeat the study 2 years, 10 years or 50 years later.

 

Well, here we have discrepancies of 15 standard deviations between the pre-catastrophe and the 2014 studies - nothing to fine-tune with subtle arguments.

