Misleading statistics

In his book, Welch gives several examples of how the public is misled by
some (true) statistics. I will review some of these examples. They all concern cancer
screening but there are of course many other situations where these examples apply.


Relative versus absolute risk. Out of 1,000 women aged 50, 6 will die of breast
cancer in the next 10 years. This is an absolute risk. Proponents of mammography
claim that only 4 of those women will die if they undergo regular mammograms.
But their preferred way to present this is to say that regular mammograms reduce
breast cancer deaths by 1/3. Switching to relative risk (that is, expressing the
reduction as a fraction of the baseline risk) gives the impression of a big gain:
a 1/3 reduction looks quite impressive. In reality the gain cannot be big, since
the absolute risk (6 deaths per 1,000 women over 10 years) is small to begin with:
the absolute reduction is only 2 deaths per 1,000 women.
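The arithmetic behind the two framings can be sketched in a few lines, using the numbers from the text:

```python
# Numbers from the text: per 1,000 women aged 50, 6 breast cancer
# deaths over 10 years without screening, 4 with screening.
n = 1000
deaths_without = 6
deaths_with = 4

absolute_risk = deaths_without / n                                   # 6 per 1,000
absolute_reduction = (deaths_without - deaths_with) / n              # 2 per 1,000
relative_reduction = (deaths_without - deaths_with) / deaths_without # 1/3

print(f"Absolute risk:       {absolute_risk:.1%}")
print(f"Absolute reduction:  {absolute_reduction:.1%}")
print(f"Relative reduction:  {relative_reduction:.0%}")
```

The same two deaths averted per 1,000 women can be reported as a 0.2% absolute reduction or a 33% relative reduction; only the framing differs.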

Counting breast cancer deaths. This one looks trivial: either a death is a breast
cancer death or it is not. Actually, things are more complicated than they look. What if
a woman dies during treatment? There are certainly some deaths caused by the treatment
itself. They are usually not attributed to breast cancer (strictly speaking they are
not cancer deaths), but then the reduction in breast cancer deaths may be artificial:
we are moving a death from the breast cancer column into another column while the
total death rate remains constant! The problem, again, is that because we are dealing
with very small numbers, every decision about what to count matters a great deal.
This is also why statisticians have been arguing for decades over
whether breast cancer screening works or not. But even those in favor of mammography
agree that the benefit is small. See Berry's paper in the International Journal of
Epidemiology, vol. 33 (2004), page 68, and Chapter 9 in Welch's book.
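The bookkeeping point can be made concrete with hypothetical numbers (the counts below are illustrative, not from the text):

```python
# Hypothetical illustration: reattributing one death from the
# "breast cancer" column to a "treatment complication" column
# lowers breast cancer mortality but leaves total mortality unchanged.
before = {"breast cancer": 6, "other causes": 94}
after = {"breast cancer": 5, "other causes": 95}  # one death reclassified

print("Breast cancer deaths:", before["breast cancer"], "->", after["breast cancer"])
print("Total deaths:", sum(before.values()), "->", sum(after.values()))
```

The cause-specific count drops while the total is untouched, which is exactly why small differences in breast cancer mortality are so sensitive to counting rules.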

Five-year survival. This one is a nice trick. Almost all of Chapter 8 in Welch's book
is concerned with five-year survival statistics and why they are misleading. He gives
many examples; I will concentrate on the comparison between the U.S. and the U.K. for
prostate cancer.
In the early 1990s the five-year survival rate was about 40% in the
United Kingdom. In the U.S., on the other hand, the survival rate was 90%. Obvious
conclusion? Prostate cancer screening saves lives (screening was in place in the U.S. but
not in the U.K.). Little problem: U.S. mortality due to prostate cancer was HIGHER
(not lower!) than U.K. mortality! So what is going on? The only reason the five-year survival
was higher in the U.S. was BECAUSE of the screening program. In the U.S., perfectly healthy
men aged 50 and up were told they had prostate cancer although they had no symptoms of the disease.
In the U.K., since there was no screening, the only ones who entered the statistics were
people who were already sick. In other words, we are comparing two different
populations: in the U.S., all men 50 and above (sick or healthy); in the U.K., only sick people.
Of course, the five-year survival of the sick is not as good as the five-year survival of the
general population! This is why the only statistic that matters is the prostate cancer death rate.
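A small sketch with hypothetical cohort sizes (chosen to reproduce the 40% and 90% figures from the text; the counts themselves are invented) shows how adding screen-detected healthy men inflates five-year survival without changing the number of deaths:

```python
# Hypothetical cohorts: the same 100 symptomatic (sick) men in both
# countries, of whom 40 survive five years.
sick = 100
sick_survivors = 40

# U.K.-style statistic: no screening, so only symptomatic cases count.
uk_survival = sick_survivors / sick  # 40/100

# U.S.-style statistic: screening adds 500 screen-detected men with no
# symptoms who were never going to die of the disease; all survive.
overdiagnosed = 500
us_survival = (sick_survivors + overdiagnosed) / (sick + overdiagnosed)  # 540/600

# Prostate cancer deaths are identical in both scenarios.
deaths = sick - sick_survivors

print(f"U.K. five-year survival: {uk_survival:.0%}")
print(f"U.S. five-year survival: {us_survival:.0%}")
print(f"Deaths in either case:   {deaths}")
```

Survival jumps from 40% to 90% purely because the denominator was padded with men who were never at risk; the death count, the statistic that actually matters, is the same in both columns.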