This post arose from a recent meeting at the Royal Society. It was organised by Julie Maxton to discuss the application of statistical methods to legal problems. I found myself sitting next to an Appeal Court Judge who wanted more explanation of the ideas. Here it is.
Some preliminaries
The papers that I wrote recently were about the problems associated with the interpretation of screening tests and tests of significance. They don’t allude to legal problems explicitly, though the problems are the same in principle. They are all open access. The first appeared in 2014:
http://rsos.royalsocietypublishing.org/content/1/3/140216
Since the first version of this post, March 2016, I’ve written two more papers and some popular pieces on the same topic. There’s a list of them at http://www.onemol.org.uk/?page_id=456.
I also made a video for YouTube of a recent talk.
In these papers I was interested in the false positive risk (also known as the false discovery rate) in tests of significance. It turned out to be alarmingly large. That has serious consequences for the credibility of the scientific literature. In legal terms, the false positive risk means the proportion of cases in which, on the basis of the evidence, a suspect is found guilty when in fact they are innocent. That has even more serious consequences.
Although most of what I want to say can be said without much algebra, it would perhaps be worth getting two things clear before we start.
The rules of probability
(1) To get any understanding, it’s essential to understand the rules of probability and, in particular, the idea of conditional probabilities. One source is my old book, Lectures on Biostatistics (now free); the account on pages 19 to 24 gives a pretty simple (I hope) description of what’s needed. Briefly, a vertical line is read as “given”, so Prob(evidence | not guilty) means the probability that the evidence would be observed given that the suspect was not guilty.
(2) Another potential confusion in this area is the relationship between odds and probability. The relationship between the probability of an event and the odds on that event can be illustrated by an example. If the probability of being right-handed is 0.9, then the probability of not being right-handed is 0.1. That means that 9 people out of 10 are right-handed, and one person in 10 is not. In other words, for every person who is not right-handed there are 9 who are right-handed. Thus the odds that a randomly-selected person is right-handed are 9 to 1. In symbols this can be written
\[ \mathrm{probability=\frac{odds}{1 + odds}} \]
In the example, the odds on being right-handed are 9 to 1, so the probability of being right-handed is 9 / (1+9) = 0.9.
Conversely,
\[ \mathrm{odds =\frac{probability}{1 - probability}} \]
In the example, the probability of being right-handed is 0.9, so the odds of being right-handed are 0.9 / (1 – 0.9) = 0.9 / 0.1 = 9 (to 1).
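These two conversions can be checked with a few lines of code (Python is used here purely for illustration):

```python
def prob_to_odds(p):
    """Convert the probability of an event to the odds on it ('x to 1')."""
    return p / (1 - p)

def odds_to_prob(odds):
    """Convert the odds on an event ('odds to 1') to its probability."""
    return odds / (1 + odds)

# The right-handedness example: probability 0.9 corresponds to odds of 9 to 1.
print(prob_to_odds(0.9))  # 9 (to 1), up to floating-point rounding
print(odds_to_prob(9))    # 0.9
```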
With these preliminaries out of the way, we can proceed to the problem.
The legal problem
The first problem lies in the fact that the answer depends on Bayes’ theorem. Although that was published in 1763, statisticians are still arguing about how it should be used. In fact, whenever it’s mentioned, statisticians tend to revert to internecine warfare, and forget about the user.
Bayes’ theorem can be stated in words as follows
\[ \mathrm{\text{posterior odds ratio} = \text{prior odds ratio} \times \text{likelihood ratio}} \]
“Posterior odds ratio” means the probability that the person is guilty relative to the probability that they are innocent, in the light of the evidence, and that’s clearly what one wants to know. The “prior odds” are the odds that the person was guilty before any evidence was produced, and that is the really contentious bit.
Sometimes the need to specify the prior odds has been circumvented by using the likelihood ratio alone, but, as shown below, that isn’t a good solution.
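In code, the odds form of Bayes’ theorem is a one-liner. The numbers below are arbitrary illustrations, but they make the point: the posterior odds equal the likelihood ratio only when the prior odds are 1.

```python
def posterior_odds(prior_odds, likelihood_ratio):
    """Bayes' theorem in odds form: posterior odds = prior odds x likelihood ratio."""
    return prior_odds * likelihood_ratio

# With prior odds of 1 (a 50:50 chance of guilt before any evidence),
# the posterior odds simply equal the likelihood ratio...
print(posterior_odds(1, 10))  # 10
# ...but with a prior of, say, 1 to 99 against, the same evidence
# still leaves the odds about 10 to 1 against guilt.
print(posterior_odds(1 / 99, 10))
```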
The analogy with the use of screening tests to detect disease is illuminating.
Screening tests
A particularly straightforward application of Bayes’ theorem is in screening people to see whether or not they have a disease. It turns out, in many cases, that screening gives a lot more wrong results (false positives) than right ones. That’s especially true when the condition is rare (the prior odds that an individual suffers from the condition are small). The process of screening for disease has a lot in common with the screening of suspects for guilt. It matters because false positives in court are disastrous.
The screening problem is dealt with in sections 1 and 2 of my paper, or on this blog (and here). A bit of animation helps the slides, so you may prefer the YouTube version.
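The arithmetic of the screening problem is simple enough to sketch here. The numbers below are purely illustrative (they are not taken from the paper): a condition with a prevalence of 1%, and a test with 95% sensitivity and 95% specificity.

```python
def false_positive_fraction(prevalence, sensitivity, specificity):
    """Fraction of positive screening results that are false positives."""
    true_positives = prevalence * sensitivity
    false_positives = (1 - prevalence) * (1 - specificity)
    return false_positives / (true_positives + false_positives)

# A rare condition and an apparently good test: most positives are still false.
print(false_positive_fraction(0.01, 0.95, 0.95))  # about 0.84
```

Even with a test that is right 95% of the time in both directions, about 84% of the positives are false, simply because the condition is rare.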
The rest of my paper applies similar ideas to tests of significance. In that case the prior probability is the probability that there is in fact a real effect, or, in the legal case, the probability that the suspect is guilty before any evidence has been presented. This is the slippery bit of the problem both conceptually, and because it’s hard to put a number on it.
But the examples below show that to ignore it, and to use the likelihood ratio alone, could result in many miscarriages of justice.
In the discussion of tests of significance, I took the view that it is not legitimate (in the absence of good data to the contrary) to assume any prior probability greater than 0.5. To do so would presume that you knew the answer before any evidence was presented. In the legal case, a prior probability of 0.5 would mean assuming that there was a 50:50 chance that the suspect was guilty before any evidence was presented. A 50:50 probability of guilt before the evidence is known corresponds to a prior odds ratio of 1 (to 1). If that were true, the likelihood ratio would be a good way to represent the evidence, because the posterior odds ratio would be equal to the likelihood ratio.
It could be argued that 50:50 represents some sort of equipoise, but in the example below it is clearly too high, and if the prior is less than 50:50, use of the likelihood ratio alone runs a real risk of convicting an innocent person.
The following example is modified slightly from section 3 of a book chapter by Mortera and Dawid (2008). Philip Dawid is an eminent statistician who has written a lot about probability and the law, and he’s a member of the legal group of the Royal Statistical Society.
My version of the example removes most of the algebra, and uses different numbers.
Example: The island problem
The “island problem” (Eggleston 1983, Appendix 3) is an imaginary example that provides a good illustration of the uses and misuses of statistical logic in forensic identification.
A murder has been committed on an island, cut off from the outside world, on which 1001 (= N + 1) inhabitants remain. The forensic evidence at the scene consists of a measurement, x, on a “crime trace” characteristic, which can be assumed to come from the criminal. It might, for example, be a bit of the DNA sequence from the crime scene.
Say, for the sake of example, that the probability of a random member of the population having characteristic x is P = 0.004 (i.e. 0.4% ), so the probability that a random member of the population does not have the characteristic is 1 – P = 0.996. The mainland police arrive and arrest a random islander, Jack. It is found that Jack matches the crime trace. There is no other relevant evidence.
How should this match evidence be used to assess the claim that Jack is the murderer? We shall consider three arguments that have been used to address this question. The first is wrong. The second and third are right. (For illustration, we have taken N = 1000, P = 0.004.)
(1) Prosecutor’s fallacy
Prosecuting counsel, arguing according to his favourite fallacy, asserts that the probability that Jack is guilty is 1 – P , or 0.996, and that this proves guilt “beyond a reasonable doubt”.
The probability that Jack would show characteristic x if he were not guilty would be 0.4%, i.e. Prob(Jack has x | not guilty) = 0.004, so the probability that an innocent person would not show the characteristic is 1 – 0.004 = 0.996. The prosecutor presents this as the probability of guilt.
But 0.004 is Prob(evidence | not guilty), which is not what we want. What we need is the probability that Jack is guilty, given the evidence, Prob(Jack is guilty | Jack has characteristic x).
To mistake the latter for the former is the prosecutor’s fallacy, or the error of the transposed conditional.
Dawid gives an example that makes the distinction clear.
“As an analogy to help clarify and escape this common and seductive confusion, consider the difference between “the probability of having spots, if you have measles” – which is close to 1 – and “the probability of having measles, if you have spots” – which, in the light of the many alternative possible explanations for spots, is much smaller.”
(2) Defence counter-argument
Counsel for the defence points out that, while the guilty party must have characteristic x, he is not the only person on the island to have it. Among the remaining N = 1000 innocent islanders, 0.4% have characteristic x, so the number who have it will be NP = 1000 × 0.004 = 4. Hence the total number of islanders with the characteristic must be 1 + NP = 5. The match evidence means that Jack must be one of these 5 people, but does not otherwise distinguish him from the other members of the group. Since just one of the 5 is guilty, the probability that it is Jack is 1/5, or 0.2 – very far from being “beyond all reasonable doubt”.
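The defence counter-argument is just counting, and can be verified directly (N and P as in the example):

```python
N = 1000   # number of innocent islanders
P = 0.004  # frequency of characteristic x in the population

matching_innocents = N * P               # expected innocent matches: 4
total_matches = 1 + matching_innocents   # the culprit plus the innocent matches: 5
prob_jack_guilty = 1 / total_matches
print(prob_jack_guilty)  # 0.2
```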
(3) Bayesian argument
The probability of having characteristic x (the evidence) would be Prob(evidence | guilty) = 1 if Jack were guilty, but if Jack were not guilty it would be 0.4%, i.e. Prob(evidence | not guilty) = P. Hence the likelihood ratio in favour of guilt, on the basis of the evidence, is
\[ LR=\frac{\text{Prob(evidence } | \text{ guilty})}{\text{Prob(evidence }|\text{ not guilty})} = \frac{1}{P}=250 \]
In words, the evidence would be 250 times more probable if Jack were guilty than if he were innocent. While this seems strong evidence in favour of guilt, it still does not tell us what we want to know, namely the probability that Jack is guilty in the light of the evidence: Prob(guilty | evidence), or, equivalently, the odds of guilt relative to the odds of innocence, given the evidence.
To get that we must multiply the likelihood ratio by the prior odds on guilt, i.e. the odds on guilt before any evidence is presented. It’s often hard to get a numerical value for this. But in our artificial example, it is possible. We can argue that, in the absence of any other evidence, Jack is no more nor less likely to be the culprit than any other islander, so that the prior probability of guilt is 1/(N + 1), corresponding to prior odds on guilt of 1/N.
We can now apply Bayes’ theorem to obtain the posterior odds on guilt:
\[ \text {posterior odds} = \text{prior odds} \times LR = \left ( \frac{1}{N}\right ) \times \left ( \frac{1}{P} \right )= 0.25 \]
Thus the odds of guilt in the light of the evidence are 4 to 1 against. The corresponding posterior probability of guilt is
\[ \text{Prob(guilty } | \text{ evidence)}= \frac{1}{1+NP}= \frac{1}{1+4}=0.2 \]
This is quite small – certainly no basis for a conviction.
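The whole Bayesian calculation fits in a few lines, and reproduces the numbers above (N = 1000, P = 0.004):

```python
N, P = 1000, 0.004

likelihood_ratio = 1 / P                    # 250
prior_odds = 1 / N                          # every islander equally likely a priori
post_odds = prior_odds * likelihood_ratio   # 0.25, i.e. 4 to 1 against guilt
post_prob = post_odds / (1 + post_odds)     # 0.2, same as the defence argument
print(likelihood_ratio, post_odds, post_prob)
```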
This result is exactly the same as that given by the defence counter-argument (see above). That argument was simpler: it didn’t use Bayes’ theorem explicitly, though the theorem was implicit in it. The advantage of the defence argument is that it looks simpler; the advantage of the explicitly Bayesian argument is that it makes the assumptions clearer.
In summary, the prosecutor’s fallacy suggested, quite wrongly, that the probability that Jack was guilty was 0.996. The likelihood ratio was 250, which also seems to suggest guilt, but it doesn’t give us the probability that we need. In stark contrast, the defence counsel’s argument and, equivalently, the Bayesian argument gave the probability of Jack’s guilt as 0.2, or odds of 4 to 1 against guilt. The potential for wrongful conviction is obvious.
Conclusions.
Although this argument uses an artificial example that is simpler than most real cases, it illustrates some important principles.
(1) The likelihood ratio is not a good way to evaluate evidence, unless there is good reason to believe that there is a 50:50 chance that the suspect is guilty before any evidence is presented.
(2) In order to calculate what we need, Prob(guilty | evidence), we need numerical values for how common the characteristic x (the evidence) is in the whole population of possible suspects (a reasonable value might be estimated in the case of DNA evidence). We also need to know the size of that population. In the island example this was 1000, but in general it would be hard to pin down, and any answer might well be contested by an advocate who understood the problem.
These arguments lead to four conclusions.
(1) If a lawyer uses the prosecutor’s fallacy, (s)he should be told that it’s nonsense.
(2) If a lawyer advocates conviction on the basis of the likelihood ratio alone, (s)he should be asked to justify the implicit assumption that there was a 50:50 chance that the suspect was guilty before any evidence was presented.
(3) If a lawyer uses the defence counter-argument, or, equivalently, the version of the Bayesian argument given here, (s)he should be asked to justify the numerical values given to the prevalence of x in the population (P) and to the size of that population (N). A range of values of P and N could be used to provide a range of possible values of the final result, the probability that the suspect is guilty in the light of the evidence.
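Conclusion (3) can be made concrete by tabulating the posterior probability of guilt over a range of values of P and N. The ranges below are arbitrary illustrations; the point is how sensitive the answer is to the assumptions:

```python
def prob_guilty(N, P):
    """Posterior probability of guilt, 1/(1 + N*P), assuming a uniform prior over N+1 people."""
    return 1 / (1 + N * P)

for N in (100, 1000, 10000):
    for P in (0.001, 0.004, 0.01):
        print(f"N={N:>5}  P={P:.3f}  Prob(guilty | evidence) = {prob_guilty(N, P):.3f}")
```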
(4) The example that was used is the simplest possible case. For more complex cases it would be advisable to ask a professional statistician. Some reliable people can be found at the Royal Statistical Society’s section on Statistics and the Law.
If you do ask a professional statistician, and they present you with a lot of mathematics, you should still ask these questions about precisely what assumptions were made, and ask for an estimate of the range of uncertainty in the value of Prob(guilty | evidence) which they produce.
Postscript: real cases
Another paper by Philip Dawid, Statistics and the Law, is interesting because it discusses some recent real cases: for example the wrongful conviction of Sally Clark because of the wrong calculation of the statistics for Sudden Infant Death Syndrome.
On Monday 21 March, 2016, Dr Waney Squier was struck off the medical register by the General Medical Council because they claimed that she misrepresented the evidence in cases of Shaken Baby Syndrome (SBS).
This verdict was questioned by many lawyers, including Michael Mansfield QC and Clive Stafford Smith, in a letter. “General Medical Council behaving like a modern inquisition”
The latter has already written “This shaken baby syndrome case is a dark day for science – and for justice”.
The evidence for SBS is based on the existence of a triad of signs (retinal bleeding, subdural bleeding and encephalopathy). It seems likely that these signs will be present if a baby has been shaken, i.e. Prob(triad | shaken) is high. But this is irrelevant to the question of guilt. For that we need Prob(shaken | triad). As far as I know, the data needed to calculate what matters are just not available.
It seems that the GMC may have fallen for the prosecutor’s fallacy. Or perhaps the establishment won’t tolerate arguments. One is reminded, once again, of the definition of clinical experience: “Making the same mistakes with increasing confidence over an impressive number of years.” (from A Sceptic’s Medical Dictionary by Michael O’Donnell, BMJ Publishing, 1997).
Appendix (for nerds). Two forms of Bayes’ theorem
The form of Bayes’ theorem given at the start is expressed in terms of odds ratios. The same rule can be written in terms of probabilities. (This was the form used in the appendix of my paper.) For those interested in the details, it may help to define explicitly these two forms.
In terms of probabilities, the probability of guilt in the light of the evidence (what we want) is
\[ \text{Prob(guilty } | \text{ evidence)} = \text{Prob(evidence } | \text{ guilty)} \times \frac{\text{Prob(guilty)}}{\text{Prob(evidence)}} \]
In terms of odds ratios, the odds ratio on guilt, given the evidence (which is what we want) is
\[ \frac{ \text{Prob(guilty } | \text{ evidence)}} {\text{Prob(not guilty } | \text{ evidence)}} =
\left ( \frac{ \text{Prob(guilty)}} {\text{Prob(not guilty)}} \right )
\left ( \frac{ \text{Prob(evidence } | \text{ guilty)}} {\text{Prob(evidence } | \text{ not guilty)}} \right ) \]
or, in words,
\[ \text{posterior odds of guilt } =\text{prior odds of guilt} \times \text{likelihood ratio} \]
This is the precise form of the equation that was given in words at the beginning.
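For completeness, the equivalence is easy to see: write the probability form once with “guilty” and once with “not guilty”, and divide one by the other. The Prob(evidence) terms cancel, leaving the odds form:

\[ \frac{ \text{Prob(guilty } | \text{ evidence)}} {\text{Prob(not guilty } | \text{ evidence)}} = \frac{ \text{Prob(evidence } | \text{ guilty)} \; \text{Prob(guilty)}} { \text{Prob(evidence } | \text{ not guilty)} \; \text{Prob(not guilty)}} \]

which is the prior odds of guilt multiplied by the likelihood ratio.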
A derivation of the equivalence of these two forms is sketched in a document which you can download.
Follow-up
23 March 2016
It’s worth pointing out the following connection between the legal argument (above) and tests of significance.
(1) The likelihood ratio works only when there is a 50:50 chance that the suspect is guilty before any evidence is presented (so the prior probability of guilt is 0.5, or, equivalently, the prior odds ratio is 1).
(2) The false positive rate in significance testing is close to the P value only when the prior probability of a real effect is 0.5, as shown in section 6 of the P value paper.
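Point (2) can be illustrated with the standard calculation. If a fraction `prior` of tested hypotheses are real effects, tests are run at a significance level of 0.05, and the tests have a power of 0.8 (an assumption made purely for illustration, not a number taken from the paper), the fraction of “significant” results that are false positives is:

```python
def false_positive_rate(prior, alpha=0.05, power=0.8):
    """Fraction of significant results that are false positives."""
    false_positives = (1 - prior) * alpha
    true_positives = prior * power
    return false_positives / (false_positives + true_positives)

print(false_positive_rate(0.5))  # about 0.059: close to the 5% significance level
print(false_positive_rate(0.1))  # about 0.36: far larger when real effects are rare
```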
However there is another twist in the significance testing argument. The statement above is right if we take as a positive result any P < 0.05. If we want to interpret a value of P = 0.047 in a single test, then, as explained in section 10 of the P value paper, we should restrict attention to only those tests that give P close to 0.047. When that is done, the false positive rate is 26% even when the prior is 0.5 (and much bigger if the prior is smaller – see extra Figure). That justifies the assertion that if you claim to have discovered something because you have observed P = 0.047 in a single test, then there is a chance of at least 26% that you’ll be wrong. Is there, I wonder, any legal equivalent of this argument?
It’s good to see the BMJ joining the campaign for free speech (only a month or two behind the blogs). The suing of Simon Singh for defamation by the British Chiropractic Association has stirred up a hornet’s nest that could (one hopes) change the law of the land, and destroy chiropractic altogether. The BMJ’s editor, Fiona Godlee, has a fine editorial, Keep the libel laws out of science. She starts “I hope all readers of the BMJ are signed up to organised scepticism” and says
“Weak science sheltered from criticism by officious laws means bad medicine. Singh is determined to fight the lawsuit rather than apologise for an article he believes to be sound. He and his supporters have in their sights not only the defence of this case but the reform of England’s libel laws.”
Godlee refers to equally fine articles by the BMJ’s Deputy Editor, Tony Delamothe, Thinking about Charles II“, and by Editor in Chief, Harvey Marcovitch, “Libel law in the UK“.
The comments on Godlee’s editorial show strong support (apart from one from the infamous quantum fantasist, Lionel Milgrom). But there was one that slightly surprised me, from Trisha Greenhalgh, author of the superb book “How to Read a Paper”. She comments
“the use of the term ‘bogus’ seems both unprofessional and unscholarly. The argument would be stronger if expressed in more reserved terms”
That set me thinking, not for the first time, about the difference between journalism and scholarship. I can’t imagine ever using a word like ‘bogus’ in a paper about single ion channels. But Singh was writing in a newspaper, not in a scientific paper. Even more to the point, his comments were aimed at people who are not scholars and who, quite explicitly reject the normal standards of science and evidence. The scholarly approach has been tried for centuries, and it just doesn’t work with such people. I’d defend Singh’s language. It is the only way to have any effect. That is why I sent the following comment.
The ultimate irony is that the comment was held up by the BMJ’s lawyers, and has still to appear.
Thanks for an excellent editorial. I doubt that it’s worth replying to Lionel Milgrom, whose fantasy physics has been totally demolished by real physicists. Trisha Greenhalgh is, though, someone whose views I’d take very seriously. She raises an interesting question when she says “bogus” is an unprofessional word to use. Two things seem relevant. First, there is little point in writing rational scholarly articles for a group of people who do not accept the ordinary rules of evidence or scholarship. We are dealing with fantasists. Worse still, we are dealing with fantasists whose income depends on defending their fantasies. You can point out to your heart’s content that “subluxations” are a figment of the chiropractors’ imagination, but they don’t give a damn. They aren’t interested in what’s true and what isn’t. Throughout my lifetime, pharmacologists and others have been writing scholarly articles about how homeopathy and other sorts of alternative medicine are bogus. All this effort had little effect. What made the difference was blogs and investigative journalism. When it became possible to reveal leaked teaching materials that taught students that “amethysts emit high yin energy”, and to name and shame the vice-chancellors who allow that sort of thing to happen (in this case Prof Geoffrey Petts of Westminster University), things started to happen. In the last few years all five “BSc” degrees in homeopathy have closed, and that is undoubtedly a consequence of the activities of bloggers who can assess evidence but who work more like investigative journalists. When the BCA released, 15 months after the event, its “plethora of evidence”, a semi-organised effort by a group of bloggers produced, in less than 24 hours, thoroughly scholarly analyses of all of them (there is a summary here). As the editorial says, they didn’t amount to a hill of beans. They also pointed out the evidence that was omitted by the BCA. The conventional press just followed the bloggers.
I find it really rather beautiful that a group of people who have other jobs to do spent a lot of time doing these analyses, unpaid, in their own time, simply to support Singh, because they believed it was the right thing to do. Simon Singh has analysed the data coolly in his book. But in the case that gave rise to the lawsuit he was writing in a newspaper. It was perfectly clear from the context what ‘bogus’ meant, but Mr Justice Eady (aided by a disastrous law) chose to ignore entirely the context and the question of truth. The description ‘bogus’, as used by Singh, seems to be entirely appropriate for a newspaper article. To criticise him for using “unprofessional” language is inappropriate because we are not dealing with professionals. At the heart of the problem is the sort of stifling political correctness that has resulted in quacks being referred to as “professions” rather than fantasists and fraudsters [of course I use the word fraudster with no implication that it necessarily implies conscious lying]. At least there are some laughs to be had from the whole sorry affair. Prompted by that prince among lawyers known as Jack of Kent, there was a new addition to my ‘Patients’ Guide to Magic Medicine’, as featured in the Financial Times.
It is, perhaps, misplaced political correctness that lies at the heart of the problem. Who can forget the letter from Lord Hunt, while he was at the Department of Health, in which he described “psychic surgery” (one of the best known fraudulent conjuring tricks) as a “profession”.
Follow-up
Two days later, the comment has appeared in the BMJ at last. But it has been altered a bit.
Unprofessional language is appropriate when dealing with unprofessional people. Thanks for an excellent editorial. I doubt that it’s worth replying to Lionel Milgrom, whose fantasy physics has been totally demolished by real physicists. Trisha Greenhalgh is, though, someone whose views I’d take very seriously. She raises an interesting question when she says “bogus” is an unprofessional word to use. Two things seem relevant. First, there is little point in writing rational scholarly articles for a group of people who do not seem to accept the ordinary rules of evidence or scholarship. You can point out to your heart’s content that “subluxations” are a figment of the chiropractors’ imagination, but they don’t give a damn. Throughout my lifetime, pharmacologists and others have been writing scholarly articles about how homeopathy and other sorts of alternative medicine are bogus. All this effort had little effect. What made the difference was blogs and investigative journalism. When it became possible to reveal leaked teaching materials that taught students that “amethysts emit high yin energy”, and to name and shame the vice-chancellors who allow that sort of thing to happen (in this case Prof Geoffrey Petts of Westminster University), things started to happen. In the last few years all five “BSc” degrees in homeopathy have closed, and that is undoubtedly a consequence of the activities of bloggers who can assess evidence but who work more like investigative journalists. When the BCA released, 15 months after the event, its “plethora of evidence”, a semi-organised effort by a group of bloggers produced, in less than 24 hours, thoroughly scholarly analyses of all of them (there is a summary here). As the editorial says, they didn’t amount to a hill of beans. They also pointed out the evidence that was omitted by the BCA. The conventional press just followed the bloggers.
I find it really rather beautiful that a group of people who have other jobs to do spent a lot of time doing these analyses, unpaid, in their own time, simply to support Singh, because they believed it was the right thing to do. Simon Singh has analysed the data coolly in his book. But in the case that gave rise to the lawsuit he was writing in a newspaper. It was perfectly clear from the context what ‘bogus’ meant, but Mr Justice Eady (aided by a disastrous law) chose to ignore entirely the context. The description ‘bogus’, as used by Singh, seems to be entirely appropriate for a newspaper article. To criticise him for using “unprofessional” language is inappropriate because we are not dealing with professionals. At least there are some laughs to be had from the whole sorry affair. Prompted by that prince among lawyers known as Jack of Kent, there was a new addition to my ‘Patients’ Guide to Magic Medicine’, as featured in the Financial Times.
Here are the changes that were made. Hmm, very interesting.
I’m a bit late on this one, but better late than never.
The opinionated and ill-informed actress turned talk-show host, Jeni Barnett, spent an hour or so endangering your children (and hers) with what must surely be one of the worst ever accounts of measles vaccination.
[Chart from BBC report]
She was abominably rude to a well-informed nurse who phoned in to try to inject some sense into the conversation.
The LBC tried to stop Ben Goldacre from publicising this horrific show by legal action.
Blogs are the new journalism. The response has been wonderful. People of all ages sat up late into the night transcribing the entire broadcast. Unlike the doubtless highly-paid actress, they did it as a public service. They were not paid by anyone. It is all rather beautiful. Within a day of the legal notice being sent to Goldacre, the offensive broadcast had spread like wildfire over the web.
The result of all this hard work is that if you type ‘Jeni Barnett MMR’ into Google, every item but one on the first page links to the sites that are highly critical of Barnett’s irresponsible and ill-mannered rant (at 7 am on 7 Feb).
You can listen to the entire broadcast here. Or read the entire transcript here.
The many people who have put work into this effort are listed, for example, on Ben Goldacre’s own site.
Holfordwatch lists many links, and also lists previous attempts of lawyers to suppress science.
When will people learn that lawyers are not the proper way to settle matters of truth and falsehood?
Dice, n . Small polka-dotted cubes of ivory, constructed like a lawyer to lie on any side, but commonly the wrong one. [ Bierce, Ambrose , The Enlarged Devil’s Dictionary , 1967]
Follow-up
The list of commentators, on Holfordwatch, grows by the minute. The story rapidly spread to the USA: for example the excellent Orac has spoken eloquently.
The condemnation extends far beyond the usual bad medicine writers. Anyone who wants to speak the truth as they see it sees legal actions like these as a threat to freedom of speech. A side effect is that I learned about several new blogs.
One, with a name as good as its content, is A Somewhat Old, But Capacious Handbag, written by (you guessed it) Miss Prism, which has “Today’s irresponsible tripe courtesy of Jeni Barnett”.
Another one that was new to me is the Black Triangle blog, written by Dr Anthony Cox (a pharmacovigilance pharmacist). He writes in Conspiracy?
Anti-vaccinators have exploited the internet for years. Websites, blogs, and forums are widely used by activists to promote their wrong-headed cause. However, when the pro-science pro-vaccine lobby use similar methods a common accusation is leveled at them. Here it is posted at JABS, the UK’s leading anti-vaccine website.
“There is no way all of this could have happened so quickly without Pharma backing.”
That is really priceless. These anti-vaccination fanatics just don’t seem to be able to grasp that there is a big army of people who care so much about the public interest that they do all this for no money and a considerable cost to themselves in time and lost sleep.
Besides which, anyone who thinks that a big corporation could whip up so much support and activity in 24 hours obviously has a rather better opinion of the efficiency of big companies than I do. They’d need 25 meetings and an awayday in Majorca before anything happened. Even a university can do better than that (perhaps only 20 meetings and an awayday in Uxbridge). One does wonder why, then, universities are always being told to be more like businesses. But that is another story.
Anthony Cox also deals with another of my favourite topics in The Today Programme’s irresponsible MMR interview. I listen to the Today Programme every morning, but I do wish they could bring their medical reporting up to the same standard as their political reporting. Their policy of “equal time for the flat earth society” is not my idea of impartiality.
The Sunday Times for 8th February, by coincidence, has a major article by the excellent investigative reporter, Brian Deer.
An excellent summary by Dynutrix has already appeared on Holfordwatch.
Part 1. MMR doctor Andrew Wakefield fixed data on autism.
“However, our investigation, confirmed by evidence presented to the General Medical Council (GMC), reveals that: In most of the 12 cases, the children’s ailments as described in The Lancet were different from their hospital and GP records. Although the research paper claimed that problems came on within days of the jab, in only one case did medical records suggest this was true, and in many of the cases medical concerns had been raised before the children were vaccinated. Hospital pathologists, looking for inflammatory bowel disease, reported in the majority of cases that the gut was normal. This was then reviewed and the Lancet paper showed them as abnormal.”
Part 2. MMR: Key Dates in the Crisis.
Part 3. Most shockingly, Hidden records show MMR truth.
“A Sunday Times investigation has found that altered data was behind the decade-long scare over vaccination”
Part 4. How the MMR scare led to the return of measles.
Let’s hope that some of the original documents appear on-line soon.
The Times on 10 February carried a beautifully hard-hitting column by David Aaronovitch: The preposterous prejudice of the anti-MMR lobby
“Last week there was a bust-up in blogland.”
“Last week, justifying herself on her blog, Barnett invoked the spirit of the insurgent ignoramus. Yes, she said, she should have been ready with facts and figures on MMR.”
“That’s why I’m passionately for Goldacre, and why I find myself wondering whether we can file a class action against LBC for permitting a presenter to inflict her preposterous prejudices on her listeners, to the detriment of someone else’s kids.”
Jeni Barnett: have you lost something? Well well, first Jeni Barnett removed the critical comments from her blog. Then she removed the blog altogether. Seems she isn’t interested in debate at all.
Neither does she understand the internet. You can read the missing blog here, and the invaluable Quackometer has reproduced the whole blog post and all the missing comments. Great work, Andy.
Stephen Fry left a comment (#223) on Goldacre’s site.
“The fatuity of the Jeni Barnett woman’s manner – her blend of self-righteousness and stupidity, her simply quite staggering inability to grasp, pursue or appreciate a sequence of logical steps – all these are signature characteristics of Britain these days. The lamentable truth is that most of the population wouldn’t really understand why we get so angry at this assault on reason, logic and sense. But we have to keep hammering away at these people and their superstitious inanities. We have to. Well done you and well done all you supporting. I’ve tweeted this site to my followers. I hope they all do their best to support you. Publish and be damned. We’ll fight them and fight them and fight them in the name of empricism, reason, double blind random testing and all that matters.”
London Evening Standard on 11 February. Nick Cohen on How my friends fell for the MMR panic.
Press Gazette covered the start of the story on 6th February, here.
MSNBC TV broadcast by Keith Olbermann votes Andrew Wakefield as “today’s worst person in the world” on February 10th. Click on the video “Vaccine lie puts kids at risk”.
Write to your MP to ask him/her to sign Early Day Motion 754, MMR Vaccine and the Media
David Aaronovitch writes again in the Times, February 14th, “We need an inquiry into how Andrew Wakefield got away with it“.
An editorial in today’s issue of the New Zealand Medical Journal prints in full a letter sent to the Journal by Paul Radich, a lawyer who acts for the New Zealand Chiropractors’ Association Inc and its members. The letter alleges defamation by Andrew Gilbey’s article, and by my editorial which sets the wider context of his paper. The articles in question are here.
Here are some quotations from the Editorial by the Journal’s editor, Professor Frank A Frizelle, Department of Surgery, Christchurch Hospital, NZ. [Download the whole editorial].
In the article by Gilbey, data is provided about use of inappropriate titles by New Zealand practitioners of acupuncture, chiropractic, and osteopathy while the greater context is provided by Colquhoun. The comments made by Paul Radich are entirely consistent with the response as expressed by Professor Edzard Ernst (Editor-in-Chief of Focus on Alternative and Complementary Medicine (FACT) and Chair in Complementary Medicine at the University of Exeter) in his humorous article In praise of the data-free discussion. Towards a new paradigm when he states “data can be frightfully intimidating and non-egalitarian”. . . . The Journal has a responsibility to deal with all issues and not to steer clear of those issues that are difficult or contentious or carry legal threats. Let the debate continue in the evidence-based tone set by Colquhoun and others. I encourage, as we have done previously, the chiropractors and others to join in, let’s hear your evidence not your legal muscle.
My article said nothing that has not been said many times before. I regard it as fair scientific comment, and I believe that expression of those opinions is in the public interest. The reaction of the Journal is thoroughly admirable.
The outcome of legal bullying can be very counterproductive, as the UK’s Society of Homeopaths found recently to their cost.
The lawyers’ letter demanded a response by 11th August, but on the advice of a lawyer I have decided to ignore for now this rather crude attempt to stifle discussion.
For further developments, watch this space.
The story was picked up within hours. It seems that a storm may be brewing round the world for New Zealand chiropractors. Here are some of them.
Silence Dissent on Ben Goldacre’s badscience.net
HolfordWatch Professor Frizelle’s Instant Classic: Let’s hear your evidence not your legal muscle
The first New Zealand site.
More Legal Chill -from spine-cracking chiropractors on jdc325’s blog
And A beginners guide to chiropractic, on the same site.
Andy Lewis’s Quackometer takes a sharp look too, in They are bone doctors aren’t they?
Support from a NZ blog, Evidence-based thought NZ Chiropractors vs NZ Medical Journal
And another New Zealand blog, Chiropractors attack NZ Medical Journal on SillyBeliefs.com
and another: Evidence should trump “legal muscle”, on “Open Parachute. The mind doesn’t work if it’s closed”
New Zealand Doctor magazine. “Kiwi-practors legal wrangle” in the Nature world news blog, The Great Beyond.
“Self-destructing chiropractors” on Jonathan Hearsay’s blog is particularly interesting because he is a (sceptical) osteopath. He says “Chiropractors are seemingly hell-bent on destroying themselves as a therapy”.
There are now so many allusions on the web to the behaviour of the New Zealand Chiropractors’ Association Inc that I’ll give up trying to list all of them. Their action seems to have done much to damage their own reputation.
Shortly after this came the news that the British Chiropractic Association is to sue one of our best science communicators, Simon Singh, because he had the temerity to inspect the evidence and give his opinion about it in the Guardian. His original article has gone (for now) from the Guardian web site, but as always happens with attempts at bullying and intimidation, it is more easily available than ever. For example here, and here.
Chiropractic in the UK is analysed by Andy Lewis on Quackometer.