Back to my old interest in medicine that doesn’t work. And the fact that there is such a huge market for medicine that doesn’t work is, in large part, a result of regulators who fail to regulate.

One example of a regulator that fails to protect the public from health fraud is the Complementary and Natural Healthcare Council, the CNHC (known colloquially as OfQuack). I have history with them, having offered my services to them and, astonishingly, been accepted. There I found a lot of well-meaning but poorly-educated people: they had no idea what constituted evidence, and very little interest in finding out. Of course, I was asked to resign, and when I declined to do so, I was fired.

AI reports on health claims

That’s one reason that I was happy to play a small part in a recent study of the health claims made by people who are registered with the CNHC [link to the paper; it’s open access]. Registration is voluntary: anyone can practise as a naturopath or reflexologist with no qualifications and no check on their activities. But those who do choose to register with the CNHC appear on its website, and that confers on them a (spurious) sort of respectability.

In the study, artificial intelligence was asked to identify false or misleading claims made on the web sites of practitioners who were registered with the CNHC. Only the practitioners of the most obviously pseudoscientific subjects were included (Alexander Technique, Aromatherapy, Bowen Therapy, Colon Hydrotherapy, Craniosacral Therapy, Healing, Kinesiology, Microsystems Acupuncture, Naturopathy, Reflexology, Reiki and Shiatsu).

The AI assessed text from 11,771 web pages, scraped from 725 websites, and identified false or misleading claims on 704 (97%) of them. The complete list of 23,307 claims identified by the AI, with the reasons it gave, can be downloaded at https://osf.io/hnuqs (better not try printing it out: it runs to 2,370 pages).
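The paper’s own code isn’t reproduced here, but the general approach is easy to sketch. The Python fragment below shows, in outline, how pages might be scraped and passed to a language model for screening. It is a minimal sketch only: the model name, the prompt wording and the use of an OpenAI-style chat API are my illustrative assumptions, not the authors’ actual method.

```python
# Illustrative sketch of an AI claim-screening pipeline (not the study's code).
# Assumes an OpenAI-style chat API; model name and prompt are placeholders.
import requests
from bs4 import BeautifulSoup
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "You are screening text from a complementary-therapy website. "
    "List every health claim that is false or misleading, with a "
    "one-sentence reason for each. If there are none, reply 'NONE'."
)

def page_text(url: str) -> str:
    """Fetch a page and strip it down to its visible text."""
    html = requests.get(url, timeout=30).text
    return BeautifulSoup(html, "html.parser").get_text(" ", strip=True)

def flag_claims(url: str) -> str:
    """Ask the model to identify false or misleading claims on one page."""
    text = page_text(url)[:12000]  # crude truncation to stay within context
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative choice; the study's model may differ
        messages=[
            {"role": "system", "content": PROMPT},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

# Example (hypothetical URL): print(flag_claims("https://example.com/reflexology"))
```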

A few of the web pages were also read by four humans, who identified numbers of false or misleading claims comparable to those found by the AI.

What does this mean for AI?

It seems that the AI did a good job of identifying false and misleading claims. That surprised me, because AI has to learn from what it finds on the web, and the information on the web about reflexology, craniosacral therapy, naturopathy and the rest is almost all nonsense: just sales talk. The fact that the AI wasn’t fooled by this shows that the material used to train it came, almost entirely, from reliable sources. The choice of training material is a human choice. So the finding that AI can judge that reflexology is nonsense is really a result of the human intelligence that decided which sources were reliable and which were not.

I’ve tried asking AI several real science questions, and I’ve been very disappointed by the answers. The AI showed no ability to judge between different, but legitimate, points of view. For example, there is a lot of good material on the web about statistical inference (but also quite a lot of mediocre stuff).

When I asked about the interpretation of p values, the first answers were quite accurate but entirely frequentist. When I pointed out that likelihood and Bayesian approaches are often critical of the frequentist interpretation, the AI apologised and tried again (the mock humility of the apology was particularly horrid). After a few more prompts, the AI came to a view of the problem which was quite close to mine. That doesn’t mean that my view is right. It means that, with helpful prompts, AI can be led to almost any conclusion you wish.
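To illustrate the sort of point at issue (this example is mine, not the AI’s), here is a minimal simulation of why the naive frequentist reading of p < 0.05 can mislead: if real effects are rare, far more than 5% of “significant” results are false positives. The prior probability, sample size and effect size below are all assumptions chosen purely for illustration.

```python
# Small simulation (illustrative assumptions throughout): among experiments
# that reach p < 0.05, how many are false positives? The answer depends on
# the prior probability of a real effect and on power, and can be far
# larger than 5%.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_experiments = 20_000
prior_real = 0.1      # assume only 10% of tested hypotheses are really true
n_per_group = 16
effect_size = 1.0     # standardised difference when the effect is real

false_pos = true_pos = 0
for _ in range(n_experiments):
    real = rng.random() < prior_real
    a = rng.normal(0.0, 1.0, n_per_group)
    b = rng.normal(effect_size if real else 0.0, 1.0, n_per_group)
    _, p = stats.ttest_ind(a, b)   # two-sample t-test
    if p < 0.05:
        if real:
            true_pos += 1
        else:
            false_pos += 1

print(f"Fraction of 'significant' results that are false positives: "
      f"{false_pos / (false_pos + true_pos):.1%}")
# With these assumed numbers, roughly a third of the p < 0.05 results
# are false positives: well above the 5% that the naive reading suggests.
```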

Once again, the biggest influence is the human intelligence that guides the AI. The computer algorithm showed NO understanding of the problems. It spat out a version of the material that was used to train it. It was at the level of a mediocre undergraduate. The only way to detect that it was AI was that it answered in better English, with fewer spelling mistakes than most undergraduates can manage. That makes its use doubly dangerous.

Implications for regulation of alternative medicine

It’s been obvious for a long time that the regulators of alternative medicine fail totally to protect the public from false and misleading claims about the products being sold. This paper shows that abundantly. The number of false or misleading claims found on the web is inevitably an underestimate of the real number: the AI could count only claims made in public, on the web. It couldn’t count the false claims made in private conversations, nor those made in printed advertisements.

It’s clear from the huge number of false claims made by CNHC registrants that it would take a lot of time and effort for the CNHC to fulfil its duty to protect the public from them. The benefit of using AI is not that it’s any better than humans at detecting false and misleading claims. The benefit lies entirely in the speed with which it can do the job. That’s why the senior author of the paper, Simon Perry, offered the use of his methods to the CNHC. So far they have shown little interest in accepting that help. All that I can infer from this is that the CNHC has little interest in the false claims made by its registrants, as long as they keep paying the registration fees. It hasn’t changed much since I was part of it, 14 years ago.

Conclusions

AI looks quite promising for some problems where the task is limited and the rules are clear: for example, learning to spot cancerous tissue in radiographs, after training on enough radiographs in which the human diagnosis turned out to be accurate. AI also did a good job of detecting false or misleading claims by practitioners of various sorts of alternative medicine, though this reflects the (human) choice of reliable sources of information. The speed with which this can be done should be useful to regulators who are interested in stopping the spread of misinformation. Sadly, most regulators show very little interest in doing that part of their job.

AI seems to have done a good job in predicting protein folding. That is a very restricted task, and there is a lot of (mostly) reliable information to train on. There is a lot of hype about its use for drug discovery, but the training data for that job are often crude and inaccurate.

But general AI has a long way to go. It has no understanding of what it says. At the moment it’s mainly a tool for crooks, fraudsters and charlatans whose intent is to deceive.
