
Now, here’s a question for you. Say you had a care home with forty people, staff and residents, and you want to see if anybody in the group is exhibiting Covid-19 symptoms or has had and survived Covid-19. Assume we have accurate tests for both scenarios: the ‘do you have the virus?’ viral test, based on a sample from your respiratory system, e.g. a nose or throat swab; and the ‘have you had the virus?’ antibody test, based on analysis of your blood. What’s the most efficient way of applying the tests?

Let’s take one of these tests – the post-Covid-19 antibody test. One solution is to line everyone up, take a blood sample from each person, send the forty samples away for analysis and await the results (which can take anywhere from one day to one week, maybe longer). This individual testing approach would require forty tests and cost forty times the cost per test (quoted at £59 here). I don’t know, nor can I find out, how this cost breaks down. There will be a percentage allocated to test kit manufacture, distribution, sales, application, analysis and profit, and I would hazard a guess that a significant part of the cost lies in the analysis.

Is there a cheaper solution? What if we could combine a portion of the blood samples of all forty people and apply one analysis to the resulting pool to see if the SARS-CoV-2 antibody is present in the pool? If the antibody is not present (a negative result), that’s forty people cleared immediately with just one test. If the test proves positive, we could split the group into two groups of twenty, group A and group B, combine portions of the blood samples of one of the sub-groups, group A say, and repeat the test on that sub-group. If group A tests clear (negative), we know the post-Covid-19 person(s) must be in group B, and so we continue with this sub-group, splitting it into two smaller sub-groups, B1 and B2, containing ten people each. If group A tests positive, we know that at least one post-Covid-19 person is in that group. Thus, we would split group A into two further sub-groups of ten people per group – groups A1 and A2 – and repeat the process until, eventually, after at most six further tests, we identify the post-Covid-19 person(s).
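The halving procedure above can be sketched in code. This is a toy simulation, not a lab protocol: a ‘sample’ is just a boolean (antibody present or not), and the function names and the choice of person 23 as the positive case are my own illustration.

```python
def pool_is_positive(samples):
    """One pooled analysis: positive if any sample in the pool is positive."""
    return any(samples)

def find_positive(people, samples, tests_used=0):
    """Repeatedly halve a pool known to be positive until one person remains.

    Returns (identified_person, number_of_pooled_tests_performed).
    Assumes exactly one positive sample among `people`, for simplicity,
    so a negative half lets us infer the positive is in the other half.
    """
    if len(people) == 1:
        return people[0], tests_used
    mid = len(people) // 2
    left, right = people[:mid], people[mid:]
    tests_used += 1  # one pooled analysis of the left half
    if pool_is_positive([samples[p] for p in left]):
        return find_positive(left, samples, tests_used)
    return find_positive(right, samples, tests_used)

# Forty residents, one of whom (number 23) has the antibody.
people = list(range(40))
samples = {p: (p == 23) for p in people}

# Test the whole pool first: one test clears everyone if it's negative.
total_tests = 1
if pool_is_positive(samples.values()):
    who, halving_tests = find_positive(people, samples)
    total_tests += halving_tests
    print(who, total_tests)  # → 23 7
```

Seven pooled tests instead of forty individual ones – the saving the pooling idea is after.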

This isn’t new. The process I’ve outlined is called binary chopping (repeated division by two) and is a common technique in computer search algorithms, where it is better known as binary search. In everyday life, you may have used the technique in the children’s game of Guess the Number:

‘Guess the number I have selected between 0 and 100.’ (Assume it is 31.)
‘Is it between 0 and 50?’ (Yes.)
‘Is it between 0 and 25?’ (No.)
‘Is it between 26 and 37?’ (Yes.)
And so on.

You’ll identify the number to be 31 in seven questions.
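The guessing strategy above is exactly a binary search over the range 0 to 100. A minimal sketch (the function name and structure are my own):

```python
def guess_the_number(secret, low=0, high=100):
    """Return (guess, questions_asked) for a halving strategy."""
    questions = 0
    while low < high:
        mid = (low + high) // 2
        questions += 1  # "Is it between low and mid?"
        if secret <= mid:
            high = mid   # yes: keep the lower half
        else:
            low = mid + 1  # no: keep the upper half
    return low, questions

print(guess_the_number(31))  # → (31, 7)
```

Since each question halves the range, no number between 0 and 100 ever needs more than seven questions.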

Another example of binary chopping occurs when searching for a word in a dictionary: first find the first-letter section; then the second-letter section within it; then… You may not realise it, but if you are a regular user of a printed dictionary, you have binary chopped your way through that dictionary many times!

But it gets more complex when it comes to reducing the number of tests for Covid-19 antibody testing. If a significant number of people have been infected and survived, you may not be able to eliminate half a group as clear, and you’ll finish up with what eventually amounts to tortuous individual testing. If the infection rate is low, however, we need to make sure that the blood pooling technique works, i.e. that the presence of all the uninfected blood samples doesn’t dilute the pool and mask the detection of the antibody in the post-Covid-19 blood sample. These, and other limitations, can be pursued in this article from researchers at the University of Rwanda in Kigali. It was this article that prompted my interest. Rwanda, like other African countries, is drawn to pooled testing techniques for testing large groups from densely populated areas, such as a city, where medical test facilities may be scarce but where the rate of infection is still relatively low. It’s a neat idea.

Back to binary chopping: one last thought. Consider this problem. I give you twelve seemingly identical coins and tell you that one is counterfeit and, as a result, either lighter or heavier than a genuine coin (you don’t know which). I also give you a weighing machine and ask you to identify the counterfeit coin. How many weighings will it take? Binary chopping says you can do it in a maximum of four if you know the weight of a genuine coin: 12 → 6 → 3 → 2 → 1. But it can be done in a maximum of three weighings without knowing the weight of a genuine coin, using a balance rather than a weighing machine! How?

The detailed answer is beyond the scope of this post, but the problem and its solution are a classic mathematical conundrum known as The 12 Coins Problem.

I have a lot of history with this problem. I first encountered it when I was 15 or 16 and still at school. Mathematics was starting to click with me and I spent hours thinking about the solution and doodling away on the backs of many envelopes. Eventually, in a note inserted inside a school library book on mathematics, I found reference to a 3-weighings solution based on the ternary (base 3) number system. This solution appeared in a paper published in 1946:

F. J. Dyson and R. C. Lyness, ‘The Problem of the Pennies’, The Mathematical Gazette, Vol. 30, p. 231, October 1946.

There are other ways based on logical reasoning to solve this problem. If you’re interested, take a look at Wikipedia’s article.

My point is this. The researchers at the University of Rwanda are looking at using binary chopping on population groups containing many thousands of people, including those added to the group via contact tracing. When the group size starts to head up to millions rather than thousands, as could occur in India for example, it might be that a search algorithm based on ‘ternary chopping’ (repeated division by three) will be just as efficient as, but less costly than, binary chopping.
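As a rough, back-of-envelope sketch of that closing thought (my own figures, not a worked protocol): narrowing n candidates down to one needs about ceil(log₂ n) halving steps, but only ceil(log₃ n) ternary steps, and the gap widens as groups grow.

```python
import math

def splits_needed(n, branches):
    """Tests to narrow n candidates to 1, splitting into `branches` parts."""
    return math.ceil(math.log(n, branches)) if n > 1 else 0

for n in (40, 1_000, 1_000_000):
    print(f"{n:>9} people: binary {splits_needed(n, 2):>2} tests, "
          f"ternary {splits_needed(n, 3):>2} tests")
# e.g. a million candidates: 20 halving tests vs 13 ternary tests
```

Of course, each ternary test must genuinely distinguish three outcomes (as the balance does in the 12 Coins Problem) for this saving to be realised.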