
… not just its size, but who is in the denominator? If they don’t tell you, make sure you ask. It matters.

It matters when you wonder how Zimbabwe has seemingly, thus far, escaped the worst of the COVID-19 pandemic when its neighbour to the south, South Africa, is ranked sixth globally based on total confirmed cases. These numbered around 625,000 as at 31st August (with 14,028 deaths recorded, representing 2.2% of those infected). Zimbabwe had 6,388 confirmed cases and 195 recorded deaths (3.0% of those infected) and is ranked 104th.
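For anyone who wants to see where those 2.2% and 3.0% figures come from, here is a minimal sketch in Python (an illustration only, using the case and death counts quoted above). It computes a crude case fatality ratio: recorded deaths divided by confirmed cases, which is not the same thing as the proportion of all infected people who die.

# Crude case fatality ratio: recorded deaths divided by confirmed cases.
# Figures as quoted above (as at 31st August); confirmed cases are the
# denominator here, not actual infections.
confirmed = {"South Africa": 625_000, "Zimbabwe": 6_388}
deaths = {"South Africa": 14_028, "Zimbabwe": 195}

for country in confirmed:
    cfr = deaths[country] / confirmed[country]
    print(f"{country}: case fatality ratio = {cfr:.2%}")

# Prints about 2.24% and 3.05% respectively, i.e. the ~2.2% and ~3.0% above.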

It’s obvious that the denominators are different, in terms of population size. In 2018 South Africa’s was estimated at 57.78 million and Zimbabwe’s at 14.44 million.

So, does that mean 10.82 people in 1,000 (1.08%) have contracted COVID-19 in South Africa and just 0.44 per 1,000 (0.044%) have in Zimbabwe? No, not unless everyone, or a large random sample of people, has been tested. Population size is clearly the wrong denominator, but I have heard these statistics being discussed and a plethora of hypotheses being suggested for Zimbabwe’s seeming resilience.
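Again purely as an illustration, here is a sketch of that per-capita arithmetic, assuming the 2018 population estimates quoted above and using confirmed cases (not actual infections) as the numerator:

# Confirmed cases per 1,000 population, using the 2018 population estimates.
# The whole population is the denominator here, which is only meaningful
# if everyone, or a large random sample, has been tested.
population = {"South Africa": 57_780_000, "Zimbabwe": 14_440_000}
confirmed = {"South Africa": 625_000, "Zimbabwe": 6_388}

for country in population:
    per_thousand = 1_000 * confirmed[country] / population[country]
    print(f"{country}: {per_thousand:.2f} confirmed cases per 1,000 people "
          f"({per_thousand / 10:.3f}% of the population)")

# About 10.82 per 1,000 (1.08%) for South Africa and 0.44 per 1,000 (0.044%)
# for Zimbabwe.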

What about the number of PCR tests done as a denominator? Using the number of tests for each country, the percentage reported positive was 12.8% for South Africa and 7.2% for Zimbabwe (31st August).[1] That would mean we would expect about 7.4 million people in South Africa to have contracted the virus and 1.04 million in Zimbabwe. Based on reported deaths from COVID-19, the respective death rates (the percentage dying of those who got infected) would be just 0.19% and 0.018%, not the 2.2% and 3.0% calculated above.
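To make the underlying assumption explicit, here is a sketch of that extrapolation, using the positivity rates and 2018 population estimates quoted above. It treats the people tested as if they were a random sample of the whole population, which, as the next paragraph argues, they are not.

# Naive extrapolation: apply the test-positivity rate to the whole population,
# as if those tested were a random sample of it (an unsafe assumption).
population = {"South Africa": 57_780_000, "Zimbabwe": 14_440_000}
positivity = {"South Africa": 0.128, "Zimbabwe": 0.072}  # share of tests positive
deaths = {"South Africa": 14_028, "Zimbabwe": 195}

for country in population:
    implied_infected = positivity[country] * population[country]
    implied_death_rate = deaths[country] / implied_infected
    print(f"{country}: ~{implied_infected / 1e6:.2f} million implied infections, "
          f"implied death rate {implied_death_rate:.3%}")

# Roughly 7.40 million and 1.04 million implied infections, with implied death
# rates of about 0.19% and 0.02%: figures that stand or fall with the
# assumption that those tested represent the whole population.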

But wait. What if PCR tests are selectively being carried out on those with apparent symptoms and on high-risk groups, such as health workers, travellers and those who have been in contact with someone who has tested positive? This is mainly what is being done in many countries. We cannot then extrapolate the data to the whole country, not unless we do a large-scale, random survey of the population. And there are different technologies for testing, too, not all equally reliable. Do we know which country is using which, or whether each country is even using just one type of technology?

There’s another factor to consider. Are post-mortems being done on everyone who has died, including people dying in remote rural areas, to establish and record the cause of death? If not, we are likely to be missing some numbers – perhaps lots of numbers. So, do we then know the death rate? Or the infection rate? Can we compare countries if their testing policies, the numbers tested (as a proportion of the population) and the representativeness of those tested are not the same, or if we don’t even know them?

There are, of course, all sorts of different analyses of data being conducted with the above in mind, and all available on the web (with carefully worded caveats). The latest ‘worldometers’ data[2], which compares infection and death rates per capita in different countries, indicates that Peru and Chile outrank the USA and Brazil in terms of numbers of cases, and of deaths, per million people, with the former two countries recording 19,584 and 21,416 cases per million, respectively (and 871 and 587 deaths per million). But then Qatar is listed with 42,303 cases per million, based on data that this country presumably supplied, and Bahrain with 30,150! Another respected site indicates that, as of the end of August, the three countries with the highest death rates from Coronavirus are, in order, Peru, Belgium and the UK (894, 866 and 624 deaths per million, respectively).

These figures highlight the difficulties in comparing and ranking countries and in predicting the progress of the pandemic (and that’s without considering other influencing factors, such as the Coronavirus’s propensity to mutate).

So, what do we do? Keep trying to make sense of the data, if that is our job. And if it’s not, keep trying to understand it – especially the denominators – and refrain from offering our own opinions and solutions if we are not qualified to do so. It’s hard enough for the experts to do. And, where it is within our capacity, keep pushing for the collection and reporting of high-quality data, and question it, and question it again and again.

Look out for our next blog, in about a month, on non-random/convenience sampling and the potential bias introduced by on-line surveys.


[1] https://ourworldindata.org/coronavirus-testing

[2] https://www.worldometers.info/coronavirus/#countries