For months, opinion polls on the popularity of presidential candidates have attracted unrelenting and near-unanimous condemnation, thrusting local survey firms into the spotlight.
In what sounds like a paradox, both sides of the political divide have poured vitriol on recent opinion polls over their “inaccuracies”, even as those same formations need reliable, evidence-based data to inform their campaigns.
At the heart of the sustained attacks on the pollsters’ findings are what are perceived to be inexplicable discrepancies in survey results, even among polls conducted during the same period using near-identical methodologies. One might ask: why would the difference in one candidate’s popularity be so large across firms when the margins of error are the same? And how can some pollsters consistently rate one candidate ahead while others show the same candidate trailing, poll after poll?
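To see why identical margins of error make large discrepancies suspicious, consider the standard margin-of-error formula for a sample proportion. The sketch below is illustrative only – the sample sizes are hypothetical, not drawn from any named firm’s methodology – and assumes a 95 per cent confidence level:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a sample proportion p with sample size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Two hypothetical firms polling the same candidate at roughly 50% support:
moe_1000 = margin_of_error(0.5, 1000)  # sample of 1,000 respondents
moe_2000 = margin_of_error(0.5, 2000)  # sample of 2,000 respondents
print(f"n=1000: ±{moe_1000:.1%}")  # about ±3.1 percentage points
print(f"n=2000: ±{moe_2000:.1%}")  # about ±2.2 percentage points
```

With margins of error of roughly three points, two sound polls of the same population in the same week should overlap; gaps of ten or more points signal a methodological problem, not sampling noise.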
Yet we must start from one known fact: if a quantitative study is conducted scientifically, even by different firms, it should yield near-identical findings that can be generalised to the whole population. In essence, sample surveys – like the renowned Kenya Demographic and Health Survey – should yield findings that can be generalised to the entire population, much as a census would.
It therefore follows that the unexplained disparities in the political opinion polls deserve to be dismissed for failing to remain consistent even among pollsters using similar study approaches. It can also be deduced that recent poll findings cannot be used to predict voting patterns in the August 9 General Election, given their spurious and incoherent conclusions.
Although many have attributed these questionable findings to the ownership of the polling firms, researchers must rise above excuses of proprietorial interference and hold the firms accountable for – deliberately or obliviously – employing amateurish and incongruent survey methodologies that fail to capture the intricate political phenomena and voting patterns of Kenya’s 42 tribes.
Based on science, and on an analysis of comments made by some of the country’s prominent pollsters, one cannot help but aver that serious audits must start with the key researchers who draw up the survey methodologies. The gaffes they make in explaining those methodologies whenever they face Kenyans on national television have been mind-boggling.
From the TV interviews, one does not struggle to see that the pollsters scramble towards wrong conclusions by employing the wrong research designs. For instance, one pollster recently showed a shocking lack of understanding of the difference between a cross-sectional and a longitudinal research design. By explaining that the firm often returned to collect data from the same sample, using a database built up over a long period, the pollster fell into the trap of collecting echoed data akin to a cohort study – yet the findings were always reported as though they emerged from a trend study. This fallacious example is just one glaring reason we cannot parade some survey findings as a true reflection of the voting intentions of the 22 million registered voters.
On the face of it, some pollsters appear to obliviously conduct a hotchpotch of cross-sectional and longitudinal studies without being aware of the consequences for data processing. Failure to anchor studies clearly in the correct design sets the pollsters up for blunders in data collection that lead them to wrong results. Worse, they miss the basic opportunity to apply probability sampling accurately, which would insulate them against ungeneralisable findings.
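What probability sampling means in practice can be sketched in a few lines. The miniature register below is entirely hypothetical – in a real poll the frame would be the full IEBC roll – but the principle is the same: every voter has an equal, known chance of selection.

```python
import random

# Hypothetical miniature register; in practice this would be the
# full roll of registered voters, not a firm's own database.
register = [f"voter_{i:05d}" for i in range(20_000)]

rng = random.Random(42)  # fixed seed so the illustrative draw is reproducible

# Simple random sampling without replacement: each registered voter has
# the same known selection probability (here 1,000 / 20,000 = 5%).
sample = rng.sample(register, k=1_000)

print(len(sample))       # 1000 respondents drawn
print(len(set(sample)))  # 1000 - no voter selected twice
```

Because selection probabilities are known, the resulting estimates can be generalised to the whole register with a calculable margin of error – precisely the property lost when respondents are drawn from an ad hoc in-house database.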
Although some pollsters give the impression that they utilise a quantitative research approach, some end up generating qualitative findings. This exposes them to obvious questions, since the many closed-ended questions in their data collection tools do not generate any textual data.
One other scandalous admission by the pollsters, which certainly drives them to biased results, is their propensity to load political opinion questions onto other paid-for polls, some of which are meant to collect data for totally different purposes, including commercial products. In doing so, the firms box themselves into the muddle of using the right sample for the wrong study. A sample designed to collect political views will not always be the same as one meant to collect views on tastes and preferences in, say, types of soap.
Different surveys target respondents of specific demographics – gender, age, region and education level – who do not necessarily have an interest in politics and may not even be registered as voters. Chaperoning respondents recruited for a study on soap into answering questions on politics, when their social interests may be entirely different, is likely to lead to the wrong conclusions – some of which many parties could be sneering at today.
Related to this is the issue of pollsters utilising the wrong sampling frame. Whereas the ideal sampling frame should be the 22 million voters on the register of the Independent Electoral and Boundaries Commission, some pollsters have created their own computer-based databases, which clearly do not provide a true representation of the register. In so doing, they set themselves up to utilise the wrong study population and to extract equally wrong samples that yield unreliable, unverifiable and invalid findings.
It is also noteworthy that one of the most fallacious methodological flaws of the pollsters is their near-assumption that Kenya’s electoral and regional areas comprise homogeneous populations. A pollster may, for instance, pick six respondents from Kibra constituency using randomised selection criteria and generate data on the assumption that the constituency is thereby well represented.
Yet, given the cosmopolitan nature of Kibra’s informal settlements – including Kisumu Ndogo, Makina, Kianda, Toi and Gatwikira – the six respondents may all belong to one tribe concentrated in one or more of the kijijis, thereby excluding other communities that could be leaning towards a different political formation. The import is that the findings will be utterly misleading, given Kenya’s tribal voting patterns.
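The Kibra point can be made concrete with proportionate stratified sampling: instead of drawing respondents from the constituency as one undifferentiated pool, the sample is allocated across the kijijis in proportion to their size, so no settlement is left out. The village names are those cited above; the population figures are purely illustrative, not actual counts.

```python
# Illustrative (not actual) population figures for Kibra's kijijis
strata = {
    "Kisumu Ndogo": 8_000,
    "Makina": 12_000,
    "Kianda": 10_000,
    "Toi": 6_000,
    "Gatwikira": 14_000,
}

def proportionate_allocation(strata: dict[str, int], n: int) -> dict[str, int]:
    """Allocate a total sample of size n across strata in proportion
    to each stratum's population share."""
    total = sum(strata.values())
    return {name: round(n * pop / total) for name, pop in strata.items()}

alloc = proportionate_allocation(strata, 600)
print(alloc)  # every kijiji receives a share of the 600 interviews
```

With these figures the rounded allocations sum exactly to 600; a production implementation would also redistribute any rounding residue. The design guarantee is what matters: each settlement, and hence each community living in it, is represented in proportion to its size rather than by chance.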
One plausible conclusion, therefore, is that the best poll findings will be those from the census survey expected at the General Election booths on Tuesday. May the best formation win.
The writer is Turkana governor and director general of the William Ruto presidential campaign