On 14 November the Asia Foundation released its 2012 ‘Survey of the Afghan People’, based on data collected by ACSOR (Afghan Center for Socio-Economic Research), a Kabul-based research organisation that has done the data collection for almost all large publicly released opinion polls. It is the eighth survey of its kind: the first, released in 2004, covered Afghanistan’s eight regions; the second, released in 2006, covered all 34 provinces; and since then the exercise has been repeated each year. The data provided, and the trends that can be read from them, would be highly interesting if they weren’t so controversial. AAN’s Martine van Bijlert takes a closer look.
The divergence in the media coverage of the Asia Foundation’s findings – other than the simple copying of the provided press release – reflects the controversy that surrounds these opinion polls. CNN (‘Afghans positive about the future’) noted that in Afghanistan optimism is currently at its highest since the Asia Foundation’s survey began in 2004. The Guardian, on the other hand, chose the glass-is-half-empty angle, under the headline ‘Third of Afghans would leave country if they could, says poll’, reporting that:
Nearly half of Afghans think their country is not moving in the right direction, fear for their family’s safety, or are frightened to run into a member of their police or army, according to a nationwide poll. Concerns about corruption are at the highest level in more than half a decade, and more than a third of people said they would leave Afghanistan if they could. Sizeable numbers also felt job opportunities were down and that electricity supplies had got worse. Although the proportion of people who think Afghanistan is on the right track rose from last year, the Asia Foundation, which carried out the survey, admitted it might be skewed towards positive opinions because they could not access Afghans in the most dangerous parts of the country.
Local news agency Pajhwok quoted a number of Afghan analysts who didn’t agree with the findings and suspected that they were politically motivated. Here is what they said:
Political analyst Hassan Haqyar: The survey does not indicate the current ground realities in the country, it is aimed to show the world that foreigners have been able to install an Afghan government acceptable to their nations. The Asia Foundation, headquartered in the US, conducts its surveys in line with American whims and wishes.
Political analyst Wahid Muzdha: A survey is a complex process in which a little bit of indiscretion can alter the results. In the current situation it is difficult to conduct such a survey in an accurate fashion in rural Afghanistan.
Former Nangarhar MP Babrak Shinwari: There is a political aspect to the survey, which comes at a time when reconciliation efforts are being accelerated and a new security pact with the US is expected to be inked in the near future.
Two trustees of the Asia Foundation, Theodore Eliot and Karl Inderfurth, disagreed, however. On the Foreign Policy website, under the heading ‘Punditry Aside, how do Afghans feel about Afghanistan?’, they described the surveys as a ‘valid, long-term barometer of the Afghan people’s views over time’.
Opinion polls in Afghanistan have been controversial for years – against the backdrop of a fierce battle over how to frame the intervention in Afghanistan and its supposed progress. The constant spinning by politicians, press officers, policymakers, diplomats and the military has led to a deep suspicion among analysts of anything that sounds more optimistic than one would expect. And that is exactly what the poll findings tend to be: surprisingly upbeat, even as the mood in the country seems to plummet, and also consistently more positive than the picture that emerges from any other kind of research – whether quantitative socio-economic indicators, the analysis of political developments, systematic qualitative research or anecdotal reporting. In particular during 2009 and 2010, when the findings of several different opinion polls were used to push a narrative of increasing levels of optimism, in the face of high levels of political tension and public disquiet, many serious analysts simply stopped paying attention.(1) But while most analysts on the ground have decided to ‘dismiss and ignore’ the surveys, the policymaking and press-shaping apparatus continues to ‘cite and celebrate’ the poll findings, in particular those that fit whatever point it is trying to get across.
The problem of coverage and sample representativeness
The fundamental problems facing countrywide polls in Afghanistan, in terms of coverage and representativeness, are obvious and include insecurity, lack of access and a potential general suspicion towards strangers. Analysts on the ground, for this reason, find it difficult to believe the polls’ claims of countrywide coverage and of an interview quality that ensures that respondents both understood the questions and answered them freely. This relates directly to the validity of the survey sample: a sizeable, random sample allows one to extrapolate the findings to the full population only if the sample’s composition sufficiently matches that of the population. As the data provided in the polling reports show, there are in fact serious issues with the sample composition.
The Asia Foundation, for the first time this year, also alluded to this in its executive summary (as picked up by among others the Guardian), but did so in a markedly understated way:
Just over half of respondents (52%) say Afghanistan is moving in the right direction, up from 46% in 2011. It is important to note, however, that as in other years some of the originally identified survey sampling points had to be replaced in 2012 for security reasons, thus respondents living in highly insecure areas (who might be more pessimistic about the overall direction of the country) are likely to be underrepresented. [emphasis added] (2)
Eliot and Inderfurth in their opinion piece are clearer about the scope of the replacements, but minimise the likely impact in rather broad brush strokes that are not supported by the report’s data, saying that: ‘The fact that 16% of polling sites were not accessible for security reasons — potentially creating a bias — is taken into account and does not overturn the major findings.’
Polls in Afghanistan follow a multi-stage sampling method: first you determine how the sample should be distributed among the provinces and between urban/rural areas, based on CSO population statistics. Then you randomly select the sampling points (villages or urban areas) from available lists; at each sampling point usually 6 to 8 interviews are done. Then you select the interviewees through a ‘random walk method’ and a so-called ‘Kish grid’, which is a tool through which you randomly select the specific household member that is to be interviewed.
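The multi-stage logic described above can be sketched in code (a minimal illustration: all names and numbers are invented, and the Kish grid is simplified to a uniform random draw rather than the actual grid-based assignment the pollsters use):

```python
import random

def select_sampling_points(villages, n_points, seed=0):
    """Stage 2: randomly draw n_points sampling points (villages) from a list."""
    rng = random.Random(seed)
    return rng.sample(villages, n_points)

def kish_select(household_members, rng):
    """Stage 3 (simplified): a Kish grid assigns each listed eligible household
    member a selection probability; here reduced to a uniform random draw."""
    return rng.choice(household_members)

# Invented inputs for illustration
villages = [f"village_{i:03d}" for i in range(100)]
points = select_sampling_points(villages, n_points=10)

rng = random.Random(42)
household = ["head", "spouse", "eldest_son", "daughter_in_law"]
respondent = kish_select(household, rng)
```

The randomness at each stage is what makes the extrapolation to the full population defensible – and also what is undermined every time a drawn sampling point is replaced by a more accessible one.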
In 2012, 323 out of the 1055 sampling points had to be replaced because they were located in insecure areas, were inaccessible for other reasons (logistics, weather), or simply could not be found. Details on which locations were replaced and for what reasons can be found on pages 193-208 of the report.
323 sampling points represent 31% of all sampling points. In other words, almost one in three of all people interviewed in this poll lived in areas that were either more secure or more easily accessible than in the original random sample. It is very difficult to argue credibly that this would not have affected the findings of the poll. (Apart from the two trustees, the Asia Foundation incidentally does not appear to be suggesting that the findings were not affected, but it has also made no effort either to hazard a guess at the ways in which this may have impacted the survey, or to seriously temper the enthusiasm with which its findings tend to be bandied around.)
This was the first time something like this was explicitly mentioned in the report’s summary, but it was not the first year that it was an issue. It has in fact been a significant feature since 2009, which also happens to be the year that opinion polls started registering increasing levels of optimism. Here is an overview of the number and proportion of, respectively, all replacements and the replacements for security reasons over the last four years, showing that it has been a quite consistent problem:
Total sampling points replacements
– TAF 2009: total 208 out of 882 (24%) / 102 (12%) for security reasons
– TAF 2010: total 213 out of 885 (24%) / 138 (16%) for security reasons
– TAF 2011: total 166 out of 876 (19%) representing 1505 interviews (24%) / 95 (11%) for security reasons
– TAF 2012: total 323 out of 1055 (31%) / 168 (16%) for security reasons
Sampling points replacements for security reasons
– TAF 2009: 102 out of 882 (12%)
– TAF 2010: 138 out of 885 (16%)
– TAF 2011: 95 out of 876 (11%)
– TAF 2012: 168 out of 1055 (16%)
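The percentages in the overview can be reproduced directly from the reported counts:

```python
# Replacement counts from the overview above: year -> (replaced, total, security)
replacements = {
    2009: (208, 882, 102),
    2010: (213, 885, 138),
    2011: (166, 876, 95),
    2012: (323, 1055, 168),
}

def pct(part, whole):
    """Percentage, rounded to the nearest whole number as in the report."""
    return round(100 * part / whole)

shares = {year: (pct(replaced, total), pct(security, total))
          for year, (replaced, total, security) in replacements.items()}
# shares[2012] is (31, 16): almost one in three sampling points replaced,
# roughly half of those for security reasons
```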
Additionally, there is the problem of non-responses and refusals, which every poll faces, where you manage to locate and reach the area and select the households you want to interview, but find them consistently not at home, unwilling to participate or unable to do the interview for other reasons. In 2012 this was the case 1608 times (see page 189). This is not necessarily an unusually high number, but together with the replacements it means that well over half of the current sample consists of different people than were originally selected.(3)
Not all replacements of sampling points or interviewees are significant, as not all of them are likely to lead to the inclusion of respondents with a consistently different outlook. But some of them are, and if this is a large number it is problematic. This is most obvious when it concerns areas that are insecure, difficult to access, or not properly registered, but it could also affect communities with high levels of internal displacement or labour migration, or marginalised groups suspicious of strangers. Sometimes biases are discovered and fiddled with in an attempt to lessen their impact: when, for instance, the percentages of educated people or urban dwellers are found to be considerably higher than the CSO data suggest, the poll data are often ‘weighted’ (which is an accepted, but opaque, practice). But for other variables, such as level of insecurity, there is no baseline data available, which means that some biases are corrected while others are not.(4)
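The weighting itself is straightforward arithmetic: each group’s weight is its population share divided by its sample share. The figures below are invented for illustration; the point is that the correction requires a baseline, which exists for urban/rural shares but not for insecurity:

```python
# Invented shares for illustration: CSO-style population baseline vs achieved sample
population_share = {"urban": 0.24, "rural": 0.76}
sample_share = {"urban": 0.32, "rural": 0.68}   # urban respondents over-represented

weights = {group: population_share[group] / sample_share[group]
           for group in population_share}
# Urban respondents get a weight of 0.75, rural respondents roughly 1.12; after
# weighting, the sample matches the baseline exactly. For insecurity no such
# baseline exists, so that bias goes uncorrected.
```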
The impact of insecurity and inaccessibility on the survey was unevenly, and in some cases implausibly, spread across the country. In the Hazarajat and the Northeast, unsurprisingly, most of the sampling point replacements were due to remoteness, transportation problems, weather or because the village could not be found; in the East, Southeast and Southwest most replacements were due to insecurity. The Southwest, like last year, was found to be surprisingly permissive. With only fifteen sampling points replaced – 14 because the Taleban controlled the district or for other security reasons (see page 199 of the report for details) – it represented only 5% of the total replacements and 8% of the replacements for security reasons, while making up 11% of the sample. There were, moreover, no replaced sampling points in Helmand and Uruzgan, two provinces with extensive Taleban activity (although to assess exactly how implausible this is, one would need to see the actual sampling points that were included in the survey).
How interviews are done
Apart from the question whether the sample is a reliable representation of the Afghan population, there is the issue of whether the survey findings reliably represent the opinions of the people who were interviewed. This relates to whether the interviews were conducted properly, whether the respondents understood the questions, whether they answered the questions truthfully and, finally, whether the questions were actually relevant to the subject at hand. Many analysts have in particular commented on the likely influence of ‘social desirability’, where respondents give the answers that they think are expected of them or that they believe are most likely to keep them out of trouble. It is relevant in this respect that the vast majority of interviews took place with other people present (only 33% of the interviews were conducted with just two people present; in all other cases there were more).
All of this is, finally, of course assuming that the interviews actually took place as reported. As any organisation involved in a countrywide activity – whether it is a polio vaccination campaign, the distribution of election awareness material, the payment of police salaries or voting on election day – knows, it is a continuous and uphill struggle to ensure that the work is done as instructed, and it is rarely as unproblematic, well-managed and monitored as reported.
The question therefore remains whether it is possible to conduct reliable opinion polls in a country with such high levels of insecurity, such difficult access, such low levels of education and so little reliable baseline data. Polling organisations and their commissioning agencies continue to dismiss these misgivings, providing detailed descriptions of all stages of the survey process, arguing that they meet the standards of rigorous and internationally accepted methodology, and pointing to the reach of their quality control measures. But the reasoning is somewhat reminiscent of all the diplomats and election watchers who argued (in 2005, 2009 and 2010) that electoral fraud could and would be managed by the fraud mitigation measures that had been put in place. As it turned out, quality control measures only work when they are actually implemented and upheld. And that is not so easy to guarantee.
(1) Consider, for instance, this sequence of headlines and quotes:
3 December 2007 (ABC/BBC/ARD): ‘Afghans “Still Hopeful” on Future’:
Most Afghans are relatively hopeful about their future, an opinion poll commissioned by the BBC has suggested. They also support the current Afghan government and the presence of overseas troops, and oppose the Taleban. But the poll suggests that Afghans are slightly less optimistic than a year ago, and are frustrated at the slow pace of reconstruction efforts.
27 October 2009 (Asia Foundation):
In 2009, more respondents say that the country is moving in the right direction and fewer say it is going in the wrong direction than in 2008, signaling a check on the trend of declining optimism that had been evident since 2006.
11 January 2010 (ABC/BBC/ARD): ‘Views Improve Sharply in Afghanistan, Though Criticisms of the U.S. Stay High’:
Hopes for a brighter future have soared in Afghanistan, bolstered by a broad rally in support for the country’s re-elected president, improved development efforts and economic gains. Blame on the United States and NATO for violence has eased – but their overall ratings remain weak.
3 September 2010 (ABC/BBC/ARD): ‘Optimism Holds in Afghanistan; Support Grows for Talks with Taliban’:
Just-released polling data finds that optimism among Afghans remained surprisingly durable in the first half of 2010.
1 April 2011 (Craig Charney and James Dobbins): ‘Afghanistan’s reasons for optimism’
And the list goes on. All of these polls were, incidentally, based on data collected by ACSOR, a Kabul-based subsidiary of D3 Systems Inc. Data analysis (and narrative peddling) were done by either Charney Research or Langer Research Associates. This near monopolisation of the field of Afghanistan’s – publicly available – opinion polling makes it more difficult to compare and verify the validity and replicability of the findings.
(2) An occasional paper drafted for the Asia Foundation by this author in late 2011 may have contributed to this addition. The paper discussed the 2011 poll’s findings on reconciliation, but also explored the possible limitations of the findings in general and pointed to the relevance of the large number of sample replacements. The paper remains unpublished to date.
(3) Given that each sampling point is said to represent 6 interviews, the replacements impacted 1938 interviews. This brings the total of interviews that had to be changed because either the area could not be reached or because the person could not be interviewed to 3546 out of a total of 6290.
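The arithmetic behind this footnote, using the figures cited above from the report:

```python
replaced_points = 323        # sampling points replaced in 2012
interviews_per_point = 6     # interviews each sampling point is said to represent
non_responses = 1608         # refusals / not-at-homes etc. (report, page 189)
total_interviews = 6290

replaced_interviews = replaced_points * interviews_per_point   # 1938
changed_total = replaced_interviews + non_responses            # 3546
share_changed = changed_total / total_interviews               # ~0.56, i.e. well over half
```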
(4) See also the April 2012 piece by AREU’s Oliver Lough in response to the Asia Foundation’s Election Perception Survey. During the survey 65% of the respondents said they had voted, while 34% said they had not. Although for many reasons we will never know how many people really voted in 2010, nobody would suggest that it had been anywhere near 65%. The official IEC estimate was 40%, but this was not corrected or caveated for fraud-induced inflated numbers. It is not clear whether this overrepresentation of people who said they voted is due to a ‘safer area bias’ (although in safer areas turnout was often low as well), the ‘social desirability’ effect on the part of the interviewees, or some overenthusiastic massaging of the data somewhere along the line.
This article was last updated on 31 Mar 2020