Afghanistan Analysts Network – English


Paint It Pink: The US redefining ANA success

Gary Owen

Twice a year, the United States Department of Defense (DoD) reports to the congressional defense, appropriations, and foreign relations committees on the ‘progress’ of the conflict in Afghanistan. AAN guest blogger Gary Owen, a former soldier who now works as a civilian development worker,(*) has been scrutinizing the reports, teasing out, in particular, what they say about the Afghan National Army (ANA), so beloved of ISAF. He argues that in the face of ISAF’s bulldozer of a PR campaign to portray the ANA as ready to shoulder the nation’s security, it is critical to look at the metrics. He finds that, since January 2009, the DoD has changed its standards so regularly that it has become practically impossible to measure the ANA accurately. The DoD and, indeed, ISAF are spinning ANA success. Note from the editors: some of our readers might find this blog a little technically challenging. We urge you to press on to the end as it is an eye-opener of a read.

The DoD has been required to regularly submit two reports on the progress of the conflict in Afghanistan since the National Defense Authorization Act of fiscal year (FY) 2008. One report is concerned with the progress made by ISAF, the Afghan National Security Forces (ANSF) and Provincial Reconstruction Teams (PRTs), on counter-narcotics, the fight against corruption, regional considerations and ‘performance indicators and measures of progress toward sustainable long-term security and stability.’ The other is the annual plan for ‘sustaining’ the ANA and the Afghan National Police (ANP). Together they are usually referred to as the ‘1230 report’ or simply the ‘1230’.(1)
So twice a year, apart from annexes deemed particularly sensitive, the public can get to see how the DoD is charting its own progress.(2) If you can wade through the figures and noun-strings of jargon, these reports are extremely revealing, interesting in what they say and do not say about the war in Afghanistan. Here, for example, you can find the public record of how the DoD thinks the ANA is doing – and this will be the subject of this blog.

The ANA is important. Huge Afghan and international efforts have focused on turning it, in a matter of years, from a nearly inept fighting force into one which, it is hoped, will be capable of securing the nation. The ANA is critical to ISAF and US strategy in Afghanistan. In the war of narratives, ISAF has also been putting a huge effort into a highly focused messaging campaign to convince everyone that the ANSF, and especially the ANA, is getting better. (This extends even to ISAF’s tweeting: @ISAFMedia uses the hashtag #ANSFCanDo for any story that supposedly shows the ANSF doing anything. The clumsiness and unintended comedy of these tweets make them worth following.) However, nowhere is this trend to talk up the ANA more evident than in the 1230 reports.

Before getting into the detail of what successive 1230 reports have claimed for the ANA, a few points are worth noting up front:

• In its reporting on the ANA, the DoD consistently changes the standards: as with nearly every other measurable metric relating to progress in the 1230 reports, the DoD keeps changing how it defines success. (On this subject, also see a CCN blog here.) In addition, because commanders change regularly at multiple levels, ISAF is able to claim that the previous report’s statements of poor progress (when they were made) were due to some procedural or reporting issue, which has now been addressed, so that everything, going forward, will be better.

• What the DoD has to say about the ANA makes it clear that what ISAF says about the ANA is, in most situations, technically true but spin. It seems clear to me that this spin is intentional, phrased in such a way as to paint what is actually a false picture of how well the mentoring of the ANA is going.

• None of the information shared in this post is confidential: all the documents were obtained via the internet from the DoD’s own website. This is the most intriguing aspect of these reports. It is an indication of how little the American public pays attention to this conflict, and the reporting of it, that these reports do not provoke more of an outcry, since they often contradict themselves or show a consistent pattern of changing standards that can only be explained as the DoD and ISAF working to paint things in a better light than reality warrants.

One of the terms that will appear throughout the article is the word kandak, the Afghan army’s term for its battalion-sized elements. According to the most recent 1230 report, there are 156 kandaks in the Afghan National Army. Progress at the kandak level is essential, since in modern conventional warfare the kandak/battalion of approximately 400-600 soldiers is the smallest unit that can operate independently. This is because support assets are allocated no further down than the kandak level: subordinate units all rely on the kandak for their support activities. Granted, the kandak cannot operate indefinitely without the support of higher-level elements, but in the execution of warfare, as the kandak goes, so goes the army as a whole.

So how many of those kandaks are able to operate effectively is key for any analysis of the capability of the army as a whole to function, and the DoD rightly focuses its measurement efforts at the kandak level. Higher and lower levels of the army also receive mentoring, but the primary effort is focused on the kandaks.

One of the key questions that needs to be answered as Afghanistan faces completion of transition in 2014 is how many of those kandaks are able to operate on their own. Notice that I do not use the word ‘independently’ since this is a word that ISAF consistently redefines – as will be seen below – and using it here would mislead the reader. What follows is a distillation of how the DoD has sought to present progress on the ANA and its kandaks over successive years.

June 2008 

The first 1230 report was released, with the DoD reporting the following on the state of battalions (kandaks) in the Afghan National Army:

As of March 2008, the ANA reported one battalion and 1 Corps HQ as rated at Capability Milestone (CM) 1: capable of operating independently.

Note the use of the word ‘independently.’ This was the highest rating that the DoD/ISAF could give. At that time, the four ‘Capability Milestone’ (CM) ratings were defined as follows:

CM1: capable of operating independently.
CM2: capable of planning, executing, and sustaining counterinsurgency operations at the battalion level with international support. (emphasis added)
CM3: partially capable of conducting counterinsurgency operations at the company level with support from international forces.
CM4: formed but not yet capable of conducting primary operational missions. 

The emphasis has been added to the ‘CM2’ rating because how the DoD defines ‘international support’ becomes critical in later 1230 reports.

January 2009

2009 was a bit of an aberration, as the DoD apparently released three reports on the progress of the war. In January, the DoD gave the following assessment on the state of the Afghan National Army at the kandak level:

As of November 2008, the ANA had seven battalions and one brigade and one corps headquarters rated at Capability Milestone (CM)1: capable of operating independently.

The definitions for the CM ratings had not changed, so this was a definite sign of progress. Since the previous report, six more kandaks had become ‘capable of operating independently.’

June 2009 

This report offers little in the way of revelatory information, except to show that more kandaks (or kandak equivalents) were moved up into the CM1 category. By that point 29 kandak/kandak equivalent units had achieved a CM1 rating.

October 2009

By mid-2009, it appeared that steady progress was being made in developing the ANA so that it could assume full security responsibility in 2014. However, in October of 2009, the first signs of trouble started to appear in the DoD’s progress assessments:

Improvements in command and control, training, and equipment readiness have increased several ANA units’ CM ratings. However, it is important to understand fully what CM ratings measure and indicate. CM ratings simply depict the manning, training, and equipment of a unit. The correlation between CM ratings and operational capability to complete assigned missions is tenuous, and thus attempting to draw the conclusion that a CM rating is an indicator of the capacity for success in operations can be misleading. The actual ability of the ANA to plan and conduct operations with the assistance, and eventually without the assistance, of the international community remains to be seen. We expect to be able to assess the impact of the increased emphasis on partnered operations and embedded training by GEN McChrystal in the next reporting period. (emphasis added)

Up until the October 2009 report, no such caveat had appeared in any of the 1230 reports, and this kind of explanatory language began to figure heavily in succeeding reports. It seems the warning was made because the ISAF leadership had realized that training was not going as well as they had been reporting, and they wanted to prepare their readership for a less favorable next report. The October 2009 report was released around the time that ISAF stood up the NATO Training Mission – Afghanistan (NTM-A), which formally began operations in November of that year. The decision to stand up NTM-A was made in April 2009, with the intent of focusing ANSF training efforts under a coherent command. Until that point, training had been, if not haphazard, then poorly organized, and it was decided to concentrate all the efforts through NTM-A under the command of LTG William Caldwell.

April 2010 

The following assessment was presented to Congress and the public:

As of May 2009, 22 ANA combat units were CM1, 14 were CM2, and 14 were CM3. As of March 2010, 22 ANA units were CM1, 35 were CM2, and 28 were CM3. The slow progress in ANA kandaks achieving CM ratings over the last year has multiple causes, many which [sic] have been described above. (emphasis added)

Until this point, the DoD and ISAF had been reporting steady, multi-unit increases in the number of kandaks that were rated as CM1. Now suddenly, over the previous year, between May 2009 and March 2010, no fresh kandaks had been assessed as achieving the top rating of CM1. There were shifts in the lower rankings, but the key rating all kandaks need in order to be fully capable of transition is CM1, and that number remained flat. The DoD had an explanation for that:

High attrition and low retention have resulted in a large number of new personnel cycling into units. Additionally, many of the units that are not achieving CM1 ratings are in the south with the 205th Corps, which has had increased operational tempo. Ongoing combat operations since January 2010 have had a negative rate on manning, equipping, and training in these kandaks, which caused a downgrade in CM ratings. Finally, throughout the entire ANA, there is a shortage of trained and competent leadership in the officer and NCO corps that has affected the quality of the kandaks. COMISAF’s implementation of embedded partnering should help counter some of these negative trends in the upcoming months. (emphasis added)

That last line is almost identical to the ‘way forward’ statement given in the previous 1230 report: there were issues, but McChrystal’s plans were going to fix everything. (Reading on, we will see that this was not to be the case.) In April 2010, the DoD also added another important caveat about the ‘CM’ ratings:

It is also important to note that the current CM ratings look only at the manning, training, and equipping of a unit, so a combat unit can be operationally effective without necessarily being rated at CM1. COMISAF is looking at alternatives to CM ratings. An overall assessment of district security forces would provide a more comprehensive look at the development of both the ANSF and the security situation in a district.

In the author’s own experience of serving in the military, if a unit, particularly a battalion-sized element like a kandak, was not properly manned, trained, or equipped, its operational effectiveness was hindered a great deal. This statement also directly contradicts previous reports. In October 2009, for example, the reader was warned that a CM1 rating might not mean a kandak was operationally capable. Now, six months later, in the April 2010 report, we are warned against assuming that a lower-rated unit, even one that is not properly manned, trained, or equipped, is incapable of conducting operations.

None of this makes any sense from a practical military standpoint. If the evaluation ratings were not accurate, then ISAF/NTM-A needed either to a) change the ratings, or b) make sure a unit fits the rating it has been given. The DoD actually managed to do quite a bit of both in the coming reports.

November 2010

After realizing that the ‘CM’ system wasn’t necessarily meeting their evaluation needs, the DoD and ISAF came up with a new rating system in the November 2010 1230 report:

Since the previous report, IJC changed the ANSF assessments process from the Capability Milestone (CM) Rating System to the Commander’s Unit Assessment Tool (CUAT) and Rating Definition Level (RDL) system. The new rating systems measure operational effectiveness and readiness in comparison to the CM Rating System, which measured the preparedness. (emphasis added)

The description of how and why the assessment has changed is likely to baffle all but the initiated – those who are hard-core, please read the text of the report below(3) – but basically the report’s writers say that the purpose behind the change is to reflect more accurately the actual capabilities of the kandaks.

The report then goes on to say that:

RDLs [Rating Definition Levels] are specific, measured ratings with clear definitions that assess areas not considered under previous rating schemes. The RDL system differs fundamentally from the CM rating methodology, and therefore the two systems should not be compared. The RDL definitions are described in the attached unclassified annex. (emphasis added)

Despite this, the ‘CM’ system was still used to assess larger elements of the ANSF.
The November 2010 report says:

NTM-A/CSTC-A [NATO Training Mission-Afghanistan/Combined Security Transition Command-Afghanistan] uses the CM rating system to track the development of the MoD, MoI, ANA General Staff, and sustaining institutions and intermediate commands. An enhanced CM rating system has been developed for greater precision in measuring progress. The ratings CM-4 and CM-3 remain the same. Ratings CM-2 and CM-1 are parsed further into CM-2B, CM-2A, CM-1B, and CM-1A. 

Getting back to the new RDL system, the DoD was now ranking kandaks in the following way:

Independent: unit is capable of the full spectrum of its missions without assistance from Coalition Forces (emphasis added)
Effective with Advisors: the partnered coalition unit does not exceed a limited guidance role
Effective with Assistance: unit is capable of executing operations and providing regional security with varied partnered unit assistance
Developing: the unit is one whose capability is dependent on partnered unit presence and assistance
Established: unit is not capable of executing or sustaining operations even with partnered assistance

Using this new system, the number of independent kandaks (the old CM1 rating) had dropped to zero. This was a bit of a setback in the effort to transition security responsibility to the ANA (although this information did not make its way into any of the strategic communications reporting). The lack of any analysis of this setback, or of an explanation of why it happened, is another indication of the DoD’s unwillingness to be transparent about the overall transition effort as it relates to the ANA.

April 2011

The April 2011 report stated:

Currently, no units have been validated as ‘Independent.’

In other words, not a single kandak, brigade headquarters, division headquarters or corps headquarters was rated as ‘independent.’

October 2011

Six months later, one kandak had managed to achieve an ‘independent’ rating. However, rather than making sure more kandaks would actually become ‘independent,’ the DoD and NTM-A changed their definitions yet again:

Prior to the spring campaign, IJC [ISAF Joint Command] reviewed the definition of an Independent unit and concluded that the definition was too restrictive and would be difficult for any ANSF element to attain. As a result, IJC rewrote the definition of an Independent unit to reflect the reality that most ANSF force enablers will likely require long-term coalition assistance… In addition, the Independent rating was renamed ‘Independent with Advisors’ in order to emphasize that an assistance relationship must be maintained by coalition forces. The ratings in the next CUAT cycle will incorporate the new names for the RDLs.

April 2012

It was not surprising that, by April 2012, there was a great leap forward in overall (reported) ANA readiness:

The overall operational effectiveness of the ANA continued to improve during this reporting period… the number of units (kandaks) rated as ‘Independent with Advisors’ increased from 1 to 13.

There was no longer even the aim of a kandak becoming completely independent: the highest rating that could now be achieved was ‘Independent with Advisors.’ The consequences for reporting were remarkable: a marked increase in the number of kandaks/kandak equivalents receiving NTM-A’s highest assessment rating. DoD/NTM-A attributed this to a few things:

Rating increases are attributable in some part to the change in the ‘Independent with Advisors’ RDL that reduced unit personnel and equipment levels from not less than 85 percent to not less than 75 percent. More importantly, rating increases are attributable to improved ANA performance and ability to plan and execute missions and maintain command and control of subordinate units.
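The arithmetic of that threshold change can be sketched in a few lines. This is an illustrative calculation only: the 600-soldier authorized strength is a notional figure taken from the 400-600 soldier range for a kandak mentioned above, not from the report itself.

```python
import math

def min_strength(authorized: int, threshold: float) -> int:
    """Smallest personnel count satisfying a 'not less than' threshold."""
    return math.ceil(authorized * threshold)

AUTHORIZED = 600   # notional kandak strength (kandaks run roughly 400-600 soldiers)
OLD_FLOOR = 0.85   # 'not less than 85 percent' under the old definition
NEW_FLOOR = 0.75   # 'not less than 75 percent' after the change

old_min = min_strength(AUTHORIZED, OLD_FLOOR)  # soldiers required before
new_min = min_strength(AUTHORIZED, NEW_FLOOR)  # soldiers required after

# The same kandak can now field 60 fewer soldiers than before, and be short
# a full quarter of its authorized strength, yet still qualify for the
# 'Independent with Advisors' rating.
print(old_min, new_min, AUTHORIZED - new_min)
```

For a notional 600-soldier kandak, the floor drops from 510 to 450 soldiers, meaning a unit missing 150 personnel can still hold the top rating.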

So a unit that is ‘independent with advisors’ could now be short up to 25 per cent of its personnel and equipment and still be considered fully capable (with advisors) of conducting security operations. Improved ANA performance, if real, would indeed be a positive development, but in case that was not going to lift the ratings enough, NTM-A had another solution – self-assessment:

Prior to January 31, 2012, the Validation Transition Team (VTT) was tasked by IJC with validating any unit that received a CUAT rating of ‘Independent with Advisors’ by the Regional Commands. The IJC procedure was to not report a unit assessed by the RCs as ‘Independent with Advisors’ until the VTT could validate the rating. Instead, units would remain rated at the ‘Effective with Advisors’ level until the validation was complete. However, after January 31, 2012, the requirement for outside validation for newly reported ‘Independent with Advisors’ units was eliminated, which has resulted in the recent increase in ‘Independent with Advisors’ units. The new process places greater emphasis on the ratings from the units partnered with the ANSF, who have first-hand knowledge of the unit’s performance. In the future, there will continue to be increases in the number of independent units, although this is expected to be at a more gradual rate. (emphasis added)

Rather than having the rating of the partnered unit validated by an outside body, ISAF/NTM-A was now allowing the international units training their ANA counterparts to assess the effectiveness of those units (and by extension assess the effectiveness of the training they were conducting).

This is unheard of in military operations. Every deploying unit headed to Iraq or Afghanistan to conduct combat operations has had to undergo some kind of validation exercise. That means the deploying unit would go either to the Joint Readiness Training Center at Fort Polk, Louisiana, or to the National Training Center at Fort Irwin, California. There, it would undergo a validation exercise overseen by observer-controllers at those installations. If a brigade element was considered unprepared for deployment, that could mean a variety of things, but for its commander it was usually a certain route to being relieved of command.

So even in the United States’ own military, which is widely acknowledged to be among the best trained and equipped in the world, the practice of self-evaluation for readiness is not done. How, then, in an environment like Afghanistan, with all of its constraints and difficulties, would it make any sense to make that practice a policy?

The answer is simple: it is imperative that the DoD/ISAF/NTM-A demonstrate progress in transitioning security responsibilities to the ANSF and, in this case, the ANA. The old system, whereby the units were evaluated by an external element, was obviously not yielding the progress necessary to complete (or be seen to be completing) that task successfully. Changing who assessed the kandaks, however, produced immediate, positive results.
Rather than implementing what would be necessary to ensure real ANA success – more trainers, an extended timeline past 2014, more security funding – ISAF/DoD/NTM-A continues to revise its definitions of success, moving the bar ever lower in an attempt to demonstrate that the ANA is indeed ready to assume security duties.

It is not unusual for any organization facing the level of difficulty that ISAF does to engage in a certain amount of spin when reporting information. However, in today’s Afghanistan, such spin is going to have some potentially disastrous consequences. The reader should bear in mind that this blog has not addressed the other challenges facing the ANA: recruiting, attrition, retention, and logistics. But those issues, coupled with the evident lack of training capability demonstrated by the DoD’s own reporting mechanisms, lead to the conclusion that the ANA of 2014 will not be prepared to secure the country of Afghanistan. And that is something no amount of ‘spin’ will be able to address.

(*) Gary Owen is a civilian development worker who has spent the last three years in Afghanistan, working in Ghazni, Gardez, Khost and Kabul provinces. Previously, he spent 21 months in Iraq on two different deployments: in 2004, as an infantry officer in Taji, and, in 2008, as a civil affairs officer in Tikrit.

(1) The specific sections are 1230 and 1231 of that year’s authorization, which direct the President of the United States and the Secretary of Defense in the following manner:

Section 1230 – Directs the President to report semiannually through FY2010 to the defense, appropriations, and foreign relations committees on progress toward security and stability in Afghanistan. Outlines matters to be included in such report, including: (1) strengthening the NATO International Security Assistance Forces; (2) Afghanistan National Security Forces capacity-building; (3) provincial reconstruction teams and other reconstruction and development activities; (4) counter-narcotics activities; (5) aid in fighting public corruption; (6) regional (geographic) considerations; and (7) performance indicators and measures of progress toward sustainable long-term security and stability. Directs the Secretary to supplement each required report with regular briefings to such committees.

Section 1231 – Directs the Secretary to submit to the defense, appropriations, and foreign relations committees an annual plan for sustaining the Afghanistan National Army and the Afghanistan National Police of the Afghanistan National Security Forces with the objective to ensure and maintain long-term security and stability in Afghanistan.

(2) For the sake of simplicity, the report is referred to in this blog as either the ‘1230 report’ or simply the ‘1230,’ even though most of the blog focuses primarily on the 1231 section of the report, ie the sustainment of the ANSF. The DoD did release other progress reports prior to the implementation of the 1230 series of documents, but what the 1230s offer is the ‘public record’ of the efforts of the DoD – and, given the leading role of the United States in this endeavor, of all of the North Atlantic Treaty Organization’s (NATO) International Security Assistance Force (ISAF) – in attempting to stabilize the country of Afghanistan, and this is important.

(3) The CUAT assesses the ANSF using qualitative methodology that is underpinned by quantitative data. The RDL system allows for the subjective evaluation of capabilities or functional areas in ANSF units, as well as objective evaluation of status report information, such as personnel, logistics, and training data and statistics. The CUAT enables the Coalition force partner to address the challenges, strengths, weaknesses, opportunities, and threats, in a narrative report. In addition, the CUAT addresses systems and doctrine, as well as simple status problems, and encourages problem solving within the chain of command. The CUAT provides inputs to a common database, thereby providing open access for analysis.