Blog | 7 February 2023

The DAC debates: why aid measurement matters for development

The way we measure aid affects the type and quantity provided as well as our perceptions of it. Why are the DAC's rules controversial?

Written by Euan Ritchie

Senior Development Finance Policy Advisor

According to the Organisation for Economic Co-operation and Development’s (OECD) Development Assistance Committee (DAC), its members gave US$186 billion in Official Development Assistance (ODA) in 2021.[1] However, recent decisions by DAC members about how aid should be counted have provoked controversy, and this figure is highly contested. The details are often technical and might seem far removed from the important work of meeting the Sustainable Development Goals (SDGs). But these rules matter: they affect how we think about wealthy countries’ generosity, and if they do not accurately reflect 'donor effort' – the concept they are intended to measure – they risk creating incentives that distort donors’ behaviour or reduce the total amount that they give. This blog reviews some of the recent controversies and argues that, for more effective rule-setting, statisticians at the DAC should have more authority and discussions about aid rules should include the people aid is supposed to help.

Debates intensify on a range of topics

For as long as wealthy countries have given aid, there have been critics of how that aid is measured.[2] However, in recent years the clash has intensified between civil society, think tanks and independent analysts on the one hand, and the rule-setters – DAC members – on the other. Critics have written strongly worded letters to national newspapers and the OECD, and equally strident responses have followed. Civil society organisations (CSOs) have expressed concern over topics including whether excess vaccine donations, Special Drawing Rights (SDRs) and private-sector instruments should be counted as ODA; research organisations have hosted discussions and workshops; and ex-DAC statisticians have published critical academic articles.

The individual issues at stake are varied. One frequent criticism is that the proportion of each loan counted as ODA is higher than it should be given the actual fiscal effort involved in lending, and numerous studies back this up quantitatively. Another relates to the inclusion of debt relief on ODA loans, which critics say double-counts the risk associated with those loans and gives rise to bizarre accounting anomalies. Relatedly, the DAC has not yet managed to resolve how private sector instruments should be counted, and consequently there are currently two separate ways of doing so that give different results. Including such instruments at all has been criticised because it appears to abandon the concessionality requirement, supposedly core to the definition of ODA.
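To see why the first criticism arises, it helps to look at how a loan's grant equivalent is calculated. The sketch below is illustrative only: the loan terms and the 2% donor funding cost are hypothetical assumptions, the repayment schedule is simplified to annual payments, and the 9% figure reflects the DAC's discount rate for loans to the poorest countries (a 5% base rate plus a risk premium). The point is simply that the share of a loan's face value counted as ODA depends heavily on the discount rate chosen.

```python
# Sketch: grant element of a concessional loan under two discount rates.
# Loan terms and the 2% funding-cost rate are hypothetical, for illustration only;
# 9% is the DAC discount rate for loans to LDCs and other low-income countries.

def grant_element(face_value, interest_rate, maturity, grace_period, discount_rate):
    """Grant element = 1 - PV(repayments) / face value.

    Assumes equal annual principal repayments after the grace period and
    interest charged annually on the outstanding balance (a simplification
    of the DAC's actual repayment conventions).
    """
    principal_payment = face_value / (maturity - grace_period)
    outstanding = face_value
    pv = 0.0
    for year in range(1, maturity + 1):
        interest = outstanding * interest_rate
        principal = principal_payment if year > grace_period else 0.0
        pv += (interest + principal) / (1 + discount_rate) ** year
        outstanding -= principal
    return 1 - pv / face_value

# A 30-year loan at 1% interest with a 5-year grace period:
for label, rate in [("DAC discount rate (9%)", 0.09),
                    ("Illustrative donor funding cost (2%)", 0.02)]:
    ge = grant_element(100, 0.01, 30, 5, rate)
    print(f"{label}: grant element = {ge:.0%}")
```

With these assumed terms, roughly two-thirds of the loan's face value counts as ODA under the DAC's 9% rate, compared with around 15% when the same repayments are discounted at a 2% funding cost. That gap between recorded ODA and the donor's own economic cost is what critics have in mind when they argue the rules overstate fiscal effort.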

This list is far from comprehensive: further controversies include in-donor refugee costs, loans of the IMF’s Special Drawing Rights, migration-related ODA and many others. Some of these issues are mostly a concern for purist statisticians: for example, it is odd to mix up cash-flow and grant-equivalent measures (which are quite different concepts), but it is hardly a matter that advocates will rally around. Others have been major complaints of civil society for years, such as the amount of ODA counted as in-donor refugee costs, or debt relief on export credits that were neither developmental in nature nor concessional in the first place; each of these has accounted for more than 10% of ODA in previous years.

This is not just bean-counting: three key reasons for caring about the rules for measuring aid

They affect how we think about aid and its effectiveness:

Researchers frequently use aid data to rank donors on their generosity, or to assess total flows to low- and middle-income countries. While knowledgeable users are able to navigate the details (DI has long produced work on 'unbundling aid' to provide a more nuanced picture), not all are so diligent, including those who want to undermine the case for aid. Such people point to aggregate aid numbers and claim that the impact should have been bigger given the amounts spent, without acknowledging that much of this aid is actually spent in the country giving it. Development Initiatives has analysed such 'non-transfer' aid and found that it accounted for around 14% of total ODA in 2021.

They affect the quantity of aid provided:

Many aid providers care about being seen to be generous, as measured by the ratio of ODA to Gross National Income (GNI). Rules that allow providers to count more aid for any given level of activity make it easier to meet targets while doing less. An extreme example is the UK, which treats its aid target as a fixed ceiling, meaning that the more ODA is counted against one transaction, the less is spent elsewhere. While the UK is unique in this respect, it is not the only country that cares about meeting a particular ODA/GNI target, so the risk that lenient rules displace genuine development activity applies to all countries. For some decisions, this could mean billions of dollars of ODA being displaced.

They affect the type of aid provided:

When the rules exaggerate the aid recorded on some transactions, they risk creating an incentive for donors to favour those transactions even if a different balance would be more appropriate. For example, researchers have pointed out that donors are able to claim significant amounts of ODA on loans that, on average, make them money. Given that, there is likely to be pressure from national treasuries to favour such loans over grants, even in situations where grants would serve better.

For all these reasons, sound and robust statistical rules for measuring ODA are important, but many fear we are moving in the wrong direction. Why might this be the case?

The rules are set by diplomats from wealthy countries, not statisticians

Most important statistical measurements have a similar governance structure. A broad group of expert statisticians will draft a detailed reference manual (such as the System of National Accounts, which forms the basis of GDP measurement) and – at least in countries adhering to best practice – this will then be implemented by independent national statistics offices (for example, the Office for National Statistics in the UK).

By contrast, although there is expert input from statisticians, it is diplomats who have the final say on how ODA should be measured, with representatives from wealthy countries setting the rules. Discussions on these rules are led by the chair of the DAC, and a secretariat (with statistical expertise) prepares background documents that inform the discussions. But the rules are only finalised when the diplomats reach a consensus, and inevitably they end up reflecting members’ collective interests. This might not be such a problem if the DAC were characterised by broad representation, with the views of partner countries given equal weight. But that is not the case, and as a result it is easier to agree on rules that allow more aid to be counted than on rules that make sense from a statistical standpoint. If recipients as well as providers of aid were setting the rules, reaching consensus would be no easier, but the DAC would have more legitimacy and would probably set better rules.

Both the DAC chair and the secretariat have a difficult job. Consensus is difficult to broker, as demonstrated by the discussions about the correct price at which to count vaccine donations as ODA and about how to measure ODA on private sector instruments. Often, the secretariat’s initial proposals are far more in line with what external analysts believe is appropriate than the rules eventually agreed. For example, the secretariat previously stated that, under the grant-equivalent system, no additional ODA should be counted on debt relief on ODA loans – a battle it evidently lost. In a recent discussion, outgoing DAC chair Susanna Moorehead stated that measuring ODA is “politics, not statistics”. She is right, but as long as that remains the case, accurate measurement will be challenging, and this matters for the reasons set out above.

With other statistics, concerns about manipulation usually stem from failures to follow the rules rather than from the rules themselves. People may be sceptical of Uganda’s claim to be a middle-income country, or of Nigeria’s claim to have overtaken South Africa as the largest African economy in 2014, but the veracity of these claims will be judged by how closely the relevant measurements followed international guidance. With ODA, by contrast, the international guidance itself is seen as the problem.

All this suggests that statisticians should have greater authority over the rules on aid measurement, as they do for most widely used statistics, and that they should be drawn from all countries, not just wealthy ones. This would not free aid measurement from politics altogether. But it would restore credibility in the eyes of those who believe that the aid rules are self-serving.

This blog is part of a Development Initiatives series on debates concerning the rules for measuring official development assistance (ODA), and potential reforms to its governing body, the Development Assistance Committee. ODA is the most widely used statistic on international aid, and how we measure it matters.

Notes