Does weak state capacity affect learning outcomes? Ensuring a Learning India S4 E.002

State capacity, in simple terms, is the capacity of a state to implement its policies. This post discusses whether a lack of state capacity affects learning outcomes. It first discusses studies citing correlations (not causality) between state capacity and learning outcomes, then moves on to a more rigorous approach, and ends with an example to give a better sense of the effect in action. There is an active, ongoing debate on the various ways of measuring state capacity, which is beyond the scope of this post.

The first piece of evidence is the ‘Letter grading government efficiency’ experiment. Measuring state capacity means being able to measure the implementation of a policy, and it is a challenge to find a uniform policy implemented across many countries. Fortunately, all countries subscribe to an international postal convention requiring them to return letters posted to incorrect addresses. Taking advantage of this, the authors posted ten letters with incorrect addresses to each of 159 countries: two letters to each of the country’s five largest cities. They then tracked how many letters were returned and how long they took to return. If all letters return, it is a sign of stronger state capacity; if no letter returns, it is a sign of weak state capacity, with all other countries falling somewhere on the spectrum in between. The top and bottom countries are listed below.
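The two measures described above, share of letters returned and time taken, can be computed per country with a small sketch. The records below are hypothetical placeholders for illustration, not data from the study.

```python
# Hypothetical tracking records: one entry per letter posted.
# Field names and values are illustrative, not from the actual experiment.
letters = [
    {"country": "Finland", "returned": True,  "days": 30},
    {"country": "Finland", "returned": True,  "days": 45},
    {"country": "Somalia", "returned": False, "days": None},
]

def return_stats(records):
    """Per-country share of letters returned and mean days to return."""
    stats = {}
    for country in {r["country"] for r in records}:
        rows = [r for r in records if r["country"] == country]
        returned = [r for r in rows if r["returned"]]
        share = len(returned) / len(rows)
        mean_days = (sum(r["days"] for r in returned) / len(returned)
                     if returned else None)
        stats[country] = (share, mean_days)
    return stats

print(return_stats(letters))
# Finland: all letters back (share 1.0); Somalia: none back (share 0.0)
```

A country near the top of the resulting list would score high on both measures; a country near the bottom would score low on both.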



No letter returned from Somalia, while all letters returned from the US, Finland and several other countries. The table shows that more of the letters came back, and came back quicker, from countries with higher education levels than from those with lower levels. This doesn’t mean causation, though. Also, having state capacity is different from using it for education: a state can be good at delivering letters but poor at implementing other policies. With all these limitations and caveats, the correlation still gives an interesting insight.

The second piece of evidence is the ‘Institutional Quality’ metrics. Countries are divided into five clusters based on their legal, political and economic institutions.




“The quality of institutional proxies within each group is best examined when comparing the average values for each proxy between clusters for legal, political and economic institutions, respectively. Based on that, we can also interpret the average cluster characteristics. Cluster 1 is really bad, as it scores consistently around one standard deviation below the average in all three institutional groups. In contrast, cluster 5 is doing extremely well, as it scores consistently one or more standard deviations above the average. Cluster 4 is also good, with having most of the institutional proxies well above the average. The two more interesting clusters are clusters 2 and 3. In terms of the legal environment, cluster 2 scores poorly on the quality of courts and protection of property rights, and reasonably well on freedom of the press, civil liberties and interference of religion; the opposite goes for cluster 3. In terms of the political environment, cluster 2 and 3 mostly have average scores, with the exception of cluster 3 doing very badly on the autocracy versus democracy measure, checks and balances and democratical accountability, but very well on the corruption measure. Finally, in terms of economic institutions, cluster 2 is bad, with scores significantly below the average, while cluster 3 is doing well, with the same exception as before, that is the oppression of the press.”

Notice where the countries in the letter-return list above fall among these clusters. All letters returned from Finland, and it is in cluster 5; similarly, the countries from which no letter returned feature in cluster 1. Finland is also one of the top performers in the Programme for International Student Assessment (PISA) conducted by the OECD. Here again, these are correlations and needn’t be a causal link.

The third piece of evidence is the ‘Government Effectiveness’ rankings of the ‘Worldwide Governance Indicators’. These give percentile ranks, but they have been converted into ordinal ranks for the purpose of comparing them with PISA rankings. Government effectiveness indicators cover 200+ countries, while PISA rankings are available only for 70+ countries, so only the countries common to both lists are considered. Slovakia has a PISA ranking but isn’t in the list of government effectiveness indicators, and hence is removed from the analysis; Shanghai, China is also removed because a separate government effectiveness indicator isn’t available for it. The scatter plot of the rankings is shown below.
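The filtering step described above, keeping only countries present in both rankings, can be sketched as below. The country names and ranks are made-up placeholders, not the actual WGI or PISA data.

```python
# Hypothetical ordinal ranks for each source (lower rank = better).
gov_effectiveness = {"Finland": 4, "Korea": 9, "Chile": 35, "Peru": 60}
pisa = {"Finland": 3, "Korea": 2, "Peru": 63, "Slovakia": 45}

# Keep only countries that appear in BOTH lists.
common = sorted(gov_effectiveness.keys() & pisa.keys())
pairs = [(c, gov_effectiveness[c], pisa[c]) for c in common]

# In this sketch, Slovakia drops out (no government-effectiveness rank)
# and Chile drops out (no PISA rank), mirroring the exclusions above.
print(pairs)
```

The resulting rank pairs are what the scatter plot and the correlation are computed from.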


Note that the lower the rank (numerically), the better the government effectiveness and the better the learning outcomes. The Spearman rank correlation coefficient (the appropriate measure for ranked data; it is the Pearson correlation computed on the ranks) is 0.8 (0 meaning no correlation and 1 meaning perfect positive correlation). Countries with a better rank in government effectiveness also tend to have a better PISA ranking. This is merely a correlation and doesn’t control for other factors, but it is suggestive evidence.
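Since both variables here are already ordinal ranks, the rank correlation is just the Pearson formula applied to the ranks. A minimal sketch, using made-up rank pairs rather than the real data (and ignoring ties):

```python
import math

def spearman_on_ranks(x, y):
    """Pearson correlation applied to already-ranked data (no tie handling)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical ranks: mostly agreeing, with small swaps.
gov_rank  = [1, 2, 3, 4, 5, 6]   # government-effectiveness ranks
pisa_rank = [2, 1, 4, 3, 6, 5]   # PISA ranks
print(round(spearman_on_ranks(gov_rank, pisa_rank), 2))  # → 0.83
```

A value near 1 means countries occupy similar positions in both rankings, which is the pattern the scatter plot shows.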

The fourth piece of evidence is the ‘effect of institutions on PISA scores’. Institutions operate through rules and regulations, and power dynamics are an important part of them. This paper explores five such institutional features of an educational system and their effect on PISA scores:
  1. centralized exams
  2. the distribution of decision-making power between schools and their governing bodies
  3. the level of influence that teachers and teacher unions have on school policy
  4. the distribution of decision-making power among levels of government, from local to national
  5. the extent of competition from the private-school sector

The international mean of PISA scores is 500. The paper says: “taken together, the effects of all these institutional variables add up to more than 210 points in math and 150 in science. In other words, a student who faced institutions that were all conducive to student performance would have scored more than 200 points higher in math than a student who faced institutions that were all detrimental to student performance. In short, institutional variation across countries explains far more of the variation in student test scores than do differences in the resources devoted to education.”

Let us agree that institutions do matter, but what is the mechanism through which they affect learning outcomes? How does a lack of state capacity manifest itself in the process of education? A well-known experiment conducted in Kenya, the fifth piece of evidence, illustrates this.

Contract teachers showed significant gains in test scores in evaluations conducted both in India and in parts of Kenya, but those programmes were typically implemented by an NGO. In this study, the authors replicated the same intervention in Kenya, with the programme implemented by an NGO in some schools and administered by the government in others. The paper says: “NGO implementation produces a positive effect on test scores across diverse contexts, while government implementation yields zero effect. The data suggest that the stark contrast in success between the government and NGO arms can be traced back to implementation constraints and political economy forces put in motion as the programme went to scale.”

At the time of the study, there was an ongoing protest by teacher unions demanding permanent employment for contract teachers. The results suggest a negative effect on the performance of contract teachers in government-administered schools when they were exposed to this national controversy; exposure to the controversy had no effect on the performance of contract teachers where the programme was administered by the NGO. It is also important to note that, as per the data, government recruitment did not bring in lower-quality teachers, yet it still led to poorer outcomes. The prospect of permanent employment seems to have reduced the effort of contract teachers in schools with government-led implementation. This illustrates how incentive structures within the government bureaucracy affect the government’s capacity to implement a programme.

Thus, the evidence suggests that there is a relationship between state capacity and learning outcomes: weak state capacity affects learning outcomes negatively.

In the next post, this blog explores the relationship between state capacity and learning outcomes in the context of India.

Stay tuned. Do subscribe and please share the feedback. :)
