The illusion of expertise - Information is NOT theory

The New Yorker has a wonderful article titled "Why Facts Don't Change Our Minds", on the cognitive theory behind confirmation bias. This section caught my attention.

In a study conducted at Yale, graduate students were asked to rate their understanding of everyday devices, including toilets, zippers, and cylinder locks. They were then asked to write detailed, step-by-step explanations of how the devices work, and to rate their understanding again. 
Apparently, the effort revealed to the students their own ignorance, because their self-assessments dropped. (Toilets, it turns out, are more complicated than they appear.) 
Sloman and Fernbach (the researchers who conducted this experiment) see this effect, which they call the "illusion of explanatory depth", just about everywhere. People believe that they know way more than they actually do.
The article goes on to explore the reasoning behind this illusory depth, but I think there's also a different explanation.

People conflate information with theory. While information comes by virtue of being in a situation, theory is an explanation of the underlying mechanisms of the system.

Familiarity gives people a false sense of expertise. Consider the following two examples.

Some people think that they know more about India than a foreigner because they actually live in India. Foreign researchers are usually derided for their lack of familiarity with the Indian context. However, in some cases, foreigners actually know more about India than those who live here.

Bureaucrats are another example. Just because they work in government, they have an illusion that they "understand stuff", which in many cases isn't true.

In both these cases, people conflate information with a theoretical understanding of the cause-effect mechanisms and the big picture. 

People may be living in India, but that doesn't mean they have a thorough understanding of its mechanisms and the big picture. A foreign researcher who has spent time on understanding these knows better. But the ego rooted in the virtue of living in India doesn't let people realise that.

The same is the case with bureaucrats. By virtue of working in the system, they have experience, which is essentially "information". Possessing such information gives them a false sense of expertise. On the other hand, researchers and those who see the big picture process the information and uncover the underlying mechanisms. Only they can claim to have "understood stuff". 

In other words, bureaucrats often conflate information (about the system) with theory (of underlying mechanisms). Even if they have some understanding, it's often fragmented and half-baked, because most don't make an effort to systematically process the information. Only a few of them, along with researchers, do that job and thus have a greater understanding of the system and its mechanisms.

But like every other person residing in India who claims expertise on India by virtue of living there, bureaucrats claim expertise by virtue of being in the system. At the risk of repetition: both conflate information with theory.

If one observes closely, this is also reflected in bureaucrats' discourse on policy. Their analysis runs along the lines of "the xyz scheme has come, abc were the problems (mostly administrative)", and so on. This is not true analysis and understanding of the system. True understanding requires systematic thinking to differentiate proximate causes from root causes. Often, this isn't the case with bureaucrats. It's probably also the reason for their approach of coming up with another scheme in response to the failure of an earlier one, without realising that the problem may lie not in the particular schemes but somewhere else.

The same can be extended to teachers, doctors and others as well. A typical teacher in a government system has information gained through experience, but may not have a theoretical understanding of root causes. One can extend this in several other dimensions, but I hope these examples illustrate the point.

Most people deride theory, but the above discussion illustrates its importance. Theory essentially imposes order on information. It thus helps us join the dots and understand the system better.

PS: Strictly speaking, the term "information" has a different connotation: insights obtained by processing data are called information. Without going into those nitty-gritties, I suppose one appreciates the context of its usage here. The term is used here more in the sense of familiarity.


Follow on Facebook: @iterativeadapt
Follow on Twitter: @iterate_adapt
Email subscription or RSS Feed: Enter id in the "Subscribe" text box, on the top right of the blog.

On codifying grade-specific learning outcomes in RTE

The Union Government has notified a new RTE rule, mandating state governments to codify grade-specific learning levels. This is a sensitive issue and needs to be handled with nuance.

One of the primary critiques of RTE is that it focuses on infrastructure and NOT on learning outcomes. The logical deduction from this line of argument is that RTE should (also) use learning outcomes as a metric. While it's desirable to shape the act in terms of learning outcomes, its effects depend on the way these outcomes are detailed. 

There are genuine philosophical and practical concerns with this approach that have to be addressed.

One, the approach of codifying grade-specific learning outcomes runs against the whole emerging theme of 'learning at one's own pace' and 'teaching at the right level'. If what a student is supposed to learn within a specific time (one year) is fixed, it reduces the flexibility to learn at one's own pace. 

A large number of pedagogy specialists therefore vehemently oppose any such strict mandates. The US followed a similar approach with the No Child Left Behind Act, with not-so-great success; some in fact call it a failure. I have detailed the debate around assessments in general in the Appendix of my book; one may refer to that for wider context. For the purpose of this post, let's stick to the particular issue at hand.

Two, measuring learning outcomes can lead to future policies that make teachers accountable for the learning outcomes of their class. This, many argue, has the potential for disastrous effects: teachers teach to the test, encourage cheating, teach only to the top of the class, and so on. For those interested, my book has a separate chapter summarising the insights from the literature on teacher incentives.

Three, there are implementation issues and concerns regarding copying, etc.

Leaving aside the implementation concerns, which are characteristic of any large-scale effort, the other concerns are genuine and hence have to be dealt with carefully. Let us first explore the need for such a decision and then come to the concerns regarding it.

The need for grade-specific benchmarks

1. To change the "incentive architecture" of teachers: While opponents of codifying outcomes do have a point that grade-specific learning outcomes run contrary to the philosophy of learning at one's own pace, they are over-emphasising it.

If we look at the existing scenario, yearly performance metrics for teachers already exist, and teachers are held accountable to them; it's just that the current metric is completion of the syllabus. 

The syllabus-completion psychology has corrosive effects. With teachers in a syllabus-completion mindset, even the best efforts to improve governance systems won't be fruitful, because teachers will always follow the mandate of completing the syllabus. 

Esther Duflo, in her Ely Lecture, points out that efforts to implement Pratham's pedagogically sound Teaching at the Right Level in government schools didn't yield much success initially, due to the "incentive architecture" of teachers. In this particular case, a special session each day within school hours, dedicated to the remedial needs of students, was neglected by teachers because they were working with the syllabus-completion mindset during school time. When the same dedicated session was implemented after school hours, it proved successful, because teachers were then working outside the 'syllabus completion' framework.

Even the CCE (Continuous and Comprehensive Evaluation) has been reduced to monitoring teachers' compliance with 'correcting assignments' and 'updating marks', with no focus on learning.

The education minister of a state recently remarked that "we have become slaves to syllabus completion".

I had hence argued earlier that the traditional curriculum of the first three years should be set aside for some time, and schools should focus only on ensuring reading, writing and numeracy. I noted that this should be the first and foremost step taken by any government interested in improving education quality.

The new rule codifying minimum outcomes does something to similar effect. It highlights the need to ensure these minimum outcomes alongside completion of the syllabus.

2. To prevent an alibi system: While we should let students learn at their own pace, one needs to identify threshold levels. It can't be the case that a student is in school for five years and still can't read sentences, in the name of learning at one's own pace.

Outcomes are a function of both the teacher's efforts and the child's background, among other things. Over time, children's backgrounds have become an alibi for teachers' non-performance.

Codifying "minimum expected outcomes" strikes a balance between both factors: the child's background and the teacher's efforts. It's a way of saying that, irrespective of the child's background, anything below this is unacceptable. In other words, if outcomes are below this level, it's clearly an issue with the teacher's efforts and not with the child.

3. To engage teachers in a conversation on learning: The lack of any outcome metric results in a situation where even teachers making sincere efforts have nothing to measure against or anchor on. Outcome metrics are useful to engage teachers in a conversation on learning, moving away from the system of completing the syllabus.

4. To understand problems better and make better decisions: There is a serious dearth of education data in India. This hinders our capacity to understand the problems and pinpoint root causes. The huge amount of data generated through this exercise would help us understand the context better and make informed decisions.

Addressing the concerns

The government needs to ensure the following to address concerns about the negative effects of codifying grade-specific learning outcomes.

1. Mandate only minimum standards: Note the emphasis on the word minimum. As discussed earlier, one should balance teachers' efforts and the child's background.

Such a balance can be achieved if we mandate only minimum standards and not absolute standards. Minimum standards have to be achieved irrespective of the child's background. The same can't be mandated for absolute standards, because the effect of the child's background comes into play.

2. Minimum standards are to be based on the capacity of the system: For the sake of simplicity, think of the capacity of the system as a person's capacity to digest food. A good metabolic system can digest mutton biryani; these are the systems that can achieve "absolute standards". A poor metabolic system survives on saline because it can't digest complex food.

We noted earlier that the child's background comes into play in achieving absolute standards. The capacity of the system decides the extent of that gap. A high-capacity system can help even a child with a poor background reach absolute standards: it can digest (teach) even complex food (a poor background).

It also follows from this that absolute standards overburden a low-capacity system, while low standards under-utilise the potential of high-capacity systems.

For instance, consider two extreme cases. First, a low-capacity system: a state where students learn nothing in five years of primary school. This system is like a sick person, whose food intake has to start with saline and not mutton biryani. Similarly, in such low-capacity systems, one has to start low. Basic reading and numeracy can be the minimum standards to start with.

On the other hand, consider a high-capacity system, say Finland. These are the kinds of systems that can digest mutton biryani. Just as serving only saline to a healthy person leads to under-utilisation of their capacity, applying standards meant for low-capacity systems to such high-capacity systems leads to under-utilisation of their full potential.

In short, the minimum requirements also have to be linked to the capacity of the system, to avoid overburdening or under-utilising it. Note that one should aim to increase capacity over time and raise the standards slowly.

This also illustrates the need to mandate only minimum outcomes. Our systems are low-capacity. If absolute outcomes are mandated, teachers might resort to other means to reach those levels: copying, teaching to the test, etc. It also isn't fair if one considers the backgrounds of children. Restricting ourselves to minimum outcomes is a way around all these problems: it sets a minimum bar, irrespective of the child's background.

3. Don't conflate grade-end requirements with end-of-school requirements: People often conflate the two, but they have different purposes.

Grade-end requirements are only meant to be diagnostic, indicating the status of progression. There isn't any harm if a student can't achieve them within one year; there is flexibility to take more time.

End-of-school requirements are different. They signify the expected outcomes at the end of schooling and serve a different purpose: signalling ability, among other things. I had earlier argued that end-of-school exams also have to be bifurcated into exams that test basic proficiency and exams that can be used to signal ability.

Treating grade-end requirements like end-of-school requirements reduces the flexibility that grade-end requirements are supposed to offer. Conflating the two also tilts us towards setting high standards for grade-end requirements, with end-of-school requirements in mind. As discussed earlier, such absolute standards for grade-end requirements have potential negative effects.

4. Don't use learning outcomes to take harsh steps against teachers or to provide monetary incentives, until a congenial situation is created: Part of teachers' non-performance is also a result of the rules that shape them and the system's apathy towards their problems. The socio-economic characteristics of children also come into play.

In such a scenario, if outcome data is used to take harsh actions or provide monetary incentives before addressing teachers' pressing issues, there will be resistance to the move.

It is thus unwise to spend political capital on it. Addressing teachers' pressing issues should be taken seriously, to build trust and earn the moral authority to demand outcomes. Until then, it's wise to pursue a non-confrontational approach.


5. Focus on building academic support structures: One of the great fallacies in education is the belief that teachers know what to do, and that the problem is only that they are not doing it. This line of reasoning implies that if teachers are forced to work, one will see results. In reality, teachers needn't necessarily know what to do.

Along with codifying minimum levels, teachers should also be given academic support, introducing them to techniques like Pratham's Teaching at the Right Level, so that they can achieve the desired results. Needless to say, it's a great challenge to change teachers' mindset from 'teaching at the board' to facilitating groups, but it's a challenge worth addressing.

If such support structures aren't in place, we will end up in a situation where people resort to undesirable methods like copying and cheating as a defensive measure.

6. Communicate the essence of minimum outcomes to states clearly: Codifying grade-end minimum outcome requirements can only be successful if we take note of the five points discussed above. 

The news article says that NCERT has defined the minimum levels and gives states scope to make them tougher. This discretion, in my opinion, is a double-edged sword. While the need to calibrate outcomes to capacity means that the decision has to be decentralised, the essence of the decision (the five points discussed above) has to be communicated to states clearly.


Overall, the shift towards learning outcomes is a welcome step because it changes the incentive architecture of the system, but it should be handled with caution. Codified learning levels should reflect only minimum thresholds, not high standards. They shouldn't be used to take harsh actions against teachers, or even to provide monetary incentives, until an environment of trust is built. Teachers should be given the necessary academic support to achieve the desired results. Finally, the essence has to be clearly communicated to the states.



On the utility of RCTs, academic research, and development

[Gulzar Natarajan has two posts on RCTs, academic research, and development. I commented on the post and Gulzar replied. I am posting my response to Gulzar's reply here because the comment box wasn't accepting my comment due to its length.]

1. There are different types of RCTs and we need to segregate them for analysis. Briefly, they can be categorised as: a) RCTs that test hypotheses about binding constraints (is the problem with teachers or with the pedagogy?); b) RCTs that test programmes or interventions; c) RCTs that test the cost-effectiveness of programmes.

Each of them has different applications. 
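To make concrete what the second type of RCT actually estimates, here is a minimal, purely illustrative sketch in Python. Everything in it (the sample size, the effect size, the variable names) is invented for illustration; the point is only the mechanics: because assignment is random, the simple difference in mean outcomes between treatment and control groups is an unbiased estimate of the intervention's average effect.

```python
import random
import statistics

random.seed(0)

# Hypothetical example: 2,000 students, half randomly assigned to an
# intervention that raises test scores by 5 points on average.
TRUE_EFFECT = 5.0
students = list(range(2000))
treated = set(random.sample(students, 1000))

def outcome(student):
    # Baseline score plus noise; treated students get the true effect on top.
    base = random.gauss(50, 10)
    return base + (TRUE_EFFECT if student in treated else 0.0)

scores = {s: outcome(s) for s in students}
treat_scores = [scores[s] for s in students if s in treated]
control_scores = [scores[s] for s in students if s not in treated]

# Randomisation makes this simple difference in means an unbiased
# estimate of the average treatment effect.
estimate = statistics.mean(treat_scores) - statistics.mean(control_scores)
print(f"Estimated effect: {estimate:.2f} (true effect: {TRUE_EFFECT})")
```

The estimate won't equal the true effect exactly (sampling noise remains), which is why individual RCTs come with confidence intervals and why collections of RCTs, rather than any single one, are what inform the first-principles analysis discussed below.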

2. For policy making, the first type of RCT (those which test hypotheses about binding constraints) is extremely useful, because they help us do a systematic 'first-principles' analysis to weed out competing hypotheses regarding binding constraints. An individual RCT may not seem useful here, but a collection of them can be.

A first-principles analysis of an education system can look as follows: 

Let's consider a classroom. What's the issue here? Maybe the teacher doesn't have information on the learning levels of students. You then refer to an RCT (the one done in Andhra Pradesh) that intervenes on exactly this, and it finds that giving diagnostic information to teachers doesn't lead to outcomes. So you revise your prior: OK, if this is not the reason, what else is?

Someone might say: maybe the technical know-how (to use Deaton's term) of the teacher's pedagogy is the reason. Now you take a pedagogy that is strong on technical know-how (proven through an RCT: Pratham's TaRL) and implement it in a classroom. You find that there are no outcomes even with a pedagogy that's good at technical know-how.

You then say: maybe the binding constraint is NOT the availability of technical know-how but the teachers themselves (human agency). You then refer to an RCT (Pratham's Bihar RCTs) where the same government teachers teach using the same pedagogy (good at technical know-how) in different settings: one within the traditional classroom during the academic year, and one during summer (outside the usual constraints). You find that the same government teachers are effective outside the academic year but not during usual school time.

From all this, you infer that the binding constraint is neither the availability of technical know-how nor human agency; it has something to do with the structure within which the teachers work.

You thus zero in on the structures within which teachers work; therein comes the argument of state capacity as the binding constraint. (This is the line of argument in my book.)

This kind of systematic first-principles analysis is useful because it helps us be clear in our thinking and understand the context better. It also helps avoid what I have called experts' parochial world views. It often happens that one cites as binding constraints only those things in which one is an expert: a pedagogy expert argues that pedagogy is the binding constraint, and so on, and often refuses to look beyond it. In the process, competing hypotheses for binding constraints emerge. (By the way, I used the para-teachers example because I recently saw two TV debates on Lok Sabha TV where this was repeatedly pointed out by the panelists. I only intended to use it as an example of the phenomenon of false traps regarding binding constraints.)

A systematic first-principles analysis as above, facilitated by knowledge of RCT evidence, helps us peel off these competing hypotheses and get to the core of the problem. RCTs have made such first-principles analysis possible, if not for policy makers then at least for others.

In the absence of such systematic first-principles analysis, policy makers end up as victims of pedagogy experts (who are traditionally considered the educationists) and parachute complex pedagogies into classrooms, which only backfires. Unfortunately, this is a recurring phenomenon.

3. Examples of governments imbibing lessons from RCTs: At the outset, I would point to two examples, Pratham's TaRL scale-up and deworming programmes, but the issue is deeper.

The second type (RCTs on interventions) and third type (RCTs on cost-effectiveness) mentioned above have structural limitations. Only those RCT papers that have shown results "across contexts" are publicised and taken up with governments.

The messy nature of development by definition means that there will be very few examples of interventions that have worked across contexts.

The USP of Pratham's TaRL is that even with the given constraints and the given level of state capacity, gains are still possible if you tweak the style of teaching by grouping kids. Hence it shows impacts across contexts, even within low-capacity contexts.

The other advantage of the RCTs on Pratham's TaRL is that, for the first time, they questioned the arguments rooted in the philosophy of education that were vehemently against separating children by ability even at the initial levels. Even after this evidence, some are still against it, but the RCTs have certainly weakened their position.

The deworming example is an instance of the third type of RCT, which pushes governments to take up an action by showing value for money. If you think about it, one could say giving pills to kids with worms is a no-brainer: if kids have worms, why don't you just give pills? Why do you need an RCT? It's not so simple, because despite such clear logic, governments hadn't taken up such programmes. The cost-effectiveness RCTs encouraged governments to take them up.

4. RCTs vs. other studies: We have to be careful here too. We need to separate the number of studies from their influence. If we consider just the number of studies, RCTs vs. other types is not a zero-sum game. Growth in RCTs needn't stop people from doing ethnographic studies. RCTs add to an existing variety of papers; they don't necessarily displace others.

Duflo also points this out in one of her lectures, citing numbers on trends in economics papers which suggest that RCTs did not displace other papers but just "added on" to the existing variety of research.

Coming to the influence of studies, rigour is definitely one aspect that makes RCTs seem popular. But, more importantly, a whole institution has been built around RCTs whose job is to publicise this evidence. Hence they seem more popular.

5. Though a minor point, RCTs are not a lazy route to publication. There is huge risk and effort involved in carrying them out, and a good probability of failure due to execution issues. In fact, Chris Blattman advises people not to do RCTs for a Ph.D. thesis because of the risks involved and the good probability of failure.

Though many seemingly spurious RCTs are coming up these days, the ones addressing fundamental questions involve huge effort.

Summing up, policy making requires a wide variety of evidence over the life cycle of a policy. RCTs just filled a gap in this process. RCTs help in doing systematic first-principles analysis while designing an MVP. Other types of evidence, like dipstick surveys and ethnographies, help while iterating a project or coming up with ideas for interventions. 

Needless to say, it's unrealistic to expect RCTs for everything. Some things have to be done even if there's no RCT, or even if RCTs say otherwise.