Designing and Implementing the Bharat Survey for EdTech – Notes by Development Solutions
By Harish Doraiswamy, Jagannath R. and Shreyashee Roy
Jun 20, 2023
In April 2023, Central Square Foundation (CSF) launched the Bharat Survey for EdTech (BaSE), a first-of-its-kind household survey that brings out the voices of end-users from low-income contexts, capturing their needs, challenges, and perceptions regarding the adoption of education technology. The survey was conducted in six states, covering 6,030 households across urban and rural settlements and 9,867 children enrolled in either government schools or affordable private schools.
CSF partnered with Development Solutions (DS), a research and advisory agency that has been supporting government organizations, philanthropic groups, international organizations, educational institutions, and civil society organizations since 2014, to carry out the research and analysis for the survey. Their involvement included designing the survey methodology, collecting and analyzing data, and generating meaningful insights and recommendations based on the findings.
We caught up with Jagannath R., Senior Manager of Monitoring, Evaluation, and Learning at DS, who talks about key learnings, right from the survey design phase to the process of generating insights and findings. Additionally, he provides a set of recommendations for everyone consuming and interpreting the insights derived from BaSE.
Could you elaborate on the process of designing and executing the survey methodology, ensuring the collection of dependable and accurate data?
When DS came on board, considerable discussions took place with CSF to align specific objectives of the BaSE survey with a suitable survey methodology. This involved thorough deliberation and collaboration to ensure that the chosen methodology effectively captured the desired insights and met the project’s overall objectives.
After careful consideration, we decided to include six states in the inaugural survey round, representing each region of India. Uttar Pradesh (North), Telangana (South), Odisha (East), Gujarat (West), Madhya Pradesh (Central), and Mizoram (Northeast) were chosen based on factors such as population size and internet penetration. The sample size target of 1,000 households in each state was arrived at after considering the following (a sketch of the kind of calculation involved appears after this list):
- Representative results for each state;
- Representative results within a state for urban and rural settlements.
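The interview does not spell out the underlying power calculation, but a standard proportion-based formula shows how a per-state target of roughly 1,000 households can be arrived at once clustering and non-response are accounted for. The parameter values below are illustrative assumptions, not BaSE's actual design inputs; a minimal sketch in Python:

```python
import math

def required_sample_size(p=0.5, margin=0.05, z=1.96, deff=1.8, response_rate=0.85):
    """Illustrative sample-size calculation for estimating a proportion.

    p             -- assumed prevalence (0.5 is the most conservative choice)
    margin        -- desired margin of error at the chosen confidence level
    z             -- z-score for the confidence level (1.96 for 95%)
    deff          -- design effect accounting for multi-stage clustering
    response_rate -- expected completion rate, used to inflate the target
    """
    n_srs = (z ** 2) * p * (1 - p) / margin ** 2   # simple random sample size
    n_clustered = n_srs * deff                     # inflate for clustering
    return math.ceil(n_clustered / response_rate)  # inflate for non-response

print(required_sample_size())  # ~814 under these assumptions; stratifying by
                               # urban/rural pushes the target toward 1,000
```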
To accommodate the designated sample size, we adopted a multi-stage sampling approach. In the first stage, we clustered the districts within a state into four regions using spatial machine learning and randomly selected a pre-defined number of districts within each region. We then randomly selected blocks within each district and settlements within each block. To build this intricate sampling strategy, we primarily relied on the Census 2011 data.
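As a rough illustration of the multi-stage draw described above (the frame, region assignments, and counts below are hypothetical placeholders; the actual frame was built from Census 2011 data):

```python
import random

random.seed(1)  # make the illustration reproducible

# Hypothetical frame: region -> district -> block -> list of settlements.
frame = {
    "Region 1": {"District A": {"Block A1": [f"S{i}" for i in range(10)],
                                "Block A2": [f"S{i}" for i in range(10, 20)]}},
    "Region 2": {"District B": {"Block B1": [f"S{i}" for i in range(20, 30)]}},
}

def draw(frame, n_districts=1, n_blocks=1, n_settlements=2):
    """Multi-stage draw: districts within each region, then blocks within
    each chosen district, then settlements within each chosen block."""
    picked = []
    for region, districts in frame.items():
        for d in random.sample(sorted(districts), n_districts):
            for b in random.sample(sorted(districts[d]), n_blocks):
                picked += random.sample(districts[d][b], n_settlements)
    return picked

print(draw(frame))  # four settlement IDs: two per sampled block
```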
To ensure consistency and validity of data collection across all six states, extensive preparatory work went into creating survey, sampling, and field protocols. These protocols encompassed various aspects, such as establishing a set of buffer settlements that could be used if the initially selected settlements were inaccessible for any reason. Further, detailed field movement plans with comprehensive instructions were provided to field managers and field supervisors to ensure that data collection stayed in line with the survey methodology.
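The buffer mechanism itself is simple to picture. A minimal sketch, assuming a pre-drawn replacement list (the function and data shapes here are ours, not the project's field software):

```python
def resolve(selected, buffer_pool, inaccessible):
    """Swap inaccessible settlements for pre-drawn buffer replacements,
    preserving the order of the original selection."""
    replacements = iter(buffer_pool)  # consume buffers in the drawn order
    return [s if s not in inaccessible else next(replacements) for s in selected]

# resolve(["S1", "S2", "S3"], buffer_pool=["S9", "S8"], inaccessible={"S2"})
# -> ["S1", "S9", "S3"]
```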
During data collection, the complete data pipeline was automated to run quality-control processes at a high frequency (often multiple times a day). This let us keep close track of data quality, of consistency between the sampling strategy and the data actually collected (verified using geo-coordinates), and of enumerator performance. If any survey had to be verified, the enumerator would reach out to the respondent by phone to re-check particular responses.
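The interview does not describe the pipeline's internals, but checks of this kind are straightforward to automate. A minimal sketch, assuming a flat record schema (the field names and thresholds below are illustrative):

```python
from statistics import mean

def qc_flags(surveys, settlement_bounds, min_minutes=15):
    """Flag records for review: geo-coordinates outside the sampled
    settlement, or implausibly short interviews.

    surveys           -- dicts with keys: id, enumerator, settlement,
                         lat, lon, duration_min (hypothetical schema)
    settlement_bounds -- settlement -> (lat_min, lat_max, lon_min, lon_max)
    """
    flags = []
    for s in surveys:
        lat_min, lat_max, lon_min, lon_max = settlement_bounds[s["settlement"]]
        if not (lat_min <= s["lat"] <= lat_max and lon_min <= s["lon"] <= lon_max):
            flags.append((s["id"], "outside sampled settlement"))
        if s["duration_min"] < min_minutes:
            flags.append((s["id"], "interview too short"))
    return flags

def enumerator_summary(surveys):
    """Mean interview length per enumerator -- one simple performance signal."""
    by_enum = {}
    for s in surveys:
        by_enum.setdefault(s["enumerator"], []).append(s["duration_min"])
    return {e: round(mean(d), 1) for e, d in by_enum.items()}
```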
What were some of the field observations that presented both challenges and opportunities during the survey implementation?
A key challenge that we encountered while finalizing the overall approach and methodology was defining ‘low-income households’. Due to the diverse and varied nature of households across India, there is no universally accepted definition. The most practical way to arrive at a working definition was to look at a combination of the type of school a child was enrolled in (government or private), the annual school fees paid (in the case of private schools), and the Public Distribution System (PDS) card type (with every state having its own color coding and eligibility criteria).
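To make the shape of such a composite definition concrete, here is a hypothetical rule combining the three signals. The fee ceiling, card codes, and combination logic are placeholders, since the survey's actual, state-specific criteria are not given in the interview:

```python
def is_low_income(school_type, annual_fees=None, pds_card=None,
                  fee_ceiling=25_000, eligible_cards=("BPL", "AAY")):
    """Hypothetical composite rule: school type, private-school fees,
    and PDS card type. All thresholds here are placeholders."""
    if school_type == "government":
        return True
    if school_type == "private" and annual_fees is not None:
        return annual_fees <= fee_ceiling or pds_card in eligible_cards
    return pds_card in eligible_cards

# is_low_income("private", annual_fees=18_000)                  -> True
# is_low_income("private", annual_fees=40_000, pds_card="APL")  -> False
```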
Another key learning from the preparatory phase was the value of field testing the survey tool. The tool underwent three rounds of testing, enabling us to make refinements and improvements. For instance, we paid attention to linguistic nuances and to how people commonly referred to EdTech, such as ‘online classes’, ‘classes on the phone’, or ‘Zoom classes’. Additionally, to make enumerators comfortable with technical terms, we used pictures and examples of these terms, and we incorporated contextualized words from the relevant regional languages into the enumerator manuals to ensure they were well-suited for effective communication during the project.
Are there any notable limitations or potential biases in the survey data that should be taken into consideration when interpreting the results?
One limitation is that the respondent was the primary caregiver (typically the mother or father) in each household. Although children were present in some households during the interviews, some response bias may have crept in, with respondents providing answers they believed were more socially acceptable.
Additionally, there is a section where we explore parents’ awareness of technology and its influence on children’s use of smartphones for learning. As children in higher grades are more likely to use their own devices, parents are more likely to have limited awareness of the specific nature and extent of an older child’s mobile phone activities.
How does the survey data align with current educational trends or policies? Are there any areas where the data suggests a need for adjustments or improvements in existing practices or policies?
The survey results offer many noteworthy insights; two highlights stand out:
- EdTech as a supplementary source of learning;
- The cost of devices and of internet access as a barrier to EdTech adoption.
For about 50% of the children who were continuing to use technology for learning at the time of the survey, it served as a supplementary source of learning rather than as a replacement for classroom learning. Personalized adaptive learning (PAL) and the self-paced nature of EdTech solutions were among the most frequently cited reasons parents found EdTech useful. Based on our survey results, EdTech solutions that closely align with school curricula across grades are the ones most likely to be preferred.
Secondly, among households that did not have mobile devices, or where children had only restricted or shared access to them, the cost of devices was cited as a primary barrier. Even among households that did have mobile devices, restricted access to the internet was often a result of monetary considerations. These barriers require strategic interventions.
Authored by
Harish Doraiswamy
Project Director, EdTech, Central Square Foundation
Jagannath R.
Senior Manager - Monitoring, Evaluation and Learning, Development Solutions
Shreyashee Roy
Project Manager, EdTech, Central Square Foundation