One of the biggest problems in the international development field is a lack of data after development projects end. We don’t really know whether existing projects truly succeeded or failed to achieve their objectives. This is partly due to the difficulty of monitoring and evaluation, but also partly due to the fact that no organization wants to publish poor results. The latter is referred to as “Oscar Night Syndrome”: the constant need to “look good” in the development field.
Specifically relating to ICT4D, one article writes, “No one ever fails in ICT4D. Isn’t that amazing! Technologies come and go quickly – bye, bye PDAs, Windows Vista, and soon Nokia – yet in ICT4D, each project has impact and we never fail. We just have lessons learned. In fact, can you name a single technology program that has publicly stated that it failed?” The article proposes four areas where monitoring and evaluation can be improved in the ICT4D field.
1) Quasi-Experimental Design
Quasi-experiments have an advantage over randomized controlled trials, as they are more realistic and ethical. Projects must also be tracked over a longer period of time to most accurately measure whether or not they are successful.
2) Qualitative Analysis
This requires more than just numbers: in-person interviews, focus groups, observations, and the like. These richer results can better guide future project design.
3) Common Standards
There needs to be a common language and common measurements in the development field, allowing apples-to-apples comparisons of project effectiveness.
4) Implementation Evaluations
This should answer the question: “Was your implementation of that project the best it could be?”
Overall, the article proposes a change in the mindset and culture of ICT4D toward greater awareness of project failures. One example is the website FAILFARE, which reports on the failures of ICT4D projects. Its contributors try to “take a close look at what didn’t work and why the projects failed amidst the ICT4D hype we all are subjected to (and sometimes contributors to). We believe that only if we understand what DOESN’T WORK in this field and stop pushing our failures under the rug, can we collectively learn and get better, more effective, and have greater impact as we go forward.” The hope is that by examining why projects fail, new data and information can be gathered to design and implement better, more successful development projects in the future.