Tag Archives: Oscar Night Syndrome

Needed: A Paradigm Shift in ICT4D

In the world of ICTD, failure is widespread and results are controversial. One of the reasons behind this widespread failure is the need for organizations to always look good, dubbed the "Oscar Night Syndrome." Because of this, there is no solid (arguably no) monitoring and evaluation culture in ICTD. It has been proposed that targeting the Oscar Night Syndrome requires revamping the M&E culture. While that is an important piece, I think what we really need is a complete paradigm shift in the way we approach ICTD.

One of the drivers behind this need to always look good is the pressure to impress donors in order to receive project funding. But what if, instead of expecting clean, cookie-cutter results, donors wanted the real picture? What if they were more impressed by honesty and detailed results from intensive monitoring and evaluation than by masked lies and feigned success? Furthermore, these grants are often short-term, leaving little time for a project to play out fully and for positive results to come through. If grants were extended, a project that seemed to be faltering at the beginning would have time to use effective M&E techniques to tweak and improve itself as it goes.

To come to this place of non-judgement and open-mindedness about failure, there must be an open discussion among everyone involved in these projects: donors, project coordinators, and recipients. If failures are being hidden simply out of a superficial desire not to let anyone down, we have a very easy solution at hand: stop demanding that no project ever fail. If a concerted effort were made to hold workshops and seminars at ICTD conferences worldwide, we could begin to shift the discussion and the expectations toward honesty. Once everyone is on the same page, agreeing that it is better to admit a project's flaws and learn from them than to cover them up, the widespread failure of ICT projects won't be so widespread.


Oscar Night Syndrome in ICT4D

After viewing the video posted for us by Professor Ports and reading my fellow students' commentaries on the failures of ICT4D, my question is as follows: who is holding ICT endeavors and technology programs accountable for their failures? In the Oscar Night Syndrome article by Wayan Vota, the author observes that while technology is constantly evolving and progressing, with less successful products disappearing from the market as quickly as they appear, ICT4D projects rarely document their failures in a tangible way that would allow others to learn from their mistakes. This "need to always look good," or "Oscar Night Syndrome," is one of the major elements holding ICT4D back from greater success. I agree with Vota that monitoring and evaluation is a piece of the puzzle that is, at the moment, missing from the ICT4D formula; without it, progress cannot be achieved.

[Graphic from Vota's article illustrating the lack of a monitoring and evaluation culture in ICT4D]

Vota attached this graphic to the article to show the lack of a monitoring and evaluation culture in current ICT4D practice. Although funders expecting measurable change in indicators may demand some M&E information, there is currently no standard requiring projects to present this information to the general public. M&E is generally "an afterthought" and lacks any qualitative analysis of a project's impact on the people interacting with it. Donors want to see results, and so failures and wasteful measures, which are potential realities of any ICT project, are swept under the rug and rendered useless to the community at large. The need to document and learn from mistakes is one of the major missing links in the implementation of communication technologies for development: while it is good to celebrate success, it is more important to understand and learn from failure. Regulated monitoring and evaluation practices, observed by the ICT community as a whole, would work wonders in curing the "Oscar Night Syndrome" and would help future projects succeed by improving our understanding of what separates a successful project from a failed one.

And the Oscar Goes to…

In my humble opinion, it is wonderful to have so many ICT4D projects to look out for. Most of these projects have good intentions and goals, BUT it seems there is a clear problem (as many of my classmates have identified below) with identifying and admitting that ICT4D projects have failed. Oscar Night Syndrome has taken over the ICT4D sphere, instilling a false hope that all ICT4D projects are successful and beneficial to the communities in which they are implemented. This syndrome creates pressure to always make ICT4D projects look good, even though sometimes failure is obvious. I read an interesting article on ICTworks about the Oscar Night Syndrome and how exactly to begin confronting failure. One problem is the lack of implementation of M&E (monitoring and evaluation). The article suggested 4 ways to improve upon M&E. These included:

  1. Quasi-Experiments
  2. Qualitative Analysis
  3. Common Standards
  4. Implementation Evaluations

All four suggestions could begin to change the ICT4D culture of hiding failure. Quasi-experiments (experiments lacking random assignment to a control group) would be a lower-cost way to run experiments while still collecting data about the success of the ever-changing technologies being implemented. These experiments would ensure that the outcomes of ICT4D projects are measured not only during implementation but also in the years after. Another way to ensure this data is measured and evaluated is by emphasizing the use of quantitative AND qualitative data collection. Yes, quantitative data is helpful, BUT many times qualitative data gets to the root of a project, revealing the true impact it has made on its recipients. This data can be collected in a variety of ways, including focus groups, observation, and even social networking sites.
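
To make the quasi-experiment idea concrete, below is a minimal sketch of one common quasi-experimental design, a difference-in-differences comparison. The article does not prescribe this particular method, and the group names and numbers are hypothetical; they only illustrate how a project effect can be estimated when communities are not randomly assigned to a control group.

```python
# A minimal, hypothetical difference-in-differences calculation (illustration only).
# outcome[group][period] -> average score on some indicator; all values are made up.
outcome = {
    "project_communities":    {"before": 48.0, "after": 57.0},
    "comparison_communities": {"before": 47.0, "after": 51.0},
}

def difference_in_differences(data):
    """Change in the project group minus change in the comparison group."""
    project_change = data["project_communities"]["after"] - data["project_communities"]["before"]
    comparison_change = data["comparison_communities"]["after"] - data["comparison_communities"]["before"]
    return project_change - comparison_change

estimate = difference_in_differences(outcome)
print(f"Estimated project effect: {estimate:+.1f} points")  # -> +5.0 points
```

The comparison group is there to net out changes that would have happened anyway, which is exactly the kind of evidence that is lost when outcomes are only measured during implementation.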

In order to build on those first two suggestions, the language and standards of M&E must also be clarified. Measurements of success and failure need to be quantified in some consistent way so that the results of a specific project can actually be demonstrated. The article goes further, suggesting an M&E framework that would allow different projects to be compared on their success and effectiveness. Finally, the article suggests implementation evaluations. These would let ICT4D project developers see their projects' implementations alongside their peers', creating an opportunity to learn from others' mistakes.

I believe all of these strategies for improving M&E would help ICT4D as a whole by removing the stigma from failure. If project failures continue to be swept under the rug, the same mistakes will keep being made over and over again. M&E will help ensure that project failures are pointed out and that the Oscar Night Syndrome loses its prevalence in the ICT4D sphere.


Lessons in ICT4D

Before taking this class, I didn't think much about the role of technology in development. Of course I recognized the significance of the spread of the Internet and knew how certain technologies could enhance a development project's overall goal, but I hadn't considered that information and communication technologies could be the central focus of a project. ICTs are useful tools that can bring us closer to development goals if used creatively. Learning about the uses of ICTs in development was valuable for the lessons that both the successes and failures of ICT4D projects can teach.

One of the lessons that kept recurring throughout the class was the idea that project plans should be driven by the people they aim to help. In many projects, donors take control and manipulate the goals to fit either their idea of what will be helpful or their idea of what will look good from the outside. We looked at case studies where organizations with good intentions failed because they did not communicate with their target population. Without understanding a community's needs, an outside organization cannot successfully provide development aid. We saw this in the case of One Laptop Per Child: recipients and teachers were not consulted to assess their needs or the constraints that could get in the way of the project's success. As a result, the project has had little effect on education indicators in its target populations.

One Laptop Per Child also teaches us about the danger of focusing on a project's image. Its video showing children in under-developed areas carrying laptops appealed to the audience's emotions and tried to portray the idealism of the project. This is an example of Oscar Night Syndrome, the tendency to choose projects or methods based on their outward appearance and "shininess." We studied many projects that failed because of a disconnect with reality, stemming from a desire to deliver immediate, impressive results rather than sustainable long-term improvements. This is even more of a concern for ICT4D projects than for development projects in general, given their tendency to rely on technology to produce results. Technological determinism is dangerous in ICT4D because it fails to take important factors into account.

I learned the most about ICT4D from real-world case studies. Many of these lessons came from their failures, showing us what not to do. But during our video conference with Wayan Vota, he compared the percentage of business failures in Silicon Valley to the percentage of failures in development projects. While it is estimated that approximately 70% of development projects fail, the resulting 30% success rate is substantially higher than the roughly 10% success rate of business start-ups in Silicon Valley. Putting things in this perspective helps affirm that all is not lost in the world of international development. While rates of failure are high, we can learn from our mistakes to improve the effectiveness and efficiency of future projects.


Oscar Night Syndrome and FAILFaire

One of the biggest problems in the international development field is the lack of data collected after development projects end. We often don't know whether existing projects truly succeeded or failed at achieving their objectives. This is partly due to the difficulty of monitoring and evaluation, but also partly due to the fact that no organization wants to publish poor results. The latter idea is referred to as "Oscar Night Syndrome": the perpetual need to "look good" in the development field.

Specifically relating to ICT4D, one article writes, "No one ever fails in ICT4D. Isn't that amazing! Technologies come and go quickly – bye, bye PDA's, Windows Vista, and soon Nokia – yet in ICT4D, each project has impact and we never fail. We just have lessons learned. In fact, can you name a single technology program that has publicly stated that it failed?" The article proposes four areas where monitoring and evaluation can be improved in the ICT4D field.

1) Quasi-Experiments

Quasi-experiments have a leg up on randomized controlled trials in that they are more realistic and often more ethical to run. Projects must also be tracked over a longer period of time in order to accurately measure whether or not they are successful.

2) Qualitative Analysis

This requires more than just numbers: in-person interviews, focus groups, observations, etc. Such qualitative results can better guide future project design.

3) Common Standards

The development field needs common language and measurements so that projects can be compared apples-to-apples and their effectiveness judged accurately; a hypothetical sketch of what a shared reporting format might look like follows this list.

4) Implementation Evaluations 

This should answer the question: “Was your implementation of that project the best it could be?”.
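
To illustrate what common standards might look like in practice, here is the hypothetical sketch promised under point 3 above. None of the field names, project names, or numbers come from the article; they simply show how agreeing on the same fields would let very different projects be compared apples-to-apples.

```python
# Hypothetical sketch of a shared M&E reporting format (not taken from the article).
# If every project reported the same fields, results could be compared directly.
from dataclasses import dataclass

@dataclass
class ProjectReport:
    project_name: str
    indicator: str          # an agreed-upon indicator, e.g. "households_with_internet_access"
    baseline_value: float   # measured before implementation
    endline_value: float    # measured after implementation
    cost_usd: float         # total project cost
    reported_failure: bool  # failure recorded explicitly, not buried in "lessons learned"

    def change_per_dollar(self) -> float:
        """A simple comparable metric: indicator change per dollar spent."""
        return (self.endline_value - self.baseline_value) / self.cost_usd

# Two made-up projects become directly comparable once they share the same fields.
reports = [
    ProjectReport("Telecenter Pilot", "households_with_internet_access", 120, 340, 50_000, False),
    ProjectReport("Mobile Info Line", "households_with_internet_access", 100, 180, 10_000, False),
]
for report in sorted(reports, key=lambda r: r.change_per_dollar(), reverse=True):
    print(f"{report.project_name}: {report.change_per_dollar():.4f} indicator change per USD")
```

Even a minimal shared format like this would also make the implementation evaluations in point 4 easier, since every project's results would sit side by side in the same terms.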

Overall, the article proposes a change in the mindset and culture of ICT4D toward greater awareness of project failures. One website that does this is FAILFaire, which reports on the failures of ICT4D projects. Its organizers try to "take a close look at what didn't work and why the projects failed amidst the ICT4D hype we all are subjected to (and sometimes contributors to). We believe that only if we understand what DOESN'T WORK in this field and stop pushing our failures under the rug, can we collectively learn and get better, more effective, and have greater impact as we go forward." The hope is that by looking at why projects fail, new data and information can be gathered to create and implement better, more successful development projects in the future.


Summary of the New York Times Review of FailFaire Party

New York Times Review of FailFaire Party

The Oscar Night Syndrome in development is the need to "always look good": donors do not want to admit to failed programs and wasted money, but rather will try to spin their failures into successes. In our last class, we learned about an initiative, FailFaire, which aims to bring these failures to light so that the community as a whole can learn from them. The New York Times recently attended a FailFaire party, an opportunity for those in the field to discuss development programs that had not worked and why they were not successful. The article highlighted two specific development programs.

  • Tim Kelly, a technology specialist for the World Bank, was highlighted at the party for his failed attempt to foster the expansion of the Internet in developing countries. The failure was attributed to too many different donors with their own priorities working on the project. The article stated that, "Next time he would advocate for an initiative that matched specific donors to specific projects and not work so hard to be all things to all people." Although it was embarrassing for Kelly to be highlighted for his failure, the lesson learned was important and could help ensure success for future projects.
  • Mahad Ibrahim was a Fulbright Scholar in Egypt who was working with the Egyptian government to create telecenters throughout the country in order to provide greater Internet access. However, due to the rise of Internet cafes across Egypt, the program ended up a failure. This was a useful lesson in remembering not just to provide technology that works in the donors' home country, but to look at what is currently being used in the country that needs the service and to expand on that.

In the end, the worst failure was awarded the O.L.P.C., a prize named after the One Laptop Per Child program that many in the development community view as a failure. The article showcases the lighthearted nature of the FailFaire initiative, but also explains how important it is: it provides a platform for those in the ICT4D field to learn from each other instead of repeating the same mistakes.