This week in class, we were fortunate to have Robert Munro as a guest lecturer. Munro is a “computational linguist working in communication technologies,” which makes sense in the context of the case study he wrote about Mission 4636, a humanitarian aid effort that used crowdsourcing and text messages in Haiti after the 2010 earthquake.
Munro gave us several examples of and lessons learned from humanitarian aid efforts utilizing crowdsourcing to process information: Pakreport (after the 2010 floods in Pakistan), the Libya Crisis Map (to follow events occurring in the country as part of the Arab Spring), and most recently, the Sandy Mapmill (to assess damages following Hurricane Sandy).
From the presentation, it seems that crowdsourcing for Pakreport was relatively efficient because local people were involved, while crowdsourcing for the Libya Crisis Map was a failure, largely because many locals did not want to be associated with the project out of fear for their safety. Sandy Mapmill falls somewhere in between: volunteer-based damage assessment was genuinely useful, but it was later determined that paying professionals would have been more accurate and cost-efficient than crowdsourcing. The same basic lesson applies to Mission 4636 – drawing on the knowledge of locals and members of the diaspora was vital to the project’s success because they are more familiar with both the language and the locales than outsiders are. It was clear that non-Haitians processed far fewer pieces of information than Haitians and members of the Haitian diaspora.
One point Munro hammered home was that private data practices should always be used to protect the identities of people in disaster zones, and that public mapping is generally discouraged for the same reason.
Another interesting tool mentioned was Natural Language Processing (NLP), which automates the processing of large amounts of text. As with any automated system, it has flaws: translations can lose context, and some languages are simply not yet supported. Still, NLP seems to be a valuable tool when there is too much information for humans to process on their own, and it may expedite humanitarian relief efforts.
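To make the idea concrete, here is a toy sketch of what automated triage of incoming text messages might look like. This is purely illustrative – the categories, keywords, and function names are my own hypothetical examples, not Mission 4636’s actual pipeline, and a real NLP system would be far more sophisticated than simple keyword matching.

```python
# Toy illustration of automated message triage, loosely inspired by the
# idea of NLP routing large volumes of disaster-related texts.
# Categories and keywords below are hypothetical examples.

CATEGORIES = {
    "medical": {"injured", "doctor", "medicine", "hospital"},
    "water_food": {"water", "food", "hungry", "thirsty"},
    "shelter": {"shelter", "tent", "homeless", "collapsed"},
}

def triage(message: str) -> str:
    """Return the first category whose keywords appear in the message."""
    words = set(message.lower().split())
    for category, keywords in CATEGORIES.items():
        if words & keywords:  # any keyword present in the message?
            return category
    return "unclassified"

messages = [
    "My brother is injured and needs a doctor",
    "We have no water or food in our camp",
    "Road is blocked near the market",
]
for msg in messages:
    print(triage(msg), "-", msg)
```

Even a crude filter like this hints at why automation helps when thousands of messages arrive per hour, and also at its limits: a misspelled keyword, a message in an unsupported language, or context-dependent wording would all slip through, which is exactly why local volunteers remained essential.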
The presentation was very interesting to me since I had never considered that crowdsourcing could be used in humanitarian aid efforts. I’m glad that Mission 4636 made clear that “insiders” have more helpful information and can be of more use to relief efforts than “outsiders.” This isn’t to say that the “outsiders” made no contribution; they definitely helped, but most of the information was processed rapidly, and mostly accurately, by those who knew the disaster zone.