- The National ICT Policy was retrieved from the Ethiopian Free and Open Source Software Network (EFOSSNET). It was published in English and references the “ICT for Development 2010 Plan.”
- The Ministry of Communication and Information Technology directs Ethiopia’s ICT policy, and its website can be found here. This resource was of limited value in writing the short papers because many links are inaccessible or in Amharic.
Author Archives: dlach
Over the course of this semester, several concepts have emerged consistently through each module. When considering the biggest failures in ICT4D (specifically OLPC), a lack of attention to context seemed to be the most consistent factor contributing to failure. In the interest of monetary gain, aid projects are sometimes deployed in a one-size-fits-all manner that simply cannot accommodate the most marginalized groups and their unique interests across completely different countries. While this notion has been stressed in most IDEV classes I’ve taken at Tulane, the idea that existing technologies are often the best way to overcome barriers to access was particularly emphasized in ICT4D. It seems intuitive that individuals would be most comfortable embracing development efforts when familiar with the technology involved, but the appeal of shiny new technologies tends to cloud the application of this approach.
From my personal experience in the class, I was most interested to see the ways that development organizations persuade end users to adopt new ICTs. It’s easy to overlook the fact that recipients need substantial incentives, another seemingly intuitive notion that stuck with me and would likely have value in a development profession. Most of the time, ICTs are not valuable until many users have adopted them, so providing ample incentives to get the technology into the hands of the earliest users is especially challenging, yet essential. One theoretical concept, ICT4D 2.0, nicely integrates many of these considerations by advocating community input (which helps determine users’ interests and values) and improving existing technologies instead of simply introducing novel ones. From my time in this class, I feel that an approach that uses ICT4D 2.0 as a starting point would be more likely to succeed.
Having heard Adam Papendieck discuss the value of big data, I looked into an article about how it’s being utilized in the Caribbean. Given the abstract nature of these huge sets of data, Michele Marius characterizes big data by the three V’s (and three corresponding challenges in big data processing): volume, velocity, and variety. She explains that the volume of data is growing exceedingly large, making conventional processing expensive and tedious. With velocity, she explains that the rate at which data is generated is vital, as many firms must be able to process and evaluate the information in real time. Lastly, as we heard from Adam, big data can come from a variety of sources, and analyzing data from these sources together is becoming increasingly relevant.
Just as Adam described trends in ICTs around the world, trends are developing in how big data is utilized as well:
- With analytics becoming more effective and faster, the value of big data may be realized by groups other than just big corporations
- Data may become even more commoditized, but mainly of value to organizations as opposed to individuals
- There is a shift toward using big data to provide more personalized user experiences
Despite these exciting developments, significant issues must be addressed. Although big data allows for better end-user services, private information is often compromised, and Marius warns that consumers may have to concede even more privacy in the future. In addition, she explains that the U.S. alone may face a shortage of up to 190,000 trained analysts by 2018, limiting firms’ ability to make effective decisions with big data.
In developing countries, the prevalence of data mining is presumably limited in volume, but not necessarily utility. As seen with the use of Twitter in crises, even relatively scarce big data can offer impressive insight. With big data processing still emerging in these areas, firms that emphasize it can gain a competitive advantage over similar firms.
Do you think a solution exists in the balance of privacy vs. firms’ interests? Can you think of any additional challenges associated with analyzing big data?
In an article on ICT Pulse, Michele Marius describes a seminar in Jamaica from earlier this year in which she discussed the cyber security of individuals as well as government organizations in the Caribbean. As she explains, the Tax Administration of Jamaica (TAJ) was hacked for valuable information, which was subsequently shared on Twitter (not the most productive use of ICTs). The TAJ never confirmed this cyberattack; there was very little information about it in the local media, and some reports suggest the TAJ was not even aware of the attack.
Marius prescribes the implementation of Computer Emergency Response Teams (CERTs), which help to prevent cyber attacks, or at least limit their damage. She suggests that CERTs’ expensive nature should not deter Caribbean nations from paying for their services, as cyber attacks are becoming increasingly prevalent and costly. She also raises the troubling point that cyber attacks in developing countries have the capacity to go unreported to citizens. As a result, she argues that there must be established trust between the state and its citizens, so that when cyber security is breached, individuals can take appropriate precautions, particularly in developing countries where alternative news sources like Twitter are less accessible.
In our group’s examination of the potential for ICTs in government, we evaluated the array of challenges associated with instituting e-government in regions devoid of widespread internet access or smartphones. In an article by Norris and Moon, published in the Public Administration Review, the authors consider the utility of e-government in the US, and why it still has a long way to go.
Norris and Moon explain that all federal agencies, all state governments, and 80% of local governments currently have websites, although the sites are largely very basic, with only simple downloadable forms and static information pages. They argue that the establishment of two-way transactional e-government (making payments, recording complaints, etc.) at the grassroots level (city or county) is vital because these websites offer the most services directly to the people, and therefore have the greatest potential impact. Ultimately, the authors propose that ICTs can improve efficiency, accuracy, timeliness, and effectiveness, and extend workers’ capacity.
In our presentation on ICT4D in government, we determined the main challenges to be inadequate infrastructure and low technology literacy. In contrast, the authors’ interviews with government employees show that the two biggest challenges facing e-government in the US are a lack of staff devoted to the website and a lack of financial resources. These barriers are substantially less daunting than those facing developing countries and could be alleviated in the foreseeable future. With increasing resources devoted to researching these potentially valuable technologies, it seems likely that additional government funding could be directed toward e-government. In the US, where this funding is much more accessible, both of these challenges could be effectively mitigated. Ultimately, in the context of a developed country, ICTs appear more immediately useful, and will offer citizens and government workers alike greater convenience as they are slowly adopted and deployed.
Following our class period with Robert Munro, I found myself browsing through his Twitter and found an article describing his PhD topic in an August 9th Tweet. Within the article, he elaborates on some of the concepts discussed in class. As he explains, many of the 5,000+ languages of the world are being written for the first time ever with the proliferation of mobile telephony, but the technology to process these languages cannot keep up. Compounding the problem, these phone users have varied literacy levels, which makes for spelling inconsistencies among users. However, he concludes that automated information systems can pull out words that are least likely to vary in spelling (e.g., people, places, organizations) and examine subword variation by identifying affixes within words as well as accounting for phonological or orthographic variation (e.g., recognize vs. recognise). The article goes on to provide more technical prescriptions for automated text response services, and he even links, in a separate Tweet, to another article describing Powerset, a natural language search system that ultimately failed but utilized a few valuable processes.
Ultimately, Dr. Munro implies that the capacity for automated text services in “low-resource languages” is well within reach, particularly because the messages are generally just one to two sentences. Because spelling variations are predictable, they can be modeled, and hopefully reliably answered by automated systems. However, the use of these systems will not be realized until they become more reliable and efficient than human responders, who, as he explained in class, can be extremely effective.
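To make the idea of modeling subword variation more concrete, here is a minimal sketch of how spelling variants might be collapsed before matching message words. The orthographic rules and affix list are hypothetical illustrations of the general technique, not Dr. Munro’s actual system:

```python
import re

# Hypothetical orthographic rules (e.g., British vs. American spellings).
ORTHO_RULES = [
    (re.compile(r"ise\b"), "ize"),  # recognise -> recognize
    (re.compile(r"our\b"), "or"),   # colour -> color
]

# Hypothetical suffixes to strip so that stems can be compared.
SUFFIXES = ["ing", "ed", "s"]

def normalize(word: str) -> str:
    """Lowercase a word, apply orthographic rules, then strip one suffix."""
    w = word.lower()
    for pattern, repl in ORTHO_RULES:
        w = pattern.sub(repl, w)
    for suf in SUFFIXES:
        # Only strip if a reasonable stem (3+ characters) remains.
        if w.endswith(suf) and len(w) - len(suf) >= 3:
            w = w[: -len(suf)]
            break
    return w

def same_stem(a: str, b: str) -> bool:
    """Treat two spellings as variants if they normalize identically."""
    return normalize(a) == normalize(b)
```

A real system would learn such rules from data rather than hard-code them, but the sketch shows why predictable variation is tractable: once variants map to a shared form, an automated responder can match incoming words against known vocabulary.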