E-Readiness 2013 Staging Framework
In general, e-readiness assessment tools can be classified into two broad categories as follows:
- E-economy readiness tools that focus on a nation's or community's readiness to exploit ICT for economic development (i.e., to take part in the digital economy).
- E-society readiness tools that measure the ability of the overall society to benefit from ICTs (Bridges, 2002).
E-society readiness assessment tools can also gauge the readiness of a nation or community to participate in the digital economy. The CID e-readiness tool, appropriately titled "Readiness for the Networked World: A Guide for Developing Countries," is an example of an e-society tool (CID, 2000).
Although the authors of this report modified the CID assessment tool in 2006 for use in e-readiness assessment of higher education institutions, the method of staging the indicators on a scale of 1 to 4 was not changed (Kashorda and Waema, 2009). The modified e-readiness assessment tool and staging framework used in the 2006, 2008, and 2013 surveys defined 17 indicators grouped into five categories as follows:
- Network access (4 indicators - Information infrastructure, Internet availability, Internet affordability, network speed and quality)
- Networked campus (2 indicators - Network environment, e-campus)
- Networked learning (4 indicators - Enhancing education with ICTs, developing the ICT workforce, ICT research and innovation, ICTs in libraries)
- Networked society (4 indicators - People and organizations online, locally relevant content, ICTs in everyday life, ICTs in the workplace)
- Institutional ICT strategy (3 indicators - ICT strategy, ICT financing, ICT human capacity)
The staging for each of the 17 indicators is derived as an average of the staging for the associated sub-indicators. In total, 88 sub-indicators were staged and were used to calculate the staging for the indicators.
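The averaging rule above can be sketched in a few lines of Python. The function name and the sample stage values are illustrative only, not taken from the survey data:

```python
# Hypothetical sketch of deriving an indicator's stage from its
# sub-indicator stages, as described above: the indicator stage is
# the mean of the staged sub-indicators (each on the 1-4 scale).

def indicator_stage(sub_indicator_stages):
    """Average the staged sub-indicators (each staged 1 to 4)."""
    return sum(sub_indicator_stages) / len(sub_indicator_stages)

# e.g. an indicator with two sub-indicators staged at 3 and 2
print(indicator_stage([3, 2]))  # 2.5
```

Because the indicator stage is a plain average, it need not be a whole number; a fractional stage simply places the indicator between two stage levels.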
In order to stage each sub-indicator, the researchers developed a staging framework that maps the values of the sub-indicator to a stage. For example, Internet availability was staged as shown in the following table:
| Stage level | Sub-indicator 1: PCs per 100 students | Sub-indicator 2: Internet bandwidth (Mb/s) per 1,000 students |
|---|---|---|
| 1 | < 5% | < 0.5 |
| 2 | 5 - 19% | 0.5 - 2 |
| 3 | 20 - 49.9% | 2 - 4 |
| 4 | >= 50% | > 4 |
The data for staging the sub-indicators was obtained either from the hard facts questionnaires or the perception questionnaires originally developed in the 2006 survey and modified slightly in 2008 and 2013 for clarity and ease of data collection.
The 2008 survey recommended that five critical e-readiness sub-indicators be incorporated in the corporate and ICT strategic plans of universities, so that Vice Chancellors or senior management would track them. In the 2013 survey, the sub-indicator on integration of ICT in curricula, as reported by the DVC for academic affairs, was replaced with the percentage of students who reported having taken blended or fully online courses in the past academic year. The critical sub-indicators also included the percentage of students who owned laptops, because laptop ownership affected the mode of learning adopted and reduced or increased the demand for university student labs.
The five critical sub-indicators for the 2013 survey were:
- Internet bandwidth cost per 1,000 students
- Internet bandwidth per 1,000 students
- PCs per 100 students
- Estimated percentage of students who owned laptops
- Percentage of students who took all or nearly all blended courses
This data could be collected regularly from the institutional learning management system, the institutional ERP systems that track the blended or online courses offered by the universities, or the authentication and authorization database for wireless network users. In the 2013 survey, the data on the percentage of students who owned laptops or had taken blended courses was obtained from the perception survey of students. Universities at different stages of readiness could also select an even smaller subset of the 88 sub-indicators as part of the annual monitoring and evaluation of the implementation of their institutional strategic plans.