UKOER evaluation toolkit: evaluating OERs

This page links to some resources that may be useful when planning and carrying out the evaluation of your OER project. We expect that the latest version of the Synthesis and Evaluation Framework and the advice of individual evaluators will be the main resource for most teams.

Evaluating OERs

These resources may be helpful when evaluating the processes of open release (though see our evaluation framework) and/or the quality of OERs.

Learner-centred evaluation

These resources may be helpful when evaluating OERs in use by learners

  • Phil Barker of CETIS has produced a presentation and facilitated an Elluminate session on the use of tracking data to monitor resource usage. This kind of data can be triangulated with (e.g.) surveys, interviews, focus groups, guided elicitation or observation of learners to explore their use of OERs in more detail.
  • The JISC Learners’ Experience of e-Learning wiki has a range of resources for researchers and evaluators. These include a section on methods with downloadable ‘recipe cards’ for conducting specific data collection and analysis techniques.
  • Helen produced a guide to learner-centred evaluation in 2007, which includes tables and checklists that may be relevant to projects focusing on learners as stakeholders in OER release and use: learner_centrd_evaluation_guide.doc

Note that these resources assume a fairly strong focus on learners as stakeholders, and sufficient resources to engage learners meaningfully in the evaluation process.

Institutional transformation

An important intended outcome of the OER pilot programme is an analysis of the features of institutions and other communities that support OER release. These features can then be used to assess, for example, progress towards more open content management, more OER-friendly legal and technical processes, or more open pedagogies. Because this analysis is still emerging, we should not expect to find off-the-peg evaluation tools already in existence. The generic framework records the issues emerging as crucial in evaluating openness at institutional level.

However, the following resources may be useful when evaluating institutional change in a general sense associated with project activities (NB: they will all need to be adapted to purpose).

  • Some of the tools and techniques described on the JISC infoNet site are relevant to baselining and assessing institutional issues.
  • The e-learning benchmarking programme run by the HE Academy used a range of tools to benchmark institutional approaches to e-learning, some of which included indicators relating to content management (scroll through the right-hand column to find the methodologies).
  • The SHEFC Transformation Programme evaluated the impact of specific e-learning initiatives on a range of Scottish HE and FE institutions; some of the methods used are documented on the TESEP website and in the methods section of the Glenaffric evaluation report.
  • Becta has recently launched a ‘scaling framework’ to support the scaling up of innovations. It is aimed mainly at the schools sector and is a ‘how to’ rather than an evaluation framework, but its dimensions of depth, sustainability, spread, shift and evolution could all be applied to evaluating changed processes around OERs. Don’t be put off by the Flash intro.

General evaluation resources

Using video for evaluation

Tools and tips

Back to Evaluation Toolkit: Evaluation resources