Reflections on the #oerrub agile approach to evaluation

[Image of reports]
Dull but worthy . . .

(image: http://pixabay.com/en/notes-office-pages-papers-print-150587/)

I’ve already posted some reflections on the agile approach that the OER Research Hub has been developing. In this post I’m going to share some reflections on my role as an evaluation consultant to the project, and on the agile, flexible approach we have developed to my input and (open) outputs.

Evaluation should be a key part of any research project, built in from the start and not something that is left until the end. However, sometimes it can slip off “the list”. As well as evaluating actual research outputs, it is also important to evaluate the processes a project has used. In the case of the OER Research Hub, evaluation has been built in from the start, and their own open evaluation framework details their evaluation approach along with a pretty comprehensive overview of project evaluation.

Having this framework has made my role as an evaluator much easier. I had a very clear starting point with specific questions developed by the team which were driven by the overarching aims and objectives of the project. The framework guided me in my exploration of the project and focused my discussions with the team.

However, the framework is just that, a framework. It doesn’t “do” the evaluation. One of the things I have really enjoyed about my role with the project has been the flexibility, agility and openness of the team in terms of my input and, in turn, my outputs.

Last year I worked with the project as it approached the end of its first year of funding. At that stage the project was still in the early days of its collaboration development and data collection, so my main focus was a review of the work so far and working with the team on dissemination planning for the remainder of the project. I was also actively encouraged (in fact it was in my contract) to produce blog posts as outputs. This is, I think, still fairly unusual for evaluation activity, but it fits well both with a research project about OER and open education and with my own open practice.

Other outputs from me included what I called my “brain dump” of my initial reactions and thoughts on the project outputs so far, some SWOT analyses, and a “dull but worthy” summary report. These were shared only with the team.

Even in open research not everything can, or in many cases should, be open, particularly if, as last year, the evaluation is focusing more on the mechanics of the project than on the outputs themselves. I am a firm believer in making things open, but it matters that the “stuff” you decide to make open is useful. Some of my outputs were only of use to the team at that particular time. However, sharing the overall approach openly via this post is probably a more appropriate, open and (hopefully) useful resource for others.

This year my role has evolved again into more of what I would call a critical friend. The project funders, the Hewlett Foundation, are conducting their own evaluation of the project, so I have been working with the team to review their outputs in relation to the focus of that evaluation. As with last year there has been some flexibility in terms of my input and outputs, but again blog posts have been part of the contract. This year I have spent most of my time meeting and talking with the team. I see my role as more about encouraging reflection and talking through the team’s next steps in relation to their data, findings, dissemination and sustainability.

It’s the latter where I think the real challenges lie. I don’t want to steal the project’s thunder, but they have some pretty good evidence on the impact of OER (emerging findings are already being shared via their infographics page and blog posts). Their OER impact map is already providing an innovative and meaningful way to search and explore their data. But what next? How will the work and findings be built on, both within the OU and in the wider (open) education community? Will this project provide a secure foundation for an emerging research community?

These questions are key not only for the project, but also for their funders. The Hewlett Foundation have spent a lot (over $100 million) on OER over the past decade, so what is next for them? In terms of mainstreaming OER, has the battle really been won? Martin and I have slightly different opinions on this. The project’s research is showing some really strong evidence of winning/impact in a number of areas. But we are still only scratching the surface, and most of the research is pretty much focussed on North America. Some of the models and evidence, particularly around textbooks, don’t have as much relevance in other parts of the world. More global research is clearly needed, and it is very positive to see the collaborations the project has developed with organisations such as ROER4D.

Building a new research community and discipline takes time. However, having a research element built into projects could provide additional stimulus, security, and both short- and long-term sustainability. Is the future of the OER Research Hub a set of static tools and guidance, or something more organic that provides a focus not only for supporting the growth of a research community, but also for aggregating evidence and sharing wider trends back to that community? In parallel with the continuum of reuse of OER the project has highlighted, surely there needs to be a continuum of research.

Again I will be producing another “dull but worthy” report alongside my blog posts, but if you want to join a wider conversation about open reflection and evaluation, have a look at the current Open Researcher Course. There is a week of activities dedicated to the area, including a couple of good overview videos from Leigh Anne Perryman, who also wrote the OER Research Hub evaluation framework.

3 thoughts on “Reflections on the #oerrub agile approach to evaluation”

  1. Spotted an error: where it says “Martin and I have slightly different opinions” it should say “Martin and I have slightly different opinions (but he’s right)” 😉
    Thanks for an excellent post
