Reflections on the #oerrhub agile approach to evaluation

Image: "Dull but worthy . . ."

(image: http://pixabay.com/en/notes-office-pages-papers-print-150587/)

I’ve already posted some reflections on the agile approach that the OER Research Hub has been developing. In this post I’m going to try and share some of my reflections on my role as an evaluation consultant to the project, and the agile, or flexible, approach we have developed to my input and (open) outputs.

Evaluation should be a key part of any research project, built in from the start and not something that is just left until the end. However, sometimes it can slip off “the list”. As well as evaluating actual research outputs, it is also important to evaluate the processes a project has used. In the case of the OER Research Hub, evaluation has been built in from the start, and their own open evaluation framework details their evaluation approach along with a pretty comprehensive overview of project evaluation.

Having this framework has made my role as an evaluator much easier. I had a very clear starting point with specific questions developed by the team which were driven by the overarching aims and objectives of the project. The framework guided me in my exploration of the project and focused my discussions with the team.

However, the framework is just that, a framework. It doesn’t “do” the evaluation. One of the things I have really enjoyed about my role with the project has been the flexibility, agility and openness of the team in terms of my input and, in turn, my outputs.

Last year I worked with the project as they approached the end of their first year of funding. At that stage the project was still in the early days of its collaboration developments and data collection, so my main focus was a review of the work so far, and working with the team on dissemination planning for the remainder of the project. I was also actively encouraged (in fact it was in my contract) to produce blog posts as outputs. This is, I think, still fairly unusual for evaluation activity, but it fits well both for a research project about OER and open education, and for my own open practice.

Other outputs from me included what I called my “brain dump” of my initial reactions and thoughts on the project outputs so far, some SWOT analyses, and a “dull but worthy” summary report. These were shared only with the team.

Even in open research not everything can, or in many cases should, be open, particularly if, as last year, the evaluation is focusing more on the mechanics of the project than on the outputs themselves. I am a firm believer in making things open, but the “stuff” you decide to make open should be useful. Some of my outputs were only of use to the team at that particular time. However, sharing the overall approach in an open way via this post is probably a more appropriate, open and (hopefully) useful resource for others.

This year my role has evolved again into more of what I would call a critical friend. The project’s funders, the Hewlett Foundation, are conducting their own evaluation of the project, so I have been working with the team in reviewing their outputs in relation to the focus of that evaluation. As with last year there has been some flexibility in terms of my input and outputs, but again blog posts have been part of the contract. This year I have spent most of my time meeting and talking with the team. I have seen my role as being more about encouraging reflection and talking through the team’s next steps in relation to their data, findings, dissemination and sustainability.

It’s the latter where I think the real challenges lie. I don’t want to steal the thunder from the project, but they have got some pretty good evidence on the impact of OER (emerging findings are already being shared via their infographics page and blog posts). Their OER impact map is already providing an innovative and meaningful way to search and explore their data. But what next? How will the work and findings be built on both in the OU and the wider (open) education community? Will this project provide a secure foundation for an emerging research community?

These questions are key not only for the project, but also for their funders. The Hewlett Foundation have spent a lot (over $100 million) on OER over the past decade, so what is next for them? In terms of mainstreaming OER, has the battle really been won? Martin and I have slightly different opinions on this. The project research is showing some really strong evidence of impact in a number of areas. But we are still only scratching the surface, and most of the research is pretty much North America focused. Some of the models and evidence, particularly around textbooks, don’t have as much relevance in other parts of the world. More global research is clearly needed, and it is very positive to see the collaborations the project has developed with organisations such as ROER4D.

Building a new research community and discipline takes time. However, having a research element built into projects could provide additional stimulus, security, and both short- and long-term sustainability. Is the future of the OER Research Hub as a set of static tools and guidance, or something more organic that provides a focus not only for supporting a growing research community, but also for aggregating evidence and sharing wider trends back to the community? In parallel with the continuum of reuse of OER highlighted, surely there needs to be a continuum of research.

Again I will be producing another “dull but worthy” report alongside my blog posts, but if you want to join a wider conversation about open reflection and evaluation, have a look at the current Open Researcher Course. There is a week of activities dedicated to the area, including a couple of good overview videos from Leigh Anne Perryman, who also wrote the OER Research Hub evaluation framework.

Where Sheila's been this week – revisiting #OERRHub and the researcher as an API analogy

I spent the early part of this week in Milton Keynes with the OER Research Hub team as part of the second phase of the project evaluation.

When I worked with the team last year, one of the things that intrigued me about the project was the fact that they were planning to apply and adapt an agile programming approach to the project.

As I pointed out then, I felt there could be challenges with this as typically the outputs from research projects aren’t as concrete as most software development products, but I could see the attraction of this approach.

Bringing researchers who form part of a globally distributed team together for set periods to focus on certain aspects of a research project does make sense, as does having some kind of structure, particularly for focusing “group minds” on potential outputs (products); an adaptation of pair programming could be useful for peer review, and so on. However, applying a “proper” agile programming methodology to research is problematic.

But if we stick with the programming analogy and stop thinking in terms of products, and start thinking of research as a service (akin to software as a service), then maybe there is more mileage. A key part of SaaS approaches is the API, allowing hooks into all sorts of sites/services so that they can in effect talk to each other.

The key thing, therefore, is for the researcher to think of themselves more as the interface between their work, the data, the findings, the “what actually happened in the classroom” bits, and to focus on ways to allow as wide a range of stakeholders as possible to easily “hook” into them, so they can use the outputs meaningfully in their own context.
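To make the analogy a little more concrete, here is a minimal, entirely hypothetical sketch (the “endpoint”, field names and records are all invented for illustration, not actual OER Research Hub code or data) of what research-as-a-service could look like: findings exposed as structured JSON that different stakeholders can hook into and reuse in their own context.

```python
import json

# Hypothetical store of research findings; the records below are
# invented for illustration only, not actual project data.
FINDINGS = [
    {"id": 1, "sector": "higher-education",
     "summary": "Educators report changed practice after using OER."},
    {"id": 2, "sector": "community-college",
     "summary": "OER textbooks reduce costs for students."},
]

def get_findings(sector=None):
    """A tiny 'API endpoint': return findings as JSON, optionally
    filtered by sector, so any stakeholder can hook into the data."""
    results = [f for f in FINDINGS
               if sector is None or f["sector"] == sector]
    return json.dumps(results)

# A community college, say, pulls only the evidence relevant to it:
print(get_findings("community-college"))
```

The point is not the code itself but the stance: the researcher designs their outputs so that others can query them on their own terms, rather than pushing one monolithic report at everyone.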

In many ways this is actually the basis of effective digital scholarship in any discipline and of course what many researchers already do.

A year on, and after experiencing one of the early project sprints, how has it worked out?

Well, everyone knew that the project wouldn’t be following a strict agile methodology; however, key aspects, such as the research sprints, have proved to be very effective, particularly in focusing the team on outputs.

The sprints have allowed the overall project management to be more agile and flexible. They have brought focus and helped the team as a whole stay on track but also refocus activity in light of the challenges (staff changes, delays to getting surveys started etc) that any research project has to deal with. As this is very much a global research project, the team have spent large chunks of time on research visits, going to conferences etc so when they are “back at the ranch” it has been crucial that they have a mechanism not only to report back and update their own activities but also to ensure that everyone is on track in terms of the project as a whole.

The sprints themselves haven’t been easy, and have required a lot of planning and management. The researchers themselves admit to often feeling resentment at having to take a week out of “doing work” to participate in sprints. However, there is now an acknowledgement that they have been central to ensuring that the project as a whole stays on track and that deliverables are delivered.

I was struck this week by how naturally the team talked about the focus of their next sprint and how comfortable and perhaps more importantly confident they were about what was achievable. It’s not been easy but I think the development, and the sustaining of the research sprint approach over the project lifespan has paid dividends.

Returning to the wider API issue, last year I wrote

I wonder if the research as API analogy could help focus development of sharing research outputs and developing really effective interactions with research data and findings?

Again, one year on, can I answer my own question? Well, I think I can. From discussions with the team it is clear that human relationships have been key in developing both the planned and unexpected collaborations that the project has been undertaking. At the outset of the project a number of key communities/agencies were identified as potential collaborators. Some of these collaborators had a clear idea of the research they needed, others not so much. In every case the research team have indeed been acting as “hooks” into the project and the overall data collection strategy.

These human relationships have been crucial in focusing data collection and forging very positive and trusted relationships between the Hub and its collaborators. Having these strong relationships is vital for any future research and indeed, a number of the collaborations have extended their own research focus and are looking to work with the individual team members on new projects. As findings are coming through, the Hub are helping to stimulate more research into the impact of OER and support an emerging research community.

One of the initial premises for the project was the lack of high-quality research into the impact of OER; the team are not only filling that gap, but now also working with the community to extend the research. Their current Open Research course is another example of the project providing more hooks into their research, tools and data for the wider community.

The project is now entering a new phase, where it is in many ways transitioning from a focus on collecting the data, to now sharing the data and their findings. They are now actually becoming a research hub, as opposed to being a project talking about how they are going to be a hub. In this phase the open API analogy (imho) can only get stronger. If it doesn’t then everyone loses, not just the project, but the wider open education community.

The project does have some compelling evidence of the impact of using OER on both educators and learners (data spoiler alert: some of the differences between these groups may surprise you), potential viable business models for OER, and some of the challenges, particularly around encouraging people to create and share back their own OERs. For me this is particularly exciting, as the project has some “proper” evidence, as opposed to anecdotes, showing the cultural impact OER is having on educational practice.

In terms of data, the OER Impact Map is a key hook into visualizing and exploring the data the project has been collecting and curating. Another phase of development is about to get under way to provide even more ways to explore the data. The team are also now planning the how/where/when of releasing their data set.

The team are the human face of the data, and their explanations of the data will be key to the overall success of the project over the coming months.

More thoughts to come from me on the project as a whole, my role and agile evaluation in my next post.

Where Sheila's been this week – research sprinting with #oerrhub

 A couple of weeks ago I wrote about the agile approach the OER Research Hub team at the Open University are adopting, and in particular the use of sprints.  This week I’ve been in Milton Keynes with the team for their latest sprint week and have had the opportunity to experience first hand a research sprint. 

As I said in my previous post, the notion of a research sprint has intrigued me. Could this software development technique really be adapted and, more importantly, be effective in a research context? Well, it seems it can. As part of my Evaluation Fellow role I was interviewing all of the team during the week, and there was unanimous agreement on the value of the sprint, particularly in (re)focusing attention on key project-level deliverables and sharing findings within the team.

The project is taking an active collaboration approach, and this week the core team were joined by three of the project fellows (Kari Arfstrom from the Flipped Learning Network, Thanh Le from Vital Signs/Gulf of Maine and Daniel Williamson from Connexions/OpenStax). It’s probably too early to say just exactly what the fellows think of the sprint approach – they are probably just over jet-lag today! But it certainly seemed like a perfect way to quickly focus attention on activity, give a wider perspective of what is going on within this quite complex project, and establish strong connections with the core team at the OU.

I have to confess I don’t have a lot of personal experience of being in a sprint, but I did participate in a couple of scrums for a software project a couple of years ago, and the experience this week was very similar. Profs and project leads Martin Weller and Patrick McAndrew take the role of product owners and, with the project co-ordinator Claire Walker, had devised a list of tasks/products. These were prioritized and allocated by the whole team on Monday afternoon using a combination of voting and post-it notes.
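For what it’s worth, the mechanics of that vote-then-prioritize planning step boil down to a simple sort. A throwaway sketch (the task names and vote counts below are invented, not the team’s actual backlog):

```python
# Hypothetical sprint backlog: each task with the number of votes
# it received from the team (names and counts are invented).
votes = {
    "draft survey findings report": 5,
    "plan policy webinar": 4,
    "update impact map entries": 3,
}

def prioritize(task_votes):
    """Order tasks by votes, highest first; ties broken alphabetically."""
    return sorted(task_votes, key=lambda task: (-task_votes[task], task))

backlog = prioritize(votes)
print(backlog[0])  # the sprint starts with the most-voted task
```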

Each morning this week a short scrum meeting is taking place where everyone shares what they have done and what they are going to do that day, directly related to the agreed tasks. Standing, and the catching/throwing of a small Smarties box, plays a vital role in keeping the meetings running smoothly and to time. Shared Google Docs with task lists are also keeping track of progress. The team are also keeping a shared reflective diary of the week. It’s not appropriate for me to share any information about this, but I do think that shared reflection is vital when participating in a relatively new way of working – not least to ensure that lessons learned are shared and (hopefully) incorporated into future projects. As the project has four distinct areas of research, I found the sprint reminiscent of a programme meeting in bringing a set of smaller projects together and focusing activity on key areas.

Although I had to leave halfway through the week, there was a palpable sense of things getting done, and by the end of the week the project will have a number of deliverables ready to go and a clear focus for others over the coming months. This includes a series of webinars starting next month, where Rob Farrow will take the lead around OER and policy changes at institutional level.

I’m not sure if it actually makes a difference, but I did particularly like spending most of my working week in a “superpod”. As you can see from the picture below, you too could quite easily convert an office into a superpod 🙂

Image

The OER Research Hub: Revving up OER research

Open and education go hand in hand, a bit like bread and butter or fish and chips. For over a decade, the open education movement has been steadily making inroads into the collective consciousness. Through various global initiatives there is increasing evidence to illustrate that there is more than “just a feeling” that OER and open educational practice can have an impact on teaching and learning.

Building in particular on the work of the OpenLearn, Bridge to Success and OLnet projects, and other developments in the wider open education movement, the OER Research Hub is focused on gathering evidence around the positive impact of OER and open practice in teaching and learning.

Funded by the William and Flora Hewlett Foundation, the project provides:

a focus for research, designed to give answers to the overall question ‘What is the impact of OER on learning and teaching practices?’ and identify the particular influence of openness. (http://oerresearchhub.org/about/)

As the project moves towards the end of its first year of funding, I’m working with the team to evaluate their overall approaches, methodologies, findings, outputs and dissemination. So, I have spent some time over the last couple of weeks immersing myself in the world of the OER Research Hub and familiarising myself with the complexities of fully understanding an evolving project with a number of different research activities and contributors. 

The overarching research question frames two key hypotheses as the central tenet of the project’s research activities:

  • Use of OER leads to improvement in student performance and satisfaction.
  • The open aspect of OER creates different usage and adoption patterns than other online resources.

These “big” hypotheses have been further broken down into a subset of testable hypotheses:

  • Open education models lead to more equitable access to education, serving a broader base of learners than traditional education.
  • Use of OER is an effective method for improving retention for at-risk students.
  • Use of OER leads to critical reflection by educators, with evidence of improvement in their practice.
  • OER adoption at an institutional level leads to financial benefits for students and/or institutions.
  • Informal learners use a variety of indicators when selecting OER.
  • Informal learners adopt a variety of techniques to compensate for the lack of formal support, which can be supported in open courses.
  • Open education acts as a bridge to formal education, and is complementary, not competitive, with it.
  • Participation in OER pilots and programs leads to policy change at institutional level.
  • Informal means of assessment are motivators to learning with OER.

Using a collaborative research approach, the core research team is working with a number of established projects and is further complemented by a number of open research fellowships. Each project/fellow is investigating a combination of the hypotheses. In this way the project covers four major educational sectors (higher education, schools, informal learning and community colleges), as the diagram below illustrates.

Image
(image from What makes openness work presentation, http://oerresearchhub.org/2013/07/17/what-makes-openness-work/)

Last month the team gave an overview presentation of the project to colleagues at the Open University. The recording and slides provide an excellent overview of the project’s activities to date. Some more detailed reflections on the initial findings are included in this post by Leigh Anne Perryman.

The team have also begun to identify some of the key challenges they need to address next year:

*Educators are more positive about the impact of OER on performance & satisfaction than students (across OpenLearn & Flipped Learning).
*Open education models don’t necessarily improve access to education.
*Students using OER textbooks may save up to 80% of costs.
*The Informal Learner Experience survey suggests that CC licensing is less important than previously thought.
*There is survey evidence that OER (esp. OpenLearn) are being used to prepare for and support formal study.
*Examples of OER policies emerging for practice are becoming more common (UMUC, Utah Textbooks, Foothill-De Anza CC).

As well as these headline challenges, there is also the underlying challenge of ensuring that the research and various outputs from the first phase of the project are disseminated effectively. How can the team ensure that their growing evidence, reflections and outputs are reaching not just the OER/open education community but the wider teaching and learning community? What other methodologies can be incorporated into their data collection and sharing? What are the key lessons from the “agile research” approach the project is taking? How are they refining/adapting/reacting to this approach? What lessons can they share from it? And most importantly, how can the hypotheses and their findings be made immediate and valuable to all of the project’s stakeholders? Which is where I come in 🙂

Over the coming weeks I’ll be working with the team as they prepare for their next phase, and I’ll be sharing some of the approaches to answering the questions above both here and via the OER Research Hub.
