Learning Analytics for Assessment and Feedback Webinar, 15 May

**Update, 16 May**
Link to the session recording

Later this week I’ll be chairing a (free) webinar on Learning Analytics for Assessment and Feedback, featuring work from three projects in the current Jisc Assessment and Feedback Programme. I’m really looking forward to hearing first hand about the different approaches being developed across the programme.

“The concept of learning analytics is gaining traction in education as an approach to using learner data to gain insights into different trends and patterns but also to inform timely and appropriate support interventions. This webinar will explore a number of different approaches to integrating learning analytics into the context of assessment and feedback design; from overall assessment patterns and VLE usage in an institution, to creating student facing workshops, to developing principles for dashboards.”

The presentations will feature current thinking and approaches from teams from the following projects:
* TRAFFIC, Manchester Metropolitan University
* EBEAM, University of Huddersfield
* iTeam, University of Hertfordshire

The webinar takes place Wednesday 15 May at 1pm (UK time) and is free to attend. A recording will also be available after the session. You can register by following this link.

eAssessment Scotland – focus on feedback

Professor David Boud got this year’s eAssessment Scotland Conference off to a great start with his “new conceptions of feedback and how they might be put into practice” keynote presentation, asking the fundamental question “what is feedback?”

David’s talk centred on what he referred to as the “three generations of feedback”, and was a persuasive call to arms to educators to move from the “single loop” or “control system” industrial model of feedback to a more open, adaptive system where learners play a central and active role.

In this model, the role of feedback changes from being passive to one which helps students develop their own judgement, standards and criteria, capabilities which are key to success outside formal education too. The next stage is to create feedback loops which are pedagogically driven and considered from the start of any course design process. Feedback becomes part of the whole learning experience and not just something vaguely related to assessment.

In terms of technology, David gave a familiar warning: we shouldn’t let digital systems allow us to do “bad feedback more efficiently”. There is a growing body of research around developing the types of feedback loops David was referring to. Indeed the current JISC Assessment and Feedback Programme is looking at exactly the issues brought up in the keynote, and builds on the outcomes of previously funded projects such as REAP and PEER. The presentation from the interACT project, which I went to immediately after the keynote, gave an excellent overview of how JISC funding is allowing the Centre for Medical Education in Dundee to re-engineer its assessment and feedback systems to “improve self, peer and tutor dialogic feedback”.

During the presentation the team illustrated the changes to their assessment/curriculum design using an assessment timeline model developed as part of another JISC funded project, ESCAPE, by Mark Russell and colleagues at the University of Hertfordshire.

Lisa Gray, programme manager for the Assessment and Feedback programme, then gave an overview of the programme, including a summary of the baseline synthesis report, which gives a really useful summary of the issues the projects (and the rest of the sector) are facing in terms of changing attitudes, policy and practice in relation to assessment and feedback. These include:
* formal strategy/policy documents lag behind current development
* educational principles are rarely enshrined in strategy/policy
* learners are not often actively engaged in developing practice
* assessment and feedback practice doesn’t reflect the reality of working life
* admin staff are often left out of the dialogue
* traditional forms of assessment still dominate
* timeliness of feedback is still an issue.

More information on the programme and JISC’s work in the assessment domain is available here.

During the lunch break I was press-ganged/invited to take part in the live edutalk radio show being broadcast during the conference. I was fortunate to be part of a conversation with Colin Maxwell (@camaxwell), lecturer at Carnegie College, where we discussed MOOCs (see Colin’s conference presentation) and feedback. As the discussion progressed we talked about the different levels of feedback in MOOCs. Given the “massive” element of MOOCs, how and where does effective feedback and engagement take place? What are the affordances of formal and informal feedback? As I found during my recent experience with the #moocmooc course, social networks (and in particular Twitter) can be equally heartening and disheartening.

I’ve also been thinking more about the subsequent Twitter analysis Martin has done of the #moocmooc Twitter archive. On the one hand, I think these network maps of Twitter conversations are fascinating and allow the surfacing of conversations, potential feedback opportunities etc. But, on the other, they only surface the loudest participants – who are probably the most engaged, self-directed etc. What about the quiet participants, the lost souls, the ones most likely to drop out? In a massive course, does anyone really care?

Recent reports of plagiarism, and failed attempts at peer assessment in some MOOCs, have added to the debate about the effectiveness of MOOCs. But going back to David Boud’s keynote, isn’t this because some courses are taking his feedback mark 1, industrial model, and trying to pass it off as feedback mark 2, without actually explaining and engaging with students from the start of the course, and really thinking through the actual implications of thousands of globally distributed students marking each other’s work?

All in all it was a very thought-provoking day, with two other excellent keynotes: Russell Stannard sharing his experiences of using screen capture to provide feedback, and Cristina Costa on her experiences of network feedback and feeding forward. You can catch up on all the presentations and join in the online conference, which is running for the rest of this week, at the conference website.

Enhancing engagement, feedback and performance webinar

The latest webinar from the JISC Assessment and Feedback programme will take place on 23 July (1-2pm) and will feature the SGC4L (Student Generated Content for Learning) project. Showcasing the PeerWise online environment, the project team will illustrate how students can use it to generate their own original assessment content in the form of multiple choice questions. The team will discuss their recent experiences using the system to support teaching on courses at the University of Edinburgh and the findings of the project. The webinar will include an interactive session offering participants the opportunity to get first hand experience of interacting with others via a PeerWise course set up for the session.

Further details and a link to register for this free webinar are available by following this link.

Binding explained . . . in a little over 140 characters

Finding common understandings is a perennial issue for those of us working in educational technology, and the lack of understanding between techies and non-techies is something we all struggle with. Explaining the difference between formative and summative assessment to some of the developers I used to work with became something of an almost daily running joke. Of course it works the other way round too, and yesterday I was taken back to the days when I first came into contact with the standards world and its terminology, and in particular ‘bindings’.

I admit that for a while I really didn’t have a scoobie about bindings, what they were, what they did etc. Best practice documentation I could get my head around, and I would generally “get” an information model – but bindings, well, that’s serious techie stuff, and I will admit to nodding a lot whilst conversations took place around me about these mysterious “bindings”. However I did eventually get my head around them and what their purpose is.

Yesterday I took part in a catch up call with the Traffic project at MMU (part of the current JISC Assessment and Feedback programme). Part of the call involved the team giving an update on the system integrations they are developing, particularly around passing marks between their student record system and their VLE, and the development of bindings between systems came up. After the call I noticed this exchange on twitter between team members Rachel Forsyth and Mark Stubbs.

I just felt this was worth sharing as it might help others get a better understanding of another piece of technical jargon in context.
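For anyone still nodding along, here is a minimal sketch of the idea in Python. The data, identifiers and field names are all invented for illustration (they don’t come from the TRAFFIC project or any real specification): the “information model” says *what* data exists – here, a student’s mark for an assessment – while each binding says *how* that model is serialised for exchange between systems.

```python
import json
import xml.etree.ElementTree as ET

# The abstract information model: just named fields and values.
# (Hypothetical example data, not from any real system.)
mark = {"student_id": "s1234567", "assessment": "ESSAY-01", "score": 67}

# Binding 1: the same model serialised as JSON.
json_binding = json.dumps(mark)

# Binding 2: the same model serialised as XML.
root = ET.Element("result")
for field, value in mark.items():
    ET.SubElement(root, field).text = str(value)
xml_binding = ET.tostring(root, encoding="unicode")

print(json_binding)
print(xml_binding)
```

Two different bindings, one information model: a student record system and a VLE can agree on the model while each speaking the binding it prefers, provided something translates between them.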

Couple of updates from the JISC Assessment and Feedback Programme

As conference season is upon us projects from the current JISC Assessment and Feedback programme are busy presenting their work up and down the country. Ros Smith has written an excellent summary post “Assessment and Feedback: Where are we now?” from the recent International Blended Learning Conference where five projects presented.

Next week sees the CCA Conference and there is a pre-conference workshop where some of the assessment related standards work being funded by JISC will be shared. The workshop will include introductions to:

• A user-friendly editor called Uniqurate, which produces questions conforming to the Question and Test Interoperability specification, QTIv2.1,

• A way of connecting popular VLEs to assessment delivery applications which display QTIv2.1 questions and tests – this connector itself conforms to the Learning Tools Interoperability specification, LTI,

• A simple renderer, which can deliver basic QTIv2.1 questions and tests,

• An updated version of the comprehensive renderer, which can deliver QTIv2.1 questions and tests and also has the capability to handle mathematical expressions.

There will also be demonstrations of the features of the QTI Support site, to help users to get started with QTI.
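For readers who haven’t encountered QTI before, a bare-bones QTI v2.1 multiple-choice item looks roughly like the following. The question text and identifiers are invented for illustration; this is the kind of XML the editors and renderers above produce and consume.

```xml
<assessmentItem xmlns="http://www.imsglobal.org/xsd/imsqti_v2p1"
    identifier="example-choice" title="Example question"
    adaptive="false" timeDependent="false">
  <responseDeclaration identifier="RESPONSE" cardinality="single" baseType="identifier">
    <correctResponse><value>B</value></correctResponse>
  </responseDeclaration>
  <outcomeDeclaration identifier="SCORE" cardinality="single" baseType="float"/>
  <itemBody>
    <choiceInteraction responseIdentifier="RESPONSE" shuffle="false" maxChoices="1">
      <prompt>Which of these is a feedback model?</prompt>
      <simpleChoice identifier="A">Single loop</simpleChoice>
      <simpleChoice identifier="B">Dialogic</simpleChoice>
    </choiceInteraction>
  </itemBody>
  <responseProcessing
      template="http://www.imsglobal.org/question/qti_v2p1/rptemplates/match_correct"/>
</assessmentItem>
```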

The workshop will also provide an opportunity to discuss participants’ assessment needs and to look at the ways these might be addressed using the applications we have available and potential developments which could be part of future projects.

If you are interested in attending the conference, email Sue Milne sue.milne@e-learning-services.org.uk with your details as soon as possible.

Design Studio update: Transforming Assessment and Feedback

Those of you who regularly read this blog will (hopefully) have noticed lots of mentions and links to the Design Studio. Originally built as a place to share outputs from the JISC Curriculum Design and Delivery Programmes, it is now being extended to include outputs from a number of other JISC funded programmes.

The Transforming Assessment and Feedback area of the Design Studio now has a series of pages which form a hub for existing and emergent work of significant interest on assessment and feedback. Under a series of themes, you can explore what this community currently knows about enhancing assessment and feedback practice with technology, find links to resources, and keep up to date with outputs from the Assessment and Feedback and other current JISC programmes.

Assessment and Feedback themes/issues wordle

This is a dynamic set of resources that will be updated as the programme progresses. Follow this link to explore more.
