My short and long “must reads” around ChatGPT and LLMs

Undoubtedly LLMs (large language models), and in particular ChatGPT, are the hot topic in education right now. David Hopkins has helpfully started a flipgrid where he is sharing articles around generative AI, and I know many others are doing the same. Amongst the hype there is thankfully a growing body of people writing informed critiques. In this post I just want to quickly highlight a couple of publications that I think are must reads.

Firstly, the UNESCO Quick Start Guide to ChatGPT and Artificial Intelligence. This provides a really good overview, including a useful flow chart to support decisions around using ChatGPT, applications for education, and some of the current issues. I suspect this will become a “go to” resource. It’s something that all educators should read.

And once they’ve done that, I have to recommend two longer pieces by Helen Beetham. Firstly, “on language, language models and writing“. In this essay, Helen really gets to grips with a key issue that is missing from many of the articles about LLMs and ChatGPT: what is the purpose of writing? Why do we do it? It’s not just about the structuring of text or personal reading. I think most people (well, at least you, dear reader) do now understand that these language models work on prediction, and have no sense of context. So although the text may read well, it will often lack purpose and understanding. As Helen points out, “Writing by human writers is not only about the world, it is of the world and accountable in it.”
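For anyone who does want “working on prediction” spelled out, here is a toy sketch of my own – a deliberately tiny, made up word-level model, nothing like the scale or sophistication of a real LLM – but the underlying move is the same: choose the next word based on what tended to follow before, with no model of purpose, audience or accountability.

```python
# A toy illustration (nothing like a real LLM's scale) of "working on
# prediction": the model only ever asks "which word tends to follow the
# words so far?" - it has no notion of why anything is being written.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the dog sat on the rug".split()

# Count which word follows each word in the corpus (a bigram model).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def generate(start: str, length: int = 8) -> str:
    words = [start]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break
        # Pick the next word in proportion to how often it followed before.
        choices, counts = zip(*candidates.items())
        words.append(random.choices(choices, weights=counts)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the rug"
```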

She goes on to explore some of the potential benefits of using systems such as ChatGPT. Can they be seen as writing partners? We supply the prompts, they supply the text . . ? I was struck by this:

The illusion that these are more than tools or interfaces – that they are our partners in language, our interlocutors. We already spend large parts of our lives engaged in vivid graphical and sensory illusions. We should count the costs and benefits before rushing into a life of dialogue with illusory others.

And this:

Students see writing as a diverse, messy, inexact, variously motivated practice they are developing for themselves. Then perhaps they can aspire to be a writer among writers, and not a human version of ChatGPT.

I thank Helen for being the writer she is and coming up with that last turn of phrase. She then goes on to point out:

But tools are not neutral. Just as language is not ‘simply’ the words we use to express our meanings to other people, tools are not ‘simply’ the means we use for exercising our personal intentions in the world. Tools carry the history of how they were designed and made. They shape practices and contexts and possible futures . . . With so many other tools we can use creatively, we must surely weigh the risks against the creative possibilities.

In terms of education, Helen also raises some really valid points for strategic leadership in universities. It does seem an awful lot of responsibility is being heaped on students; maybe we need to be asking these questions:

While students are held stringently to account for their use of LLMs, how will universities account to students for their own use of these systems? Can they hold out against black-box capabilities being embedded into the platforms they have come to depend on? Who is assessing the risks, and how are those risk assessments and mitigations being shared with the people most affected? These are questions that universities should be attending to with at least as much energy as they are policing students’ use of apps.

There is also an accompanying piece, “student assignments in a time of language modelling”. Again, this is a really thoughtful (and pragmatic) piece about why, how and when to use writing tasks in assessments.

I would thoroughly recommend reading both essays, and engaging with Helen’s writing over on Substack.


Pedagogy, place and pragmatics

Following on from the report that has just been published on Approaches to Curriculum and Learning Design in the UK HE sector, Helen Beetham and I are exploring some of the key issues that were highlighted through the survey and the interviews we conducted. Central to this are issues around time, space and place. Earlier this week we were able to start to share some of our initial thinking during a workshop at the Jisc Student Experience Experts Meeting.

In the interviews I conducted as part of the project, there was a general consensus that after the first lockdown most organisations were quite keen, even quite ambitious, about their future plans for new approaches to learning and teaching. There was a sense of an appetite to embrace some of the changes to practice that being forced off campus had brought about. Assessment was a huge part of that.

Rapid changes to assessments had to be introduced, along with rapid changes to assessment regulations. Student care was high on the agenda – a visible sign of that was the no detriment practices that many adopted. Again, in the interviews it was clear that lots of the changes from f2f exams to online submissions of various types, including open book and authentic assessments, have now been adopted.

In terms of wider curriculum change, it was also clear from the survey responses and interviews that the appetite for changes to other aspects of curriculum design and delivery had been divisively impacted by the UK Government’s insistence that everyone needed to be back on campus, at lectures and doing “proper” in person exams. Never mind the lessons that had been learnt from students about the benefits of more flexible, accessible and inclusive approaches. Strategic statements were subtly altered as a pragmatic response to that political driver.

However, back in the real world, we can’t ignore that our understandings and use of the spaces, places (both physical and digital) and times for learning and teaching have been altered by the pandemic experience. Students have been off campus, on campus, off campus, on and off campus for a bit . . . and now on campus. Typical 1st and 2nd year students have had their final years of school turned upside down in the same way.

I think how to “be” a student has changed, and that might be one of the reasons there have been so many issues around engagement. Where (and when) you actually need to be isn’t as clear cut as it was in the “before times”.

Going back to assessment, some of the comments student interns on the Irish EDTL project made during one of their webinars really struck me, including the student who very eloquently shared how being able to take assessments off campus, in a space that was comfortable for them, massively reduced their stress levels; and another who felt that the design of some of the online MCQ exams they had taken was “mean”, as they didn’t allow you to go back to a question to answer it. That experience made them almost long for pen and paper exams. In the panel discussion at the experts meeting, Deborah Longworth from the University of Birmingham shared how some changes to assessment are now having an impact on the mental health of students. She described how some students can think that a 72 hour open book exam means that they need to be working on it for 72 hours. Does this mean taking time to develop more scaffolding around time expectations, or is it an “in” to go back to fixed, in person exams whose conventions everyone understands?

Whilst terms such as hybrid and hyflex are commonly used, are they really fully understood by both students and staff? Do we really have effective examples of how these approaches work in practice? This is one area Helen and I want to explore through a pedagogical lens.

We are starting with time, thinking in terms of synchronous and asynchronous; then considering what types of activities/interactions work best in these contexts; and then starting to map the spaces and places that students and staff need to be in as these activities are instantiated. In terms of broadening our approaches to learning design, do we need to be more explicit about time, space and place expectations?

As the cost of living crisis starts to really kick in, what additional changes do we need to make to our physical estate to support our students (and staff)? Warm areas, areas with kettles? What choices might commuting students have to make about how many times a week they can be on campus?

As we discussed these issues in the meeting, a dose of pragmatism was injected into the conversation. Whilst it is often said that pedagogy should always come before technology, in reality it’s pragmatism, and the contextual constraints that everyone has to work with, that really have “the power”. Pragmatics always win over everything else.

I know I have run many learning design workshops where some really innovative approaches have been planned, only to find out that two weeks before the start of term the plans have been changed because of timetabling issues or, more commonly, not enough staff resource or time.

As the sector moves forward, is it just easier, to cope with increases in student numbers and the staff/student ratio, to just timetable in lectures? Is it just pragmatically more effective not to change workload models and notions of contact time to reflect the shifts in preparation/contact time and presence needed, and to stick with the conventions we are all familiar and comfortable with?

Hopefully not, and that’s what we are working on now: developing resources that can help provide guidance and exemplars of how the sector can, and is, evolving to allow us to think about pedagogy and place, and hopefully start to change some of the pragmatics and constraints that approaches to learning design, and in turn the student experience, exist within. I know Peter Bryant’s recent post on the “snapback” discusses many of these issues in more depth, so it is worth a look if you haven’t seen it yet.

So if you have any thoughts on this, or would like to share any examples, please do get in touch or leave a comment. We want to provide spaces to have these conversations and hopefully provide some resources to help others have them in their own contexts.

Teaching in Higher Ed podcast: Time, space and place

A couple of weeks ago I was delighted to spend a really lovely hour or so chatting with Bonni Stachowiak as part of her amazing Teaching in Higher Ed podcast.

We covered a myriad of “stuff” around some of the big questions around time and space and how we are all “being” at university just now. I really enjoyed the conversation – I hope you do too.

Time, space, models, AI and exams – a few (not so original) thoughts.

Oh my goodness, what a right old mess the UK has gotten into over this year’s school exams. Cancelled exams, statistical models, and algorithms to ensure that the dreaded “grade inflation” didn’t happen have all conspired to make what can only be described as an omnishambles.

Last week, the Scottish government did a swift U-turn on their results, which has put pressure on the rest of the UK to do the same. As I write this, a news alert has just popped up on my phone saying the PM has confidence in Education Secretary Gavin Williamson and Ofqual. Back in “normal” times that language was a signifier of a resignation or a sacking; however, these days it may well mean that the PM does have confidence in his minister and the agency, despite the mixed messaging from them both over the weekend.

Perhaps one positive thing to come out of this mess is the start of a public debate about statistical modelling, the development and use of algorithms and the implicit and explicit bias that they almost always promote.

However, this is a very messy business and there has been a huge amount of human complicity and error here too. It was pretty obvious in March that these exams would not go ahead.

Students themselves have (quite rightly) been very vocal, and visible in their anger, dismay and outrage at the overriding “logic” of the bigger pattern and the curve taking precedence over them as individuals.

The blame games have already started, with opposition parties seeing huge political capital to be made. Calls for public inquiries and discussions about what to do next year are all, I fear, detracting from the fundamental issue – our over reliance on exams.

If we had more continuous assessment and less reliance on final exams, then if/when another pandemic strikes or covid-19 has another spike, we wouldn’t have to worry about exam results or models to moderate grade inflation. Students’ work could be judged on its merits, and there would be confidence in the marking through shared learning outcomes (which, if I am not mistaken, do already exist). A more holistic view of students as people, with ideas, with agency, with the ability to express, share and reflect on their views would emerge.

We could allow students to exploit digital technologies to develop their portfolios, to share their work more openly, to develop more cross curricular activity, and to develop agency and critical thinking skills. Much of this does happen in schools, but still the only thing that really counts is those final exams – that incredibly stressful, unfair and, to be honest, quite archaic way of testing memory, not knowledge and understanding.

It’s said by many commentators that our current PM is a “crammer”. He had jolly japes at Eton, crammed for exams and, through his loquacious use of slightly arcane language (see what I did there!), got the grades and the interview patter to get into Oxford and sustain his career in politics and journalism. The final result is what matters – Brexit, the last UK election, the ‘war’ on covid . . . unfortunately we all have to suffer the chaos of this period of uncertainty as we rumble from disaster to disaster.

We could change the way we assess our children as they leave school. Teachers already have the skills, knowledge, understanding and technology to do it; we just need to rethink time, space and place for ongoing assessment. It would be cheaper and more effective, imho, to spend money on that than on a public inquiry into what has happened, and is still happening, with this year’s results.

I have a quote above my desk from a post I saw on social media early on in lockdown. It says “in the rush to return to normal, use this time to consider which parts of normal are worth rushing back to” (attributed to Dave Hollis). I find it so sad that we seem to be rushing headlong back into exams instead of seriously contemplating the alternatives. Is this not the perfect time to change that old “normal” to a far more equitable “new normal” for assessment?

I saw this on Twitter today – I think we should all have this pinned up somewhere.

Learning or attainment – What would you choose?

Bob Harris wrote an excellent article last week about the new DfE strategy for educational technology. This post is not about that per se – I can’t add any more to Bob’s excellent critique – rather, it’s about what has been stuck in my head since reading the post: the focus on attainment, not learning.

Learning is not included in the report, much to the surprise of many. Bob reports this explanation from Deborah McCann, head of ed-tech at the DfE, who
“… astonished many attendees when she admitted that the term “learning” was deliberately excluded from the strategy. She said: “We have focused on attainment. There’s a view that ‘learning’ is a bit of a weak term really and there is a lot more that we are talking about – attainment and outcomes. That’s why you don’t see it in the strategy. … learning is the process, obviously, but what we want to see is attainment.””

Increasingly I am seeing attainment as a key strategic goal. I’ve seen a number of “how to develop an effective digital strategy” type papers from ed tech companies/publishers, and attainment, retention and outcomes are prominent, but there’s little about learning – apart from maybe a bit about personalised learning being enhanced through data and AI. You too can have a totally unique, homogenised personalised learning experience . . .

We can see this focus on attainment amplified outside education, particularly in the run up to the European Parliament elections and the focus on the attainment of Brexit.

When Theresa May won the last Tory party leadership contest she famously said “Brexit means Brexit”. Over the last 3 years it has become increasingly clear that no-one has any idea what Brexit actually means (and everyone has forgotten it’s a made up tabloid word). However, the attainment of it has become all consuming.

As far as I can see, there has been no attempt by the hard line right Brexiteers to learn about the process of leaving the European Union, or to engage the electorate in a meaningful discussion about just what that would entail. The promises of saving money that would go back into the NHS were backtracked on as soon as the referendum was over.

Earlier this week I heard a Brexit party candidate being interviewed on the radio. He was claiming that Brexit was the only way to improve the NHS, education and all the things people really care about. When challenged by the interviewer on what couldn’t be done through existing parliamentary and government processes on these issues, he paused for a bit then said something along the lines of “well, I haven’t researched any of that but I know it will all be easier once we are out of the EU”. At that point, dear reader, I didn’t crash the car, but I may have shouted a few expletives at the radio.

The attainment of Brexit was his overriding focus; the details of how that could be done, and what would happen next, were not really that important. The lack of learning and process around the understanding of what Brexit is, and this all consuming focus on the attainment of Brexit, has created serious consequences.

We now have a zombie government, Nigel Farage back on the campaign trail, and Boris Johnson setting himself up to lead another Tory Brexit charge. In the meantime our current national problems such as housing, education and the NHS – never mind the global environmental crisis – are, to my mind, being ignored as the attainment of Brexit overrules them all.

Perhaps if our current government, and all leave political parties, had taken a bit more time to really learn about the process of exiting the second largest trading bloc in the world – 40 years of trade and related treaties, human rights legislation, etc. – and then shared that in a meaningful way with the electorate, we would actually know what Brexit means. Then we could go through a meaningful learning process to decide if that really is what we need. In the meantime I’ll take learning over attainment any day.

Here is an advert you might remember that kind of sums it up for me.

If the product works, but what about the people?

This is probably going to be an even more incoherent ramble than normal, but I have been trying to write posts around a number of things for the last couple of weeks, so I’m going to try and merge them.

A couple of weeks ago, I read this post by David Wiley. At the time I tweeted:

I confess to more than a bit of this sentiment, and not just in relation to OER: “Much of the OER movement has a bad attitude about platforms.” I am always wary when the focus is on developing platforms and not developing the people who will use those platforms.

I was once in a meeting where I put forward the “people and process, not platforms and products” case. I was told that what was being discussed was platform “in the Californian sense of platform” . . . I’m sure a classic WTF look must have passed over my face, but it was explained that this meant people as well as technology. Geography aside, three years later this sense of platform doesn’t seem to be that widespread or acknowledged. Maybe I need to go to California. But I digress.

Not long before the Wiley post, I was reading the Pearson White Paper on learning design. It caused me a bit of unease too. Part of me was delighted to see learning design being recognised by, whatever might happen to them, a significant player in the education technology provider field. Using learning design to help product design is a bit of a no brainer. Technology should be driven by educational need, or as Pearson put it:

“Products and systems that effectively leverage learning design can deliver superior learning outcomes.”

One example in the paper referred to work they had done in social science classes:

“we quickly recognized that students were easily distracted by conventional textbooks. This told us we needed to eliminate distractions: any extraneous cognitive load that doesn’t promote learning. Fortunately, our learning design work reveals many proven techniques for accomplishing this. REVEL segments all content into manageable pieces and presents it via a consistent structure. It provides strong signaling cues to highlight key material and places all relevant content on screen simultaneously to offer a continuous, uninterrupted experience”

Which kind of relates to this point from the Wiley post:

“Our fixation on discovery and assembly also distracts us from other serious platform needs – like platforms for the collaborative development of OER and open assessments (assessments are the lifeblood of this new generation of platforms), where faculty and students can work together to create and update the core materials that support learning in our institutions. Our work in OER will never be truly sustainable until faculty and students jointly own this process, and that can’t happen until a new category of tools emerges that enables and supports this critical work. (Grant money for OER creation won’t last forever.)

And don’t even start trying to explain how the LMS is the answer. Just don’t. “

Well of course Pearson do try to explain that:

“As testing progresses, we can overcome problems that compromise outcomes and build a strong case that our design will support learning. The very same work also helps us tightly define assessments to find out if the product works in real classrooms”

Of course they don’t really touch on the OER aspect (all their learning design stuff has been made available with CC goodness) but I’ll come back to that.

That phrase “if the product works” – I keep coming back to that. So on the one hand I have to be pleased that Pearson are recognising learning design. I have no argument with their core principles; I agree with them all. But I am still left with the niggle around the assumption that the platform will “do” all the learning design for both staff and students – that underlying assumption that if only we had the right platform all would be well, everything could be personalised through data and analytics, and we’d have no retention issues. That niggles me.

I was part of a plenary panel at the HESPA conference last week called “the future of learner analytics”, where a number of these issues came up again. The questions asked by this group of educational planners really stimulated a lot of debate. On reflection I was maybe a bit of a broken record: I kept coming back not to platforms but to people, and more importantly time. We really need to give our staff and students (but particularly our staff) time to engage with learning analytics. Alongside the technical infrastructure for learning analytics we need to be asking: where’s the CPD planning for analytics? They need to go hand in hand. Cathy Gunn, Jenny McDonald and John Milne’s excellent paper “the missing link for learning from analytics” sums this up perfectly:

“there is a pressing need to add professional development and strategies to engage teachers to the growing range of learning analytics initiatives. If these areas are not addressed, adoption of the quality systems and tools that are currently available or under development may remain in the domain of the researchers and data analysis experts”

There seems to be an assumption that personalisation of learning is a “good thing”, but is it? Going back to learning design, designing engaging learning activities is probably more worthwhile, and ultimately more useful to students and society, than trying to create homogenised, personalised, chunked up content and assessments. Designing to create more effective engagement with assessment and feedback is, imho, always going to be more effective than trying to design the perfect assessment platform.

In terms of assessment, early last week I was also at a Scotbug (our regional Blackboard user group) meeting, where my group was tasked with designing an assessment system. This is what we came up with – the flipped assessment, aka student generated assessments.

[Image: sketch of our flipped assessment design]

Not new, but based on pedagogy and technology that is already in use (NB there’s been a really great discussion around some of this on the ALT list this weekend). I don’t think we need any new platforms for this type of approach to assessment and feedback – but we do need to think about learning design (which encapsulates assessment design) more, and give staff more time for CPD to engage with the design process and the technologies they either have to use or want to use. This of course all relates to digital capability and capacity building.

So whilst we’re thinking about next gen platforms and learning environments, please let’s not forget people. Let’s keep pressing for time for staff CPD to allow the culture shifts to happen around understanding the value of OER, of sharing, and of taking time to engage with learning design – not just having to tweak modules when there’s a bit of down time.

People are the most important part of any learning environment – next gen, this gen, past gen. But people need time to evolve too; we can’t forget them or try to design out the need for them for successful learning and teaching to take place. Ultimately it’s people that will make the product work.

Badges? Certificates? What counts as succeeding in MOOCs?

Oops, I did it again. I’ve now managed to complete another MOOC, bringing my completion count to a grand total of 3 (the non completion number is quite a bit higher, but more on that later). And I now have 6 badges from #oldsmooc and a certificate (or “statement of accomplishment”) from Coursera.

My #oldsmooc badges

Screenshot of Coursera record of achievement

But what do they actually mean? How, if ever, will/can I use these newly gained “achievements”?

Success, and how it is measured, continues to be one of the “known unknowns” for MOOCs. Debate (hype) on success is heightened by the now recognised and recorded high drop out rates. If “only” 3,000 registered users complete a MOOC then it must be failing, mustn’t it? If you don’t get the certificate/badge/whatever then you have failed. Well, in one sense that might be true – if you take completion to equate with success. For a movement that is supposed to be revolutionising the (HE) system, the initial metrics some of the big xMOOCs are measuring and being measured by are pretty traditional. Some of the best known successes of recent years have been college “drop outs”, so why not embrace that difference and the flexibility that MOOCs offer learners?

Well, possibly because doing really new things and introducing new educational metrics is hard, and even harder to sell to venture capitalists who don’t really understand what is “broken” with education. Even those who supposedly do understand education, e.g. governments, find any change to educational metrics (and in particular assessments) really hard to implement. In the UK we have recent examples of this with Michael Gove’s proposed changes to GCSEs, and in Scotland the introduction of the Curriculum for Excellence has been a pretty fraught affair over the last five years.

At the recent #unitemooc seminar at Newcastle, Suzanne Hardy told us how “empowered” she felt by not submitting a final digital artefact for assessment. I suspect she was not alone. Suzanne is confident enough in her own ability not to need a certificate to validate her experience of participating in the course. Again I suspect she is not alone. From my own experience I have found it incredibly liberating to be able to sign up for courses at no risk (cost) and then equally have no guilt about dropping out. It would mark a significant sea change if there was widespread recognition that not completing a course didn’t automatically equate with failure.

I’ve spoken to a number of people in recent weeks about their experiences of #oldsmooc and #edcmooc, and many of them have, in their own words, “given up”. But as discussion has gone on, it is apparent that they have all gained something from even cursory participation, either in terms of their own thinking about possible involvement in running a MOOC-like course, or in realising that although MOOCs are free, there is still the same time commitment required as with a paid course.

Of course I am very fortunate that I work and mix with a pretty well educated bunch of people, who are in the main really interested in education, and all have the recognised achievements of a traditional education. They are also digitally literate and confident enough to navigate the massive online social element of MOOCs, and they probably don’t need any more validation of their educational worth.

But what about everyone else? How do you start to make sense of the badges and certificates you may or may not collect? How can you control the way you show these to potential employers/universities as part of any application? Will they mean anything to those not familiar with MOOCs – which is actually the vast majority of the population? I know there are some developments in California in terms of trying to get some MOOCs accredited into the formal education system – but it’s very early stages.

Again, based on my own experience, I was quite strategic in terms of the #edcmooc: I wrote a reflective blog post for each week, which I was then able to incorporate into my final artefact. But actually the blog posts were of much more value to me than the final submission, or indeed the certificate (tho I do like the spacemen). I have seen an upward trend in my readership and, more importantly, I have had lots of comments and pingbacks. I’ve been able to combine the experience with my own practice.

Again, I’m very fortunate in being able to do this. In so many ways my blog is my portfolio. Which brings me, in a very convoluted way, to my point in this post. All this MOOC-ery has really started me thinking about e-portfolios. I don’t want to use the default Coursera profile page (partly because it does show the course I have taken and “not received a certificate” for), but more importantly it doesn’t allow me to incorporate other non-Coursera courses, or my newly acquired badges. I want to control how I present myself. This relates quite a lot to some of the thoughts I’ve had about using Cloudworks and my own educational data. Ultimately I think what I’ve been alluding to there is also the development of a user controlled e-portfolio.

So I’m off to think a bit more about that for the #lak13 MOOC. Then Lorna Campbell is going to start my MOOC de-programming schedule. I hope to be MOOC free by Christmas.

eAssessment Scotland – focus on feedback

Professor David Boud got this year’s eAssessment Scotland Conference off to a great start with his “new conceptions of feedback and how they might be put into practice” keynote presentation, by asking the fundamental question “what is feedback?”

David’s talk centred on what he referred to as the “three generations of feedback”, and was a persuasive call to arms to educators to move from the “single loop” or “control system” industrial model of feedback to a more open, adaptive system where learners play a central and active role.

In this model, the role of feedback changes from being passive to one which helps students develop their own judgement, standards and criteria – capabilities which are key to success outside formal education too. The next stage from this is to create feedback loops which are pedagogically driven and considered from the start of any course design process. Feedback becomes part of the whole learning experience and not just something vaguely related to assessment.

In terms of technology, David did give a familiar warning that we shouldn’t use digital systems to do “bad feedback more efficiently”. There is a growing body of research around developing the types of feedback loops David was referring to. Indeed the current JISC Assessment and Feedback Programme is looking at exactly the issues brought up in the keynote, and is based on the outcomes of previously funded projects such as REAP and PEER. And the presentation from the interACT project, which I went to immediately after the keynote, gave an excellent overview of how JISC funding is allowing the Centre for Medical Education in Dundee to re-engineer its assessment and feedback systems to “improve self, peer and tutor dialogic feedback”.

During the presentation the team illustrated the changes to their assessment/curriculum design using an assessment timeline model developed as part of another JISC funded project, ESCAPE, by Mark Russell and colleagues at the University of Hertfordshire.

Lisa Gray, programme manager for the Assessment and Feedback programme, then gave an overview of the programme, including a summary of the baseline synthesis report, which gives a really useful summary of the issues the projects (and the rest of the sector) are facing in terms of changing attitudes, policy and practice in relation to assessment and feedback. These include:
* formal strategy/policy documents lagging behind current development
* educational principles are rarely enshrined in strategy/policy
* learners are not often actively engaged in developing practice
* assessment and feedback practice doesn’t reflect the reality of working life
* admin staff are often left out of the dialogue
* traditional forms of assessment still dominate
* timeliness of feedback is still an issue.

More information on the programme and JISC’s work in the assessment domain is available here.

During the lunch break I was press-ganged/invited to take part in the live edutalk radio show being broadcast during the conference. I was fortunate to be part of a conversation with Colin Maxwell (@camaxwell), lecturer at Carnegie College, where we discussed MOOCs (see Colin’s conference presentation) and feedback. As the discussion progressed we talked about the different levels of feedback in MOOCs. Given the “massive” element of MOOCs, how and where does effective feedback and engagement take place? What are the affordances of formal and informal feedback? As I found during my recent experience with the #moocmooc course, social networks (and in particular twitter) can be equally heartening and disheartening.

I’ve also been thinking more about the subsequent twitter analysis Martin has done of the #moocmooc twitter archive. On the one hand, I think these network maps of twitter conversations are fascinating and allow the surfacing of conversations, potential feedback opportunities etc. But, on the other, they only surface the loudest participants – who are probably the most engaged, self directed etc. What about the quiet participants, the lost souls, the ones most likely to drop out? In a massive course, does anyone really care?
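As an aside, for anyone wondering what sits underneath those maps, here is a minimal sketch of the kind of analysis involved – to be clear, this is not Martin’s actual method, and the archive file name and column names are made up (they loosely follow the shape of a TAGS-style spreadsheet export) – but it shows why the loudest voices dominate: the map essentially ranks people by how often they mention, and are mentioned by, others.

```python
# A minimal sketch of building a Twitter mention network from a course
# hashtag archive and surfacing its "loudest" participants.
# Assumes a hypothetical CSV with 'from_user' and 'text' columns.
import csv
import re
from collections import Counter

import networkx as nx

G = nx.DiGraph()
mention_pattern = re.compile(r"@(\w+)")

with open("moocmooc_archive.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        sender = row["from_user"].lower()
        # Add an edge from the sender to every account they mention.
        for mentioned in mention_pattern.findall(row["text"]):
            G.add_edge(sender, mentioned.lower())

# Degree counts how often someone mentions or is mentioned: the resulting
# map is dominated by these high-degree nodes, while one-tweet participants
# sit at the margins or vanish entirely.
loudest = Counter(dict(G.degree())).most_common(10)
for user, degree in loudest:
    print(f"@{user}: {degree} connections")
```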

Recent reports of plagiarism, and failed attempts at peer assessment in some MOOCs, have added to the debate about the effectiveness of MOOCs. But going back to David Boud’s keynote, isn’t this because some courses are taking his feedback mark 1, industrial model, and trying to pass it off as feedback mark 2, without actually explaining and engaging with students from the start of the course, and really thinking through the actual implications of thousands of globally distributed students marking each other’s work?

All in all it was a very thought provoking day, with two other excellent keynotes: Russell Stannard sharing his experiences of using screen capture to provide feedback, and Cristina Costa on her experiences of network feedback and feeding forward. You can catch up on all the presentations and join in the online conference, which is running for the rest of this week, at the conference website.

Enhancing engagement, feedback and performance webinar

The latest webinar from the JISC Assessment and Feedback programme will take place on 23 July (1-2pm) and will feature the SGC4L (Student Generated Content for Learning) project. Showcasing the PeerWise online environment, the project team will illustrate how it can be used by students to generate their own original assessment content in the form of multiple choice questions. The team will discuss their recent experiences of using the system to support teaching on courses at the University of Edinburgh, and the findings of the project. The webinar will include an interactive session offering participants the opportunity to get first hand experience of interacting with others via a PeerWise course set up for the session.

Further details and links to register for this free webinar are available by following this link.

Binding explained . . . in a little over 140 characters

Finding common understandings is a perennial issue for those of us working in educational technology, and the lack of understanding between techies and non-techies is something we all struggle with. My explaining the difference between formative and summative assessment to some of the developers I used to work with became something of an almost daily running joke. Of course it works the other way round too, and yesterday I was taken back to the days when I first came into contact with the standards world and its terminology, and in particular ‘bindings’.

I admit that for a while I really didn’t have a scoobie about bindings – what they were, what they did, etc. Best practice documentation I could get my head around, and I would generally “get” an information model, but bindings? Well, that’s serious techie stuff, and I will admit to nodding a lot whilst conversations took place around me about these mysterious “bindings”. However, I did eventually get my head around them and what their purpose is.

Yesterday I took part in a catch up call with the Traffic project at MMU (part of the current JISC Assessment and Feedback programme). Part of the call involved the team giving an update on the system integrations they are developing, particularly around passing marks between their student record system and their VLE, and the development of bindings between systems came up. After the call I noticed this exchange on twitter between team members Rachel Forsyth and Mark Stubbs.

I just felt this was worth sharing as it might help others get a better understanding of another piece of technical jargon in context.
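And since the tweets say it best in a little over 140 characters, I’ll just add my own attempt at making the jargon concrete. Below is a minimal sketch – an entirely made up mark record, not anything from the Traffic project’s actual integrations – of the distinction that finally made bindings click for me: the information model describes the data in the abstract, while a binding is the agreed concrete representation (JSON, XML, whatever) that lets two systems actually exchange it.

```python
# A hypothetical information model: the abstract definition of the data
# (a mark for a student on a module), independent of any wire format.
from dataclasses import dataclass
import json
import xml.etree.ElementTree as ET


@dataclass
class MarkRecord:
    student_id: str
    module_code: str
    mark: int


def to_json_binding(record: MarkRecord) -> str:
    """One binding: the same model serialised as JSON."""
    return json.dumps({
        "studentId": record.student_id,
        "moduleCode": record.module_code,
        "mark": record.mark,
    })


def to_xml_binding(record: MarkRecord) -> str:
    """Another binding: the same model serialised as XML."""
    root = ET.Element("markRecord")
    ET.SubElement(root, "studentId").text = record.student_id
    ET.SubElement(root, "moduleCode").text = record.module_code
    ET.SubElement(root, "mark").text = str(record.mark)
    return ET.tostring(root, encoding="unicode")


record = MarkRecord(student_id="s1234567", module_code="EDU101", mark=72)
print(to_json_binding(record))  # {"studentId": "s1234567", ...}
print(to_xml_binding(record))   # <markRecord><studentId>s1234567</studentId>...
```

Same model, two bindings – and a student record system and a VLE can only pass marks to each other if they agree on which binding they are using.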
