Was the plane landing in the snow early last Wednesday morning, after some signs of warmer, spring air finally arriving, a sign that this year’s OER conference was going to be dealing with a mass of contradictions? Or is that just part and parcel of our everyday life now?
Anyway, the sun did come out, and as ever the atmosphere at OER24 was warm, welcoming, open, and critically informed. Thanks to Tom Farrelly and Gearóid Ó Súilleabháin for chairing another successful conference, and to all the committee, ALT staff, and MTU staff and student helpers for their contributions.
This year I have not been writing here as much as previously but the OER conferences always provide me with inspiration to write something here. This year I’m not going on a flight of speculative fiction like last year, rather I’m going to try and set out my stall for some small acts of critical resistance. So, are you ready? Then let’s begin.
GenAI loomed heavy over nearly every session I went to. It was also a key theme of the presentation I gave with Keith Smyth and Bill Johnston. We were fortunate that our presentation came after the amazing keynote from Catherine Cronin and Laura Czerniewicz. It’s no surprise that two leading open scholars would provide such a rich contextualisation not only of their own open practice but also of the situation we all find ourselves in just now. You can read the essay that accompanies the keynote here and watch the recording here.
It does feel like every day we are at not just a crossroads, but a precipice of climate change, political polarisation, war, famine, and general f***ed-upness – or, as Laura and Catherine more politely called it, the polycrisis. But despite all that, big tech companies are still feeding us the narrative that things can change for the better through AI, that once again technology will save us, keep the shareholders and “the markets” ticking along nicely, and keep the rich rich and the rest of us in our place. We just have to accept GenAI in education and do our best to re-frame what we do and how we “know” – it’s not going back in the box now. Or do we?
Catherine and Laura’s keynote was in many ways a call to arms, asking us all what we can do, individually and collectively, to meet the many challenges facing open education.
The way GenAI tools distort the 5 R’s of OER (retain, reuse, revise, remix, redistribute) is quite a challenge to open education. Do we need OERs when we can just prompt AI to create something new, without having to worry about pesky copyright and citation? Now, I’m not going to get into the copyright debate here (I don’t have the time or the knowledge), but Jisc has just published its “An introduction to copyright law and practice in education, and the concerns arising in the context of Generative AI”, which is a good starting point.
From a personal point of view, one of the reasons I try to use and share OERs is not just altruism (tho’ that is part of it). From a more practical and selfish perspective, if I release something with an open license I get attribution; if I share it through an open repository I can access that resource anytime, anywhere. If I find and use an openly licensed resource I can see who created it, and I acknowledge them. I don’t just extract and move on.
In our talk we looked at a number of issues around critical pedagogy and AI, and how critical pedagogy could help us to address some of the challenges of AI and open education. How can we create alternative, meaningful narratives to challenge the Big Tech narrative? Some great work is already being done by many scholars (shout out to Helen Beetham here and her imperfect offerings), but we need more porosity, or leaky stories. Many of my friends don’t know about the environmental and human costs of AI; in fact some of them think “the cloud” is actually in the clouds, not on the ground using up masses of water and electricity.
In education it does seem that choice around using AI systems is increasingly disappearing. Whilst there is much great work going on around how to use these systems more critically (here and here are examples), maybe we should be thinking a bit less about using the systems (helping to train the algorithms with every prompt we enter) and more about critically engaging with the terms and conditions of use (again a point highlighted by Laura in the keynote). So whilst many institutions are developing policies around the use of AI, and publications such as the EU Ethical Guidelines on the use of AI and data in education provide sets of questions, those questions are really aimed at quite high institutional levels. I’m not sure I could use them meaningfully. They are aimed more at awareness raising, many of them starting with “are teachers and students aware of . . .”. Which is fine as a starting point, but what level of “awareness” is really needed? What level of awareness do I need, and what do students need?
If (as someone mentioned to me at the conference) staff were “made aware” of MS Copilot being introduced the day before it went live, did they have time to even consider the implications for them, their work, their intellectual labour? Are the algorithms it is using transparent and explained in plain English? How are its “efficiencies” defined and measured in the context of admin processes, learning and teaching, etc.? When you leave an institution, do you have the right to withdraw your data from the Copilot data set? Who or what is monitoring the outputs the system returns for accuracy? Is this just another version of a big system extracting our knowledge, then charging us to repackage it and sell it back to us?
I don’t know, maybe there are answers to these questions. But if there aren’t, surely this is where open educational practice comes into its own, by providing the space to have discussions based on these types of questions. A form of Freire’s culture circles, perhaps? We could then share the outputs (perhaps some standard questions that individuals could ask their institutions, or use themselves to help navigate the Ts&Cs of any AI-powered system) as OERs. These spaces, questions, and outputs could help us develop some small acts of critical resistance that just might help us collectively create some new, open narratives and give us some hope for the future.
If you are interested in taking this further or have any other ideas, then please do let me know in the comments or by email and we can try and start to do something.
One final point about #OER24. It gave lots of us a chance to say thank you to Martin Weller for his work in open education. As you may know, Martin is leaving the OU and stepping down from the GO-GN network in June, so this was his last appearance at #OER in that capacity. I’m sure he will be back! But I just wanted to thank Martin for his open scholarship and practice. Through his blogging – not just writing but commenting on others’ blogs – he has opened so many doors for people like me to engage with open education. Martin also took a bit of a chance and invited me to give a keynote (my first) at the OER15 conference. I’ll always be grateful for that opportunity. I wish him all the best for the next phase, and I have a sneaky suspicion open-ness will still be part of it.
Good stuff Sheila,
Arriving in Cork last week certainly proved that it is sleet and hailstones that come out of the cloud and not chunks of data!
Your points about the need to challenge and indeed change the impacts of AI systems on practice are well made. There does seem to be a problem in that the systems are running ahead of practice, and folk are being swept along by the providers and their agents in institutions, as the Copilot experience illustrates. This was acknowledged in sessions and lunchtime chats I was party to. As was concern about ethics and misuse of personal data from people engaging with systems without fully appreciating the extent of their incorporation into the corpus.
I reckon we should be aiming for mechanisms to ensure informed consent to AI procedures, otherwise we will be moved from being professional educators and scholars to becoming data fodder for systems we can’t control. OE supporters would be strong voices in defining what information is needed, how it would be discussed, when it would be required, where it would be stored, what form active consent would take, and who would be responsible for making sure change was by consent and subject to scrutiny. This suggests a networked approach including administrators, technical services people, OER specialists and educational developers, aligned to wider open discussion fora, to ensure that any proposed technical fix is understood and supported by the wider community of lecturers, librarians and students in an institution.
To that end, your idea of formulating and posing specific questions to elicit clarity is well founded. The processes need to be firmly grounded in the micro level of staff/student daily interactions with tech, and not simply represented in high-level institutional policy statements, possibly designed to safeguard the institution from complaints rather than facilitating the best educational outcomes. The familiar macro, meso, micro levels of institutional structure could be helpful in clarifying all the areas where good practice would occur. I noticed that versions of this triadic framing came up in most of the sessions I attended at OER24; however, the tendency was to reference one or other level, usually macro (institutional strategy) or micro (lecturers/courses/projects/practices), so an effort to carry out this kind of analysis as an institution-wide project looks useful.
As to how? I think your mention of culture circles (à la Freire) is very helpful in suggesting a critical approach, so it would be good to hear more about that and how it might be done. Would it be possible to introduce a culture circle format at institutional level? Who would be involved? How would it relate to power and decision-making? On another tack, could a culture circle be designed to shape a format for a conference activity? Perhaps as a complement/alternative to the familiar workshop proposals? I think you were speculating about this at OER, so maybe one to expand?
Catherine and Laura’s keynote was all you said it was so onwards to the essay!
A thought line I took from the keynote was that the polycrisis they described represented a low for democracy. This seems clear enough looking round the world today, and disturbing given the large number of democratic elections anticipated around the world in 2024. This also presents a challenge for OE in debating how best to intervene to help raise the democratic bar.
My immediate thoughts ran along the lines of engaging with the concept and practice of Deliberative Democracy – an approach to democracy focussed on deliberative discussion leading to informed decisions, as opposed to more established approaches based in party politics, periodic elections, propaganda and media manipulation. In Deliberative Democracy, decision making is referred out from government/state functions and set alongside elections/referenda, involving smaller representative populations in structured formats with access to reliable information, not relying solely on media presentations of issues or personal biases.
The familiar version is often termed a Citizens Assembly, and these have been used in a variety of national contexts, typically to help resolve large and contentious social issues by reporting views and proposing advice to governments. Success varies in the political world, and it is evident that governments can and do simply ignore the advice given by an Assembly. So the issue of people in relation to power is a key constraint on the process. However, the process of selecting a representative sample of citizens willing to commit to regular sessions over a period of months, where experts provide information to the sessions with the locus of deliberative discussion and reporting residing with the assembly, seems sound.
Could we use this approach?
Maybe a Citizens Assembly to discuss open education could be mooted? AI certainly seems large and contentious enough to warrant an Assembly – for example, the UK government held an AI ‘summit’ last year, but kept the participant list strictly limited to represent big tech leaders etc. It might sound unrealistic for the OER community to achieve, but it could be worth debating as a means of exploring how the social, political and economic policy considerations that are evident in OER discussions can be introduced into a wider democratic landscape. Given the state of flux you identified at the start of your post, and which I think is reflected in Catherine and Laura’s keynote, it might be worth giving it a go.
Perhaps a more readily achievable idea would be to create a ‘Citizens Assembly of Open Educators’? The idea of setting up a representative group committed to ongoing and informed debate, to address the current state of flux and its implications for education, could be feasible. We do informing and discussing all the time through annual conferences, presentations, scholarship and writing, so why not create a structured, ongoing assembly to report to the community and invite decisions? This could be done through a mix of F-F and online communications, and could be conceptualised as an international effort. The starting point would be to draft a brief for the Assembly and see how practical/attractive it might be.
Anyway, I’m hoping that whatever else happens, OER25 will include a Tom Farrelly Gasta or two.
Best wishes,
Bill.