I have been asked to step into the breach, so to speak, for the "learning analytics interventions should always be mediated by a human" debate later this week at Digifest.
The structure for the debate is as follows:
The machine will argue that machines can use learning analytics to provide timely and effective interventions to students, improving their chances of achieving better qualifications: machines don’t forget or get sick; learning analytics is more accurate and not prejudiced; and there is evidence for automated interventions.
The human will argue that although machines can make predictions, they will never be 100% accurate; only a person can factor in personal circumstances; automated interventions could be demotivating; and automated interventions are not ethical.
Fortunately for me I have been given the human side of the debate. Unfortunately for the organisers, Leanne Etheridge is no longer able to attend. Leanne, I will do my best.
Preparation for the debate has started already with this blog post from Richard Palmer, aka “the opposition”. In order to get my thoughts into some kind of order for Wednesday morning’s debate, I’m going to try and outline my reactions to the provocations outlined in the post by my learned colleague.
Richard has outlined three key areas where he believes there is increased potential for data-driven system interventions:
1. First of all, humans have a long history of believing that when certain things have always been done in one way, they should stay that way, far beyond the point where they need to be. . . .If you look at Luddite rebellions, we thought that it should always be a human being who stretched wool over looms and now everyone agrees that’s an outdated concept. So, deciding that something needs to be done by a human because it always has been done by a human seems, at best, misguided.
2. Secondly, people object that the technology isn’t good enough. That may, possibly, be the case right now but it is unlikely to be the case in the future. . . Technologies will improve. Learning analytics will become more advanced. The data that we hold about our students will become more predictive, the predictions we make will be better and at some point institutions will decide where their cost benefit line is and whether everything does have to be human-mediated.
3. Thirdly, how good do we actually think people are? Certainly, human beings can empathise and pick up on non-verbal or even non-data-related signals from other people, but when was the last time a computer turned up to work hungover? Or stressed or worried about something – or just didn’t turn up at all?. . . . Will a computer ever be better than the perfect person? Maybe, maybe not. But, let’s face it, people aren’t perfect. . . .We worry about computers sending insensitively worded emails and inappropriate interventions but we all know human beings who are poor communicators, who are just as capable, if not more, of being insensitive.
Where to start? Well, despite us pesky humans almost falling at the first hurdle by not being able to be there in person – so unreliable! – we can pick up a challenge and a thread from where our colleagues have left off without the need for any additional programming. I don’t know what Leanne was going to say, but I really like the two quotes on the two slides she has selected. (I detect an air of confidence from only two slides!)
“It is the supreme art of the teacher to awaken joy in creative expression and knowledge” Albert Einstein
“Every student can learn, just not on the same day, or in the same way” George Evans.
Going back to Richard’s post, I believe there is a truly pressing need to challenge this apparently sensible, logical narrative. The narrative being spun around data and analytics is becoming an ever more complex web for us to break out of. But break out of it we must! To paraphrase Paulo Freire, it is time for some critical analytics. It is time to seriously consider the analytics of the oppressed.
Point 1 – On humans: “deciding that something needs to be done by a human because it always has been done by a human seems, at best, misguided.” I always worry when the Luddite card gets pulled into play. The negative connotations it carries erase the many, many skilled craftspeople who were actually fighting for their livelihoods, their craft. Audrey Watters explained this perfectly in her 2014 ALT-C keynote, Ed-Tech’s Monsters.
“The Luddites sought to protect their livelihoods, and they demanded higher wages in the midst of economic upheaval,”
Sound familiar? It strikes me as uncannily similar to our current union campaigns for fair pay and to stamp out the casualisation of academic staff contracts. But it’s OK, because the overriding managerial narrative is that data can help us rationalise, to streamline our processes. It’s been a while since Freire wrote this, but again it rings true today:
Our advanced technological society is rapidly making objects of us and subtly programming us into conformity to the logic of its system. To the degree that this happens, we are also becoming submerged in a new “culture of silence”.
Point 2 – On technology not being good enough: “Technologies will improve. Learning analytics will become more advanced. The data that we hold about our students will become more predictive, the predictions we make will be better and at some point institutions will decide where their cost benefit line is and whether everything does have to be human-mediated.”
Data about our students will be more predictive? Our predictions will be “better” – better at doing what? Better at showing us the things we want to see? Getting our student “customers” through their “student success journeys” without any difficult interrogations, without the right to fail? Or actually stopping someone from starting or continuing their educational journey because their data isn’t the “right fit”?
The promise of increasing personalisation fits into an overwhelming narrative from ed-tech companies that is permeating through governments, funding bodies and university leaders: personalisation is the future of education; personalised alerts are the natural progression to student success. But are they just another form of manipulation, assuaging the seemingly endless collective need to measure, monitor, fitbit-itize the educational experience? The words of Freire again ring true:
One of the methods of manipulation is to inoculate individuals with the bourgeois appetite for personal success. This manipulation is sometimes carried out directly by the elites and sometimes indirectly, through populist leaders.
Point 3 – Just how good are people anyway? We don’t turn up, we get ill and we are biased. Well, all of those apply to most systems I’ve ever interacted with. Our own biases are intrinsically linked to the systems we develop and to the interpretations of data we choose to accept. As Freire said:
One cannot conceive of objectivity without subjectivity
I cannot agree that the downside of machine interventions is “no worse than humans doing it badly”. Surely we need to be engaging critically to ensure that no human or machine is doing anything “badly”.
The “system” should not just be replicating current bad practice. Data should provide us with new ways to encourage a richer dialogue about education and knowledge. Learning analytics can’t just be a way to develop alerting and intervention systems that provide an illusion of understanding, that acquiesce to not particularly well-thought-out, government-driven monitoring processes such as the TEF.
In these days of alternative facts and distrust of expert knowledge, human intervention is more crucial than ever. Human intervention is not just an ethical issue; it’s a moral imperative. We need to care, our students need to care, our society needs to care. I’ll end now with the words of the Cassandra of ed-tech, Audrey Watters:
In order to automate education, must we see knowledge in a certain way, as certain: atomistic, programmable, deliverable, hierarchical, fixed, measurable, non-negotiable? In order to automate that knowledge, what happens to care?
16 thoughts on “Time for Analytics of the Oppressed? – my starter for 10 for #digifest debate”
Many make the assumption that analytics are unbiased too, let’s also not forget that the machines are “programmed” by humans, and that the algorithms will always be, in some way, politically motivated, or hold a bias from the originator.
absolutely Lawrie – was saving that for the day😀
oh, sorry, feel free to delete 😉
no, it’s fine – you’re the only person that reads this blog anyway!
Great post Sheila. Work has been shared between humans and machines for a long time; it’s just that the pace has been gathering recently 🙂 On one level, the decision is how to share this work for the benefit of humanity, e.g. we could delegate some work to machines to leave more time for the bits that humans do best, but I don’t think that is always what is happening.
I am convinced by what Richard Edwards says “the work of the knowledge infrastructures of open education results in an inherent inscrutability within its practices, which is elusive in terms of significance, processes and effects.” http://www.tandfonline.com/doi/full/10.1080/17439884.2015.1006131
My personal concern is that the qualitative research that online and open education cry out for is neglected at the expense of quantitative research for which the data is more easily found and analysed. This is an example of the streetlight effect 🙂 https://en.wikipedia.org/wiki/Streetlight_effect
thanks Frances – will try and weave this into the debate on Wednesday too. I agree re research, we can’t just go for the “low-hanging fruit”
As I said on Twitter, great title, Sheila, and bringing in Paulo Freire inside it is an additional bonus! I love where you’re going with this but would love it if you had the opportunity to take it further into more of Freire’s ideas with regards to praxis, consciousness-raising and empowerment of the oppressed.
Agree w Lawrie re algorithmic bias. The book Weapons of Math Destruction is enlightening on that front. I did my undergraduate computer science thesis using a genetic-algorithm-based neural network (of the type that teaches itself and evolves, similar to what some intelligent algorithms use now, I think) and even when the human designer has no control over the process the algorithm learns, humans make choices over WHICH inputs to feed the network and which outputs to consider successful. And those are all inherently biased. Also which dataset to feed the network, so it learns. All of that is human choice.
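[Editor's note: the point above – that the human choice of inputs, not the learning algorithm itself, is where bias enters – can be sketched in a few lines of Python. The data, the students and the scoring rule here are entirely invented for illustration; they are not from any real learning analytics system.]

```python
# Illustrative sketch (hypothetical data): the rule stays fixed, but the
# human-chosen feature set changes which students get flagged "at risk".

def flag_at_risk(students, features, threshold=0.5):
    """Flag a student when the mean of the chosen features falls below threshold."""
    flagged = []
    for name, signals in students.items():
        score = sum(signals[f] for f in features) / len(features)
        if score < threshold:
            flagged.append(name)
    return sorted(flagged)

# Hypothetical students: each value is a 0..1 "engagement signal" a
# system designer might (or might not) decide to feed the model.
students = {
    "amal": {"vle_logins": 0.2, "grades": 0.9, "library_visits": 0.1},
    "bryn": {"vle_logins": 0.9, "grades": 0.3, "library_visits": 0.8},
}

# Same students, same rule -- different human choices, different outcomes.
print(flag_at_risk(students, ["vle_logins", "library_visits"]))  # -> ['amal']
print(flag_at_risk(students, ["grades"]))                        # -> ['bryn']
```

Neither output is the "objective" one: whoever decides that logins count as engagement, or that grades define success, has already made the value judgement the comment above describes.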
The last point about care is a CRUCIAL one and often overlooked. People who talk about AI and LA replacing teachers reduce the roles of teachers to transmitting knowledge (kinda like Freire calling it banking model) rather than inspiring and caring – whereas I see these latter roles as the more important ones in education. What do you think?
What I think is interesting is the thinking of Paul Prinsloo on how to decolonize learning analytics such that learners possibly hold more power/control over their data and how it’s used. This could be a third path…debates tend to create false binaries
Thanks Maha – this is so useful. I was hoping to bring in praxis but just didn’t have time. However I think I will revisit this, I probably should work into a proper paper.
Debates aren’t always the best format for these kinds of things. I’m sure the person on the other side of the debate couldn’t possibly be on the extreme end of things if they’re any critical thinker at all. If you turn this into a paper, I can’t wait to read it 🙂
no they weren’t – in fact we basically agreed. If I do anything else I will let you know. I do have another blog post I need to do after the event now, and feedback – including yours.
This is wonderful Sheila. I was just scrolling through Twitter to delay the need to write the first major paper for my EdD. The topic? Looking at building a space for students to gain algorithmic and data understanding using various curricular lenses. You have provided good motivation to think carefully about Freire – a decolonizing lens is also on the agenda so I’ll be sure to look at Paul’s work as well! Thank you again.
Thanks Tanya – so glad you found it useful – there is also a follow-up post https://howsheilaseesit.blog/2017/03/20/i-wish-id-said-that-reflections-from-digifest17/ – all a bit of a ramble! Good luck with your paper
Analytics like all human phenomena reflect & advance various beliefs; hide and restrict others. As w/ Luddism, same as it always was. More here http://rworld2.brookesblogs.net/
Indeed, and the beliefs of the rich and powerful few all too often drown out the needs of the rest of us – thanks George