On being (almost) a top banana and other thoughts on 2020

It’s the time of year for reflection, and boy what a year 2020 has been. To be honest I don’t think I have processed it all – not yet anyway. So I thought I’d try to share a few thoughts inspired by bananas! Yes, dear reader, that’s not one of my many typos, I do mean bananas.

Last week I got a little, personalised infographic from Marks and Spencer giving me a view of my spending habits over the past year. To my surprise I was the third top buyer of bananas in my local M&S foodstore.

As I commented on Facebook, this was possibly the most useless bit of data ever. Oh how we all laughed! Making it into the top 3 of banana buyers caused much hilarity, and it was possibly my most popular and engaged-with (yes, there were lots of comments, not just likes and emojis) FB post of the year. I am still waiting for my end of year FB roundup – bet that will be a beauty too.

But the bananas did get me thinking and reflecting on data, numbers, customer profiling and personalisation. All key themes of 2020.

This year has been dominated by numbers. Shockingly high numbers of deaths due to COVID-19 – a series of blog posts I wrote during lockdown all started with the ever increasing official UK death toll; ever increasing numbers of people made redundant due to the impacts of lockdown; the profits some companies are still making; the ever increasing numbers of children in poverty; the eye-wateringly high personal wealth of the world’s billionaires; the increasing cost of Brexit; the increasing number of unelected (and at times slightly dodgy) Peers.

The divisions between the haves and have nots have become more acute and sadly the gulf seems to be getting greater. It’s all in the numbers . . .

But back to the bananas. I remember, back in the day when “learning analytics” was still just a slightly odd word combination, going to an event in Oxford where some “angel investors”, govt peeps and a few weirdos like me were invited to discuss how to save and share data (educational mainly). Some of you might remember the phrase “data lockers”. Anyway, it’s all a bit of a blur now really, but I do always remember Tony Hirst (the man who helped me really understand the power of data) commenting at the time about the lack of anyone from retail being there. I’m sure he said something along the lines of: Tesco know more about us all than the government ever will, and they never share “their” data.

That’s always stuck with me – even as I swipe my Tesco clubcard, and various other ‘customer rewards cards’. Tesco are very clever about regularly sending me money-off vouchers; that’s a trade-off I can live with, as they don’t share that data – it’s worth too much to them. However, the bananas and the infographic have got me thinking again about data manipulation, and personalisation.

At this point I need to give a bit of context to the bananas. I like a banana as much as anyone else who likes them. I don’t eat or buy (or so I thought) an excessive amount of them. So to be in a top three buyer category did surprise me. Even more so as I don’t actually do that much food shopping in Marks and Spencer. If I had access to all my data I’m sure other, non-fruit, food items would top my banana purchases – but back to that in a minute.

However, during lockdown, and particularly at the start of lockdown, I was finding it hard to get fresh fruit at my regular supermarket, but my local M&S was the exception so I did do more fruit shopping there than normal. As panic buying died down (only to reappear now!) and stocks became more plentiful, my fruit buying at M&S declined – the naughty stuff probably stayed the same, but I can’t be sure as I can’t access that data.

Now obviously M&S want to promote themselves as a “healthy” option, and to appeal to my (completely non-)competitive nature by rewarding me with positive news about my shopping habits. Being more than content with a “bronze medal”, as someone put it, I really have no intention of trying to improve that rating – or of spending any more money on bananas. But I am very curious now about the data (aka my data) that M&S hold, and how the decisions were made around what to share with me about my apparent preferences and inferred lifestyle choices.

I’m pretty sure my M&S dashboard has a whole series of other views that it could have shared. And that got me thinking about how data can, and is, manipulated to provide a seemingly ‘personalised’ view of “stuff”. A view that is not actually centred on helping me maintain a healthy, balanced diet and lifestyle but is actually all about getting me to spend more money in a retail outlet. And that brought me back to education, and data, and personalisation.

So many ed tech companies are selling ‘the dream’ of personalisation through their data platforms – but is it really personalisation? Or is it just the thin veil of a user’s first name being placed on certain pages, and some choice of colour options, offered in a “personalised” way to another three and a half million “users” of homogenised content and quizzes? What/where is the wider context of the “user” (aka learner) data being used?

There have been too many people this year to mention in this post who raise these questions in a far more informed and nuanced way – in particular Ben Williamson, for keeping a constant track of ed tech investment and “innovation”, and Audrey Watters, for her continued role as the Cassandra of Ed Tech and, in particular, for charting the rise of surveillance in education. These methods of track and trace I approve of!

Which brings me back, not quite to the bananas, but to more numbers, data, and notions of data surveillance that the COVID-19 pandemic has raised. Again, like many, I was quite skeptical of the original track and trace app the UK government had planned. One step closer to a dystopian Big Brother State loomed . . . but that hasn’t quite happened, and afaik the track and trace app is quite safe to use.

But in the same way that I have become accustomed/accepted/complied (not quite sure what the right word is here, so you can choose which one you think best fits) to retail consumer profiling and trade-offs, I am slightly worried by some internal conversations I am having.

Would I trade a “little bit” of surveillance – in terms of data about, for example, being COVID tested, or about when (with a big IF caveat here) I get vaccinated – to be allowed to do a little bit more: for example, to visit friends and family who live in a different part of the country/the world, or to have people in my house? Is a bit of data about my health going to be the price of freedom in 2021? And who will own that data? What inferences will be made from that data? Possibly a bit much for this almost top banana to figure out. Perhaps I need to work this out in a speculative data story?

In the meantime I wish you a very Merry Christmas and a hopefully brighter 2021.

Data monsters, super creeps – the role of AI in learning and teaching

Earlier this week I was invited to take part in a round table discussion hosted by TES and Jisc at the University of Glasgow. This was a very small event – there were 10 of us – and it was a sort of pre-conference event before the official start of the THE Teaching Excellence Summit.

I’m not quite sure how I got invited but I suspect it was a combination of being a ‘kent’  face to Jisc, being local and being female. It probably comes as no surprise that I was one of only two women in the group, and that everyone in the room was over 40 and white . . .

It was an interesting, free flowing discussion which is being written up for a future TES article. Nothing earth-shatteringly new was discussed. We made no major breakthroughs. I may have ranted a bit . . . well, I kind of felt I had to, as I was the only person at the table from a university below DPVC level.

We didn’t really talk about AI as such that much, but we did talk about data: how to get it, how to use it, who owns it, and a little bit about what it could do for learning and teaching. This inevitably brought us to retention, predictive analytics and ethics. It was heartening, dear reader, to see the agreement in the room about just exactly how data can help or not with this, and the need for much more research around what data is actually meaningful to collect and how to then make (and resource) effective interventions. I also made sure I got in the point about data not being neutral and the bias inherent in AI. I possibly couldn’t resist using the analogy of the make up of the group sitting at the table having the discussion . . .

We did have quite a bit of discussion about the role of edtech companies, the seemingly never-ending issues of (lack of) interoperability in university systems, and just what it is we are trying to do with data. Nothing particularly new really seemed to be the consensus. But still we are being told that the “business” of education must be able to be improved with data, AI, machine learning. I may have ranted a bit more about the current political climate, the danger of the promise of “personalisation” and the reality of increasing homogenisation.

There was a throwaway remark about “feeding the beast” in relation to all the data/data exhausts that could be “consumed” and “industrialised”. At that point, David Bowie popped into my head. Well, not literally, but the line “scary monsters, super creeps” – except this time I was changing the words in my head to “data monsters, (neoliberal) super creeps”.

I do think there is potential for data and some elements of AI within education and wider society. I also think that just now it’s really, really important that we in education are leading critically informed discussions with our students and the rest of society about how “it” all actually works, who is in control, who is programming the AI, who owns data. If we don’t do it, then we will just be consumed by the edtech super creeps. They will inevitably sell our data back to us, in workflows they think are appropriate for an efficient (ergo effective), dashboarded-to-the-max student journey that actually might not be that great a learning or teaching experience.

I didn’t take that many notes, and I’m looking forward to reading the TES article if/when it appears and I can maybe write a more considered reflection then. In the meantime I’ll leave you with a bit of David’s scary monsters.


Talking assessment data and dashboards

GCU is part of a group of 12 institutions across the UK who are taking part in a small pilot project with Jisc and Turnitin as part of the wider Effective Analytics Programme. The project is exploring how, and more importantly what, data from Turnitin can be used effectively within the Jisc Learning Records Hub. A key part of this work is engaging with stakeholders from across all the institutions involved in the project. To this end, Kerr Gardiner is facilitating a series of workshops with each institution, and earlier today it was our turn.

Although we can get data from Turnitin, as with quite a lot of systems the reports that we can access are all pre-designed by Turnitin. We can access high level data in terms of overall numbers of submissions, marks with various types of feedback (quickmarks, audio, etc.) and marks using rubrics, and we can access these at module level, but it all comes as either huge or not so huge CSV files, and is missing some of what we consider to be vital data.

So it was good to have an opportunity to discuss what our needs and priorities are.  One of our key requirements, and frustrations, is that we can’t get date stamps for when assessments are uploaded and then when the marks and feedback are submitted.  Like most institutions we have an agreed feedback turnaround time, and it would be really useful to see if we are meeting that.  That data is not available to us.  It would be really good if it was.
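If Turnitin did expose those date stamps, the check itself would be trivial. Here is a minimal sketch, in Python, of the kind of turnaround report I mean – everything here is an assumption: a hypothetical CSV export with “paper_id”, “submitted_at” and “feedback_at” columns, which is exactly the data we can’t currently get.

```python
# Minimal sketch of the turnaround check we'd like to run, assuming a
# hypothetical CSV export with per-submission date stamps. The column
# names ("paper_id", "submitted_at", "feedback_at") are illustrative --
# Turnitin does not currently expose these dates to us.
import csv
from datetime import datetime

TURNAROUND_DAYS = 15  # example value for an institutionally agreed feedback window


def late_feedback(path):
    """Yield (paper_id, days_taken) for feedback returned outside the agreed window."""
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            submitted = datetime.fromisoformat(row["submitted_at"])
            returned = datetime.fromisoformat(row["feedback_at"])
            days_taken = (returned - submitted).days
            if days_taken > TURNAROUND_DAYS:
                yield row["paper_id"], days_taken


if __name__ == "__main__":
    for paper_id, days in late_feedback("turnitin_export.csv"):
        print(f"{paper_id}: feedback took {days} days")
```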

We also had quite a bit of discussion around some of the UI issues which relate to data too. The new Turnitin Feedback Studio interface is really user friendly, but setting up an assignment is still quite clunky and it’s really easy to miss some of the vital parts – like the grading information. A few tweaks there might be really useful. We also discussed having an option to mark whether an assignment is formative or summative as part of the set-up. That would be another really useful data set to have, for a whole host of reasons around assessment weighting.

We were also asked to think about dashboards. Is it just my imagination, or is it illegal to have any kind of discussion about learning analytics without mentioning dashboards? Just now our focus for assessment data is really to provide staff with more relevant access to data. I think in terms of overall learning analytics there is an opportunity to get far greater buy-in from staff, and a more nuanced discussion about data and learning, when it is in the context of assessment.

Assessment and feedback is always high on everyone’s list, but we need to be really mindful of how and when we provide data to students around assessment, due to the complex emotional impact it can have. In her recent post on student dashboards, Anne Marie Scott highlighted the need for more careful thought around the development of student dashboards. She also refers to Liz Bennett’s recent research around student dashboards and the notion of thinking of them more as socio-material assemblages. I really hope that part of this Jisc work will be on understanding data needs and working with staff first, before rushing to add other elements to their developing student-facing dashboard.

Anne Marie also highlighted the need for greater understanding and development of feedback literacy – getting students to recognise and understand what feedback is. Part of our discussions was around having a way not just to record if/when students have accessed feedback, but also a way for students to feedback on their feedback. Perhaps an emoji to indicate if they were happy with the feedback, as sketched below. Again, access to this type of data could be really useful at a number of levels, helping to start some more data informed discussions and playing a small part in the development of wider feedback literacy.
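To make that idea a bit more concrete, here is a toy sketch of the sort of event record such a feature might capture – when feedback was opened, plus an optional emoji reaction. This is entirely hypothetical; nothing like it exists in Turnitin today, and all the names are made up for illustration.

```python
# Toy sketch of the event record a "feedback on feedback" feature might
# capture: when a student opened their feedback, plus an optional emoji
# reaction to it. Entirely hypothetical -- no such record exists today.
from dataclasses import dataclass
from datetime import datetime
from typing import Optional


@dataclass
class FeedbackEvent:
    student_id: str
    assignment_id: str
    opened_at: datetime             # when the student first viewed their feedback
    reaction: Optional[str] = None  # e.g. a happy/neutral/sad emoji -- the feedback on the feedback


# Example: a student opens their feedback and is happy with it.
event = FeedbackEvent("s1234567", "essay-01", datetime.now(), "😀")
print(event)
```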

I’m looking forward to seeing how this work progresses over the coming months, and thanks to Kerr for his informed facilitation of the session – and of course for introducing us to giant post-it notes.


photo of giant post-it notes


Exploring a bit of the magic sauce, neuroliberalism and what data does and doesn’t know about me

In this week’s HEWN newsletter (no AI or algorithms there, just good old human research, editing, evaluation and critique) Audrey Watters said:

“If there is one article I would insist those in education / technology read this weekend, it’s this one by Ben Williamson: “Learning from psychographic personality profiling.” Really. Read it.”

I did  – so should you.

I’ve never been a fan of any kind of personality profiling or psychometric testing. In one of my previous jobs we were subjected to psychometric testing as part of team building days. I hated it. It didn’t serve any purpose that I could see – and it was all done on paper which I think was destroyed. However I am aware that even back then it was used more regularly and rigorously by many companies to sort, select and manage employees.

As the Cambridge Analytica Facebook data scandal has shown, it is now being used as the basis of digital profiling. If you’re worried at all about data driving personalised learning – the sinister sausage machine of education – then we all need to be looking to the work of people like Ben. How this type of data profiling is and will be sold to education is a major concern, particularly if we really want higher education to be inclusive, able to help with widening participation and to address the attainment gaps that, certainly here in Scotland, sadly seem to be growing every year. I had never heard the term “neuroliberalism” before reading Ben’s post, but it could now be my favourite new word.

Following my post yesterday, and reading Ben’s post, I decided to have a closer look at my own data using a bit of data magic from Apply Magic Sauce. This “service”, developed and run by the University of Cambridge, is

A personalisation engine that accurately predicts psychological traits from digital footprints of human behaviour

Using your Facebook and/or Twitter data it will predict:

your psycho-demographic profile from digital footprints of your behaviour. It reveals how you might be perceived by others online and provides detailed insights on your personality, intelligence, life satisfaction and more.

Predictions are based on opt-in psychological ground truth from over 6 million volunteers, and our data has been used in over 45 peer-reviewed scientific articles.

So what did it make of me? Well, from my Twitter data it deduced that I am 33 (FTW!) but it was not so sure about my gender.

In terms of the “Big 5” personality traits I am kind of artistic and liberal but a bit of a loner.

Going into a bit more depth it would appear that I am quite open to “things”

and my Jungian personality is INTJ – introverted, intuitive, thinking, judging. I’m also totally average.

So what about Facebook? Well, it was still not sure, but it thought I was more likely to be female.

In terms of the Big 5 some subtle differences from Twitter – but basically the same.

However the more interesting thing about the Facebook data is that “the magic” tells you which likes make you appear more/less impulsive and more/less artistic and liberal.

I’m also, according to this limited data profile, about averagely intelligent and could be happier, but there is “still a chance that I might be brighter than the average person”.

Again, it is quite interesting to see which likes make me appear more or less intelligent.


My Jungian personality type this time around is ISTJ – introverted, sensing, thinking, judging.

But probably more interesting are the political and religious inferences this data magic produces.

And the likes and dislikes it bases this on.

It also seems to think I am a nurse . . .

So what does it all mean? Should I be relieved that this isn’t that accurate? Should I just stop liking anything related to shopping to make me appear more intelligent? Should I start liking more sites and posts that make me appear more liberal and artistic? Should I just carry on regardless? Should I be worried that my actual self may be disregarded – that I might not be given an opportunity to get a new job based on this data?

What I should be worried about is what Ben says:

Expert knowledge about students is increasingly being mediated through an edu-data analytics industry, which is bringing new powers to see into the hidden and submerged depths of students’ cognition, brains and emotions, while also allowing ed-tech companies and policymakers to act ‘smarter’, in real-time and predictively, to intervene in and shape students’ futures.

When will that be applied to staff and what measures will be applied to us? How critical will we be allowed to be? What will the neuroliberal indicators of staff suitability be?

AI, self-driving cars, and the joy of getting lost: thoughts from #connectmore17


(image: unsplash)

Martin Hamilton, Jisc’s “Futurist”, gave the keynote talk at yesterday’s connectmore17 event held, very handily for me, at my institution, GCU. Martin gave an entertaining overview of some technological developments, weaving his way from computer games, to drawing (cats) programmes, to space rockets, to self-driving cars. All of these developments, he said, would have – and are having – an impact on education. As he was talking I couldn’t help but think about the talk Audrey Watters gave at the University of Edinburgh earlier this year called Driverless Ed-Tech: The History of the Future of Automation in Education. I could end this post here by saying: just read that post. But I’ll add a bit more context.

Throughout the day there was a lot of really useful collective sharing of practice, issues, challenges and, you know, all the face-to-face, networked, discursive “good stuff” this kind of event engenders. There was a real feeling of “the collective”. To use Audrey’s analogy, it felt like the majority of us were all on the same bus.

During the closing panel session (which was sadly only saved from being a manel by including me) it probably won’t surprise you that the issue of the future of education and the role of learning analytics came up. And again we came back to self-driving cars, and that narrative that with a little automation we could really make impacts on the personal journeys of our students. Now, I may have had a rant or two about this over the day and during the session, but as is typical with me, it’s only later that I actually figure out what I should have said – hence this post.

Driverless cars, using data from us “real drivers”, have the potential (are-ish) to get us from A to B in the most efficient manner – personalised, of course, to our individual preferences – without us realising that our personal experience is a default setting that a couple of million others will be experiencing.

I’m quite good at getting lost – even with sat nav – but I generally manage to get where I need to go. Sometimes my detours are very frustrating and waste lots of time; other times they take me to really unexpected places and people. So although I do admit to enjoying the safety net of GPS, I’m not reliant on it. I find road signs, street names and, at times, even people are pretty helpful in finding places.

That’s my fear for education, and again Audrey writes about this far more eloquently than I ever could. The illusion of personalisation, the ever-growing demand for successful ‘learner journeys’ from enrolment to graduation in the most efficient way, should worry us all. Education should take you to new, sometimes unexpected and challenging places. I’m not saying there shouldn’t be support and guidance available. We’re pretty good at providing the educational equivalents of road/street signs, and people to ask help from. It’s just that sometimes it’s really good to go for a wander, to get lost.

The more we try to lock journeys down, to monitor and measure things, the more we may lose some really interesting people, because they will just not be able to engage as their profile doesn’t fit, or we may lose more people from “the system” as they go completely off grid. That in itself may be really exciting, but for me, just now, I find that a bit sad.


I wish I'd said that . . . reflections from #digifest17

You know how it is: despite how much you plan for a debate/live speaking situation, there’s always something that pops into your head on the train home that makes you think, “oh, I wish I’d said that.” Since last week’s digifest I have had several of those moments.

As I wrote about last week, I took part in the “do analytics interventions always need to be mediated by humans” debate. I was defending the motion; I tried to explain my thoughts in this post. Richard Palmer from Tribal put up a strong case taking the other view. In the end, despite me claiming a Trump-like spectacular, popular victory (many people said so), the final vote was pretty close – due mainly to the word “always”, and Richard’s pretty convincing argument that some alerts and “low level” interventions can be automated and so do not “always” need human intervention.

However, of course they do. The final intervention/action from any alert or analytics intervention has to be mediated by a human. In the context of our debate that means a student actually doing something as a direct result of that intervention. I wish I’d said that. And if students just ignore the automated alerts/interventions – what then? Are we measuring and monitoring that? And what if all the power goes off? What about alerts then? What happens when a student challenges the alert system for allowing them to fail? Oh, I wish I had said that . . .

We do already alert students in a number of ways, and we need to ensure we are having a dialogue with students so that we all understand what things are actually motivating, and keep being motivating, so that any student apps/alert systems we do produce don’t just suffer from the fitbit syndrome, where obsession doesn’t actually lead to motivation but to disengagement.

The other thing – well, it’s actually a word – that I wish I had said was “praxis”. Part of my argument was to (very quickly, and I confess somewhat superficially, as I didn’t have a huge amount of time to prepare for the debate) draw some comparisons between learning analytics and Freire’s seminal Pedagogy of the Oppressed. I did want to get the notion of praxis into the debate but on the day it didn’t quite happen. However Maha Bali picked this up over the weekend and commented on my blog:

“great title, Sheila, and bringing in Paulo Freire inside it is an additional bonus! I love where you’re going with this but would love it if you had the opportunity to take it further into more of Freire’s ideas with regards to praxis, consciousness-raising and empowerment of the oppressed. . . . What I think is interesting is the thinking of Paul Prinsloo on how to decolonize learning analytics such that learners possibly hold more power/control over their data and how it’s used. This could be a third path…”

I couldn’t agree more. I think it really is time to discuss praxis in this context. Which brings me back to the core part of my argument last week: we need to have more debate and dialogue around learning analytics and the theoretical approaches we are using to frame those dialogues.

I know this is a sweeping generalisation, please forgive me dear reader, but I do worry that emerging design models, partly driven by more fully online delivery, are defaulting to the now seemingly standard: read/watch, quiz, bit of “lite” discussion on the side of the page, badge/certificate, and repeat. They are easy to measure, to “alert-ify”. But they are not always the best educational experience.

I missed LAK this year and only saw a few tweets, so I’m sure that there is a lot of work going on at much higher levels in the learning analytics community. However, there is still the nagging feeling in the back of my brain that discussing Bayesian regression modelling is still quite dominant. I know last year at LAK there was a concerted effort to work with the learning sciences community, to bring in more learning theory. But reflecting on last week, it seems to me that behaviourism is going to become (even more) embedded in our systems, in our KPIs, without us actually realising it or having the chance to have an informed dialogue with our practising teachers and students. A post from Doug Clow back in 2011 springs to mind: is the sinister sausage machine here?

Learning analytics, at least in digifest terms, seems to be the current “future now”. There were so many sessions with it as their main theme, it was hard to avoid it. On the one hand I think this is great to see. The debate, the dialogues I have been arguing for, are being given a chance to begin. We just need to ensure that they are given enough critical space to continue. And to that end I guess I should get my “butt into action” and maybe take a bit more time to write something a bit more informed about praxis. In the meantime here’s a short interview where Richard and I try to summarise our debate.

Time for Analytics of the Oppressed? – my starter for 10 for #digifest debate


I have been asked to step into the breach, so to speak, for the “learning analytics interventions should always be mediated by a human” debate later this week at Digifest.

The structure for the debate is as follows:

The machine will argue they can use learning analytics to provide timely and effective interventions to students improving their chances of achieving better qualifications. Machines don’t forget or get sick; learning analytics is more accurate and not prejudiced; evidence for automated interventions.

The human will argue although machines can make predictions they will never be 100% accurate; only a person can factor personal circumstances; automated interventions could be demotivating; automated interventions are not ethical.

Fortunately for me, I have been given the human side of the debate. Unfortunately for the organisers, Leanne Etheridge is no longer able to attend. Leanne, I will do my best.

Preparation for the debate has started already with this blog post from Richard Palmer, aka “the opposition”. In order to get my thoughts into some kind of order for Wednesday morning’s debate, I’m going to try and outline my reactions to the provocations outlined in the post by my learned colleague.

Richard has outlined three key areas where he believes there is increased potential for data driven system interventions.

1. First of all, humans have a long history of believing that when certain things have always been done in one way, they should stay that way, far beyond the point where they need to be. . . . If you look at Luddite rebellions, we thought that it should always be a human being who stretched wool over looms and now everyone agrees that’s an outdated concept. So, deciding that something needs to be done by a human because it always has been done by a human seems, at best, misguided.

2. Secondly, people object that the technology isn’t good enough. That may, possibly, be the case right now but it is unlikely to be the case in the future. . . Technologies will improve. Learning analytics will become more advanced. The data that we hold about our students will become more predictive, the predictions we make will be better and at some point institutions will decide where their cost benefit line is and whether everything does have to be human-mediated.

3. Thirdly, how good do we actually think people are? Certainly, human beings can empathise and pick up on non-verbal or even non-data-related signals from other people, but when was the last time a computer turned up to work hungover? Or stressed or worried about something – or just didn’t turn up at all? . . . Will a computer ever be better than the perfect person? Maybe, maybe not. But, let’s face it, people aren’t perfect. . . . We worry about computers sending insensitively worded emails and inappropriate interventions but we all know human beings who are poor communicators, who are just as capable, if not more, of being insensitive.

Where to start? Well, despite us pesky humans almost falling at the first hurdle by not being able to be there in person – so unreliable! – we can pick up a challenge and a thread from where our colleagues have left off without the need for any additional programming. I don’t know what Leanne was going to say, but I really like the 2 quotes for the 2 slides she has selected. (I detect an air of confidence from only 2 slides!)

“ It is the supreme art of the teacher to awaken joy in creative expression and knowledge”  Albert Einstein

“Every student can learn, just not on the same day, or in the same way” George Evans.

Going back to Richard’s post, I believe there is a truly pressing need to challenge this apparently sensible, logical narrative. The narrative that is being spun around data and analytics is becoming an ever more complex web for us to break out of. But break out of it we must! To paraphrase Paulo Freire, it is time for some critical analytics. It is time to seriously consider the analytics of the oppressed.

Point 1 – On humans: “deciding that something needs to be done by a human because it always has been done by a human seems, at best, misguided.” I always worry when the Luddite card gets pulled into play. The negative connotations it implies negate the many, many skilled craftspeople who were actually fighting for their livelihoods, their craft. Audrey Watters explained this perfectly in her 2014 ALTC keynote, Ed-Tech’s Monsters.

“The Luddites sought to protect their livelihoods, and they demanded higher wages in the midst of economic upheaval,”

Sound familiar? It strikes me as uncannily similar to our current union campaigns for fair pay and to stamp out casualisation of academic staff contracts. But it’s ok, because the overriding managerial narrative is that data can help us rationalise, to streamline our processes. It’s been a while since Freire wrote this, but again it rings true today:

Our advanced technological society is rapidly making objects of us and subtly programming us into conformity to the logic of its system. To the degree that this happens, we are also becoming submerged in a new “Culture of Silence”.

Point 2 – On technology not being good enough: “Technologies will improve. Learning analytics will become more advanced. The data that we hold about our students will become more predictive, the predictions we make will be better and at some point institutions will decide where their cost benefit line is and whether everything does have to be human-mediated.”

Data about our students will be more predictive? Our predictions will be “better” – better at doing what? Better at showing us the things we want to see? Getting our student “customers” through their “student success journeys” without any difficult interrogations, without the right to fail? Or actually stopping someone starting/continuing their educational journey because their data isn’t the “right fit”?

The promise of increasing personalisation fits into an overwhelming narrative from ed tech companies that is permeating through governments, funding bodies and university leaders. Personalisation is the future of education. Personalised alerts are the natural progression to student success. But are they just another form of manipulation? Assuaging the seemingly endless collective need to measure, monitor, fitbit-itize the educational experience? The words of Freire again ring true:

One of the methods of manipulation is to inoculate individuals with the bourgeois appetite for personal success. This manipulation is sometimes carried out directly by the elites and sometimes indirectly, through populist leaders.

Point 3 – Just how good are people anyway? We don’t turn up, we get ill and we are biased. Well, all of those apply to most systems I’ve ever interacted with. Our own biases are intrinsically linked to the systems we develop, to the interpretations of data we choose to accept. As Freire said:

One cannot conceive of objectivity without subjectivity

I cannot agree that the downside of machine interventions is “no worse than humans doing it badly”. Surely we need to be engaging critically to ensure that no human or machine is doing anything “badly”.

The “system” should not just be replicating current bad practice. Data should provide us with new ways to encourage a richer dialogue about education and knowledge. Learning analytics can’t just be a way to develop alerting and intervention systems that provide an illusion of understanding, that acquiesce to not particularly well thought out, government driven monitoring processes such as the TEF.

In these days of alternative facts and distrust of expert knowledge, human intervention is more crucial than ever. Human intervention is not just an ethical issue, it’s a moral imperative. We need to care, our students need to care, our society needs to care. I’ll end now with the words of the Cassandra of EdTech, Audrey Watters:

In order to automate education, must we see knowledge in a certain way, as certain: atomistic, programmable, deliverable, hierarchical, fixed, measurable, non-negotiable? In order to automate that knowledge, what happens to care?

Clawing my way up through the trough of disillusionment with learning analytics

Gartner hype cycle diagram

(image: Jeremykemp at English Wikipedia [GFDL (http://www.gnu.org/copyleft/fdl.html) or CC BY-SA 3.0 (http://creativecommons.org/licenses/by-sa/3.0)], via Wikimedia Commons)

Warning – this is a bit of a moan post.

Last week I attended the Jisc Learning Analytics Network meeting. It was a really good day: lots of people there, lots of good sharing, moaning, asking where next-ing. One of the reasons I find these events useful is that they help focus my mind and give me a sense of relief that some of the challenges I face are similar, if not exactly the same, as those facing many others in the sector.

In terms of learning analytics, my experiences to date have been metaphor-tastic: (ever decreasing) circles, slopes, dead ends, stop-starts . . . So I feel that it’s appropriate to reflect on my journey via the well-trodden Gartner hype cycle.

I’m the first to admit I enjoyed being swept up to the peak of inflated expectations. Exploring the potential of data and learning analytics was probably the last piece of innovation work I was involved in when I worked with Cetis. I really enjoyed trying to figure out the practical applications and meanings for mainstream learning and teaching of the swirly twirly graphs at early LAK conferences. It was great to support the emerging UK community via early SoLAR meetings. I learnt a huge amount being involved in the Cetis Analytics Series. I always think I brought a healthy degree of scepticism to some of the hype of learning analytics, but I could (and still can) see the benefits of extracting, exploring and understanding data around learning and teaching.

From the giddy heights of the peak of inflated expectations, I knew when I moved to a “proper job” within a university I would have a bit of a slide down the slope to the trough of disillusionment. It’s getting out of the trough that I’m finding real difficulty with. Changes in senior management have meant going through a bit of a treadmill in terms of gaining institutional support and understanding. That’s before even accessing any data.

The Jisc Effective Analytics Programme has been a bit of a ray of light and hope for me. Towards the end of last year we took part in the Discovery phase of the programme. This involved a consultancy exercise, onsite for 3 days, with a cross-section of institutional stakeholders to assess our “readiness” for analytics. At the end of the exercise we got a report with our readiness matrix and some recommendations. You can view our report here.

At the meeting last week a number of institutions who have gone through the Discovery phase took part in a panel discussion about the experience. One common thread was the reassurance that the exercise gave to everyone in terms of being “on the right track” with things. I was pleasantly surprised that we got such a good score in terms of our cultural readiness. The validation of having an external report from a nationally recognised agency such as Jisc is also incredibly useful for those of us on the ground to remind/cajole (hit people over the head – oh wait, that’s only in my dreams) with, in terms of what we should be doing next.

I think one of the main problems with analytics is finding a starting point. Going through the Discovery phase does give a number of starting points. My frustration just now is that my institution is going through a major rethink of our overall data architecture. So on the one hand I think “hurrah”, because that does need to be done. On the other, I feel that I am almost back to square one, as in terms of “business needs” anything to do with learning and teaching seems to fall off the list of things that need to be done pretty quickly. It’s difficult to juggle priorities: what is more important – getting our admissions process working more efficiently, or developing ways to understand what happens when students are engaging (or not) with modules and the rest of the “stuff” that happens at university? Or updating our student record system, or updating our finance systems?

Amidst all this it was good to get a day out to find out what others are up to in the sector. Thanks Jisc for providing these networking events – they really are so useful for the sector, and long may they continue. UEL, who hosted the event, have been doing some great work over the past four years around learning analytics, which has emerged from their original BI work with Jisc. The work they have been doing around module attendance (via their swipe card system and VLE data) and performance is something I hope we can do here at GCU sometime soon.

In the morning we got updates from 3 mini projects that have just been funded, starting with the University of Greenwich and their investigations into module survey results and learning outcomes. The team explain more in this blog post. I was also very interested in the student workload model mini project being developed at the OU. You can read more about it here.

The other mini project, from the University of Edinburgh, was interesting too, but in a different way. It is more what I would term a pure LA research project, with lots of text data mining and regression modelling of (MOOC) discussion forums. Part of me is fascinated by all of this “clever stuff”, but equally part of me just thinks that I will never be able to use any of that in my day job. We don’t have huge discussion forums; in fact we are seeing (and in many ways encouraging) less use of them (even with our limited data views I know that) and more use of wikis and blogs for reflection and discussion. Maybe these techniques will work in these areas too. I hope so, but sometimes thinking about that really does make my head hurt.

I hope that we can start moving on our pilot work around learning analytics soon. ’Til then, I will hang on in there and continue my slow climb up the slope, and maybe one day arrive at the plateau.

Looking in the mirror to discover our institutional capability for learning analytics

picture of a mirror

(image CC Share Alike https://commons.wikimedia.org/wiki/File:Mirror_fretwork_english_looking-glass.png)

It’s been a busy week here at GCU Blended Learning Towers. We’ve just finished the onsite part of the discovery phase of the Jisc Effective Analytics Programme. So this week has been a flurry of workshops and interviews led by the consulting team of Andy Ramsden and Steve Bailey. Although Andy and Steve work for Blackboard, the discovery phase is “platform agnostic” and is as much about culture and people as technology – the evaluation rubric used had more about culture and people than technology. Having a team who really understand the UK HE sector was very reassuring. Sadly, it’s not often that you can say that about vendors and HE.

I think GCU is the second institution to go through the discovery process; I know there are quite a few others who will be doing the same over the next six months. The process is pretty straightforward and outlined in the diagram below.

discovery process diagram

A core team from the institution has two online meetings with the consulting team, and relevant institutional policy/strategy documentation is reviewed before the onsite visit. At the end of the onsite visit an overall recommendation is shared with early findings, before a final report is given to the institution.

I was pleased (probably slightly relieved too) that we got a “ready with recommendations”.  That’s what we were hoping for.

Although we are still awaiting the final report, the process has already been incredibly useful. It has allowed us to bring together some of our key stakeholders, (re)start conversations about the potential and importance of learning analytics, and highlight the need to develop our infrastructure, people and processes to allow us to use our data more effectively. The final report will also be really helpful in terms of helping us focus our next steps.

Andy described the process as a bit like “holding a mirror to ourselves”, which is pretty accurate. The process hasn’t brought up issues we weren’t aware of. We know our underlying IT infrastructure needs “sorting”, and we are starting to do that. What it has done is illustrate some potential areas to help us focus our next steps. In a sense it has helped us not so much to see the forest from the trees, but rather to spot some twinkling lights and pathways through the forest.

All dashboards but no (meaningful) data – more on our #learninganalytics journey

Back in March I blogged about the start of our journey here at GCU into learning analytics. We had just produced our annual blended learning report, which had some headline stats around learning and teaching activity. As I said in the post, the figures we are getting are not that accurate, and extracting and making sense of the data from numerous sources has been no mean feat for those involved. Since then we have been making good progress in moving things forward internally, so this post is really an update on where we are.

When I was working on the Cetis Analytics Series, I remember Jean Mutton, University of Derby, telling me about the power of “data driven conversations”. I now have a far greater understanding of exactly what she meant by that. Since instigating initial discussions about the where, what, why, when, who and how of our data, we’ve been having some really productive discussions, mainly with our IS department and most importantly with one of our Business Analysts, Ken Fraser, who is now my new BFF 🙂

Ken has totally risen to our data challenge and has been exploring our data sets and sprinkling a bit of BI magic over things. Like many institutions, we populate our VLE automagically via our student record system. It is a key source of data, and really our primary data source for student information. However, actual student activity is recorded in other systems, primarily our VLE. We haven’t quite cracked the automagic feedback of assessments from the VLE back into our SRS – but again I don’t think we’re alone there. So any meaningful analytics process(es) needs to draw on both of these data sources (as well as a number of other ones, but that’s for another post).

We also take a snapshot of our VLE activity every night, which Ken has been churning into a datastore (which has been quickly filling up) to see what he can extract. Using Oracle BI systems he has been able to develop a number of dashboards far quicker than I expected. But, and there’s always a but, they are next to meaningless, as the data we are extracting in our snapshot is pretty meaningless: e.g. we can get the total number of users, but it looks like the total number of users we’ve had on the system since it was installed. It is also not a real time process. That’s not a huge issue just now, but we know we have the tools to allow real time reporting, and ideally that’s what we are aiming for.
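For what it’s worth, the gap between the number we get and the number we want looks roughly like this. A minimal pandas sketch – the file name and the “user_id” and “last_access” columns are hypothetical stand-ins for whatever is actually in our snapshot tables:

```python
# Sketch of the difference between the number the snapshot gives us (every
# account since the system was installed) and a more meaningful number
# (users active in a recent window). Column names are hypothetical.
import pandas as pd

snapshot = pd.read_csv("vle_snapshot.csv", parse_dates=["last_access"])

# What the dashboard currently shows: everyone who has ever had an account.
total_ever = snapshot["user_id"].nunique()

# What we would actually like: distinct users active in, say, the last 30 days.
cutoff = snapshot["last_access"].max() - pd.Timedelta(days=30)
active_recent = snapshot.loc[snapshot["last_access"] >= cutoff, "user_id"].nunique()

print(f"users ever: {total_ever}, active in the last 30 days: {active_recent}")
```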

So we are now exploring the tables in our snapshot from the VLE to see if we can get a more useful data extraction, and thinking about how/if we can normalise the data and make more robust connections to/from our primary data source, the student record system – something like the sketch below. This is also raising a number of wider issues about our data/information management processes. The cycle of data driven conversations is well and truly in motion.
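Here is the kind of join I have in mind: VLE activity keyed back to the student record system so activity can be read per programme, with the failed matches flagging exactly the normalisation problems we keep hitting. All file and column names below are illustrative assumptions, not our real schema:

```python
# Sketch of joining nightly VLE activity back to the student record system,
# so activity can be read per programme. All names are illustrative.
import pandas as pd

activity = pd.read_csv("vle_activity.csv")   # hypothetical: user_id, module_code, clicks
students = pd.read_csv("srs_students.csv")   # hypothetical: user_id, programme, year

joined = activity.merge(students, on="user_id", how="left")

# Rows that fail the join flag the normalisation problems -- mismatched or
# stale IDs between the VLE and the student record system.
orphans = joined[joined["programme"].isna()]
print(f"{len(orphans)} activity rows with no matching student record")

# A per-programme engagement summary once the keys line up.
print(joined.groupby("programme")["clicks"].sum())
```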

In terms of learning analytics we are really at the exploratory meso stage just now. We are also getting lots of support from Bb too, which is very encouraging. It may well be that using their system will be the most useful and cost effective solution in the long run in terms of learning analytics. However, I don’t think we can make that decision until we really understand ourselves what data we have access to, what we can do with it given our resources, and what we want to do with it. Then I can get back to ranting about big data and thinking about the really important stuff, like just what is learning analytics anyway?
