All dashboards but no (meaningful) data – more on our #learninganalytics journey

Back in March I blogged about the start of our journey here at GCU into learning analytics. We had just produced our annual blended learning report which had some headline stats around learning and teaching activity. As I said in the post, the figures we are getting are not that accurate and extracting and making sense of the data from numerous sources has been no mean feat for those involved.  Since then we have been making good progress in moving things forward internally so this post is really an update on where we are.

When I was working on the Cetis Analytics Series, I remember Jean Mutton, University of Derby, telling me about the power of “data driven conversations”. I now have a far greater understanding of exactly what she meant by that. Since instigating initial discussions about the where, what, why, when, who and how of our data we’ve been having some really productive discussions, mainly with our IS department and most importantly with one of our Business Analysts, Ken Fraser, who is now my new BFF 🙂

Ken has totally risen to our data challenge and has been exploring our data sets and sprinkling a bit of BI magic over things. Like many institutions, we populate our VLE automagically via our student record system. It is a key source of data and really our primary data source for student information. However actual student activity is recorded in other systems, primarily our VLE. We haven’t quite cracked the automagic feedback of assessments from the VLE back into our SRS – but again I don’t think we’re alone there. So any meaningful analytics process(es) needs to draw on both of these data sources (as well as a number of other ones but that’s for another post).

We also take a snapshot of our VLE activity every night, which Ken has been churning into a datastore (one that is quickly filling up) to see what he can extract. Using Oracle BI systems he has been able to develop a number of dashboards far quicker than I expected. But, and there’s always a but, they are next to meaningless because the data we are extracting in our snapshot is pretty meaningless, e.g. we can get the total number of users, but it looks like the total number of users we’ve had on the system since it was installed. It is also not a real time process. That’s not a huge issue just now, but we know we have the tools to allow real time reporting and ideally that’s what we are aiming for.
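To make the “total users” problem concrete, here is a minimal sketch of the kind of distinction we are after: counting accounts that have actually been active recently, rather than every account ever created on the system. The field names (`user_id`, `last_login`) and the 30-day window are purely hypothetical illustrations, not the actual snapshot schema.

```python
from datetime import datetime, timedelta

# Hypothetical snapshot rows: one record per user account, with the
# last recorded login taken from the nightly VLE extract.
snapshot = [
    {"user_id": "s001", "last_login": "2014-05-20"},
    {"user_id": "s002", "last_login": "2011-09-01"},  # long-dormant account
    {"user_id": "s003", "last_login": "2014-05-28"},
]

def active_users(rows, as_of, window_days=30):
    """Return users who logged in within the last `window_days`,
    rather than every account since the system was installed."""
    cutoff = as_of - timedelta(days=window_days)
    return [r["user_id"] for r in rows
            if datetime.strptime(r["last_login"], "%Y-%m-%d") >= cutoff]

total = len(snapshot)  # cumulative headline figure: every account ever
active = active_users(snapshot, datetime(2014, 5, 30))  # recently active only
```

The point of the sketch is simply that the same snapshot can yield a near-meaningless cumulative figure or a more useful activity figure, depending on which fields the extraction actually keeps.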

So we are now exploring the tables in our snapshot from the VLE to see if we can get a more useful data extraction, and thinking about how/if we can normalise the data and make more robust connections to/from our primary data source, the student record system. This is also raising a number of wider issues about our data/information management processes. The cycle of data driven conversations is well and truly in motion.
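The kind of connection we mean can be sketched as a simple join between VLE activity and student records on a shared ID. Again, the structures and field names (`student_id`, `programme`, `logins`) are hypothetical stand-ins for the real tables; the useful part is flagging activity rows that have no match in the student record system, since orphan records are exactly the data-quality issue these conversations keep surfacing.

```python
# Hypothetical student record system (SRS) data, keyed by student ID.
srs = {
    "s001": {"name": "Aisha", "programme": "BSc Nursing"},
    "s002": {"name": "Tom", "programme": "BA Business"},
}

# Hypothetical activity rows from the nightly VLE snapshot.
vle_activity = [
    {"student_id": "s001", "logins": 14},
    {"student_id": "s002", "logins": 3},
    {"student_id": "s999", "logins": 7},  # no SRS match: orphan record
]

def join_activity(activity, records):
    """Left-join VLE activity onto SRS records; collect orphans so
    data quality problems surface instead of silently disappearing."""
    joined, orphans = [], []
    for row in activity:
        rec = records.get(row["student_id"])
        if rec is None:
            orphans.append(row["student_id"])
        else:
            joined.append({**rec, **row})
    return joined, orphans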

In terms of learning analytics we are really at the exploratory meso stage just now. We are also getting lots of support from Bb, which is very encouraging. It may well be that using their system will be the most useful and cost effective solution in the long run in terms of learning analytics. However I don’t think we can make that decision until we really understand ourselves what data we have access to, what we can do with it given our resources, and what we want to do with it. Then I can get back to ranting about big data and thinking about the really important stuff like just what is learning analytics anyway?

Are we just all data end points?

I’ve had two very contrasting data experiences this week which are both clarifying and confusing my views on data and learning analytics. Firstly there was the LACE (learning analytics community exchange) project webinar titled: Big Picture of Learning Analytics Interoperability. Brian Kelly has written up the event and his blog post contains a link to the recording.

If you think about it, interoperability is key to any kind of data and analytical work. However, as the webinar explained, learning analytics has the added complication of the numerous levels and models it can work in and across. The project team are very keen to engage stakeholders around concepts, but I think they are suffering from the classic chicken and egg scenario just now. They want to engage with the community, but some of the abstract terms do make it difficult for the community (and I include myself here) to engage with them, so they need real examples. However, I’m not sure right now how I can engage with these large concepts. In my next post, where I’ll update on the work we’re doing here at GCU, it might become clearer. I am very keen to be part of/track this community, so I guess I need to try harder to engage with the higher level concepts.

Anyway, as you’ll know, dear reader, I have been experimenting with visual note taking, so I used the webinar yesterday to do just that. It’s an interesting experience as it does make you listen in a different way. Asking questions is also kind of hard when you are trying to capture the wider conversation. This is my naive view of the webinar.

Visual notes from LACE webinar

In contrast, the University of Edinburgh’s “Digital Scholarship Day of Ideas : Data” had a line up of speakers looking at data in quite a different way.  Luckily for me, and others, the event was live streamed and the recording will be available over the next few days on the website.  Also Nicola Osborne was in attendance and live blogging – well worth a read whilst waiting for the videos to be uploaded.

A common theme for most of the speakers was exploration of the assumption that data is neutral. Being a digital humanities conference that’s hardly surprising, but there were key messages coming through that I wish every wannabe and self proclaimed “big data guru” could be exposed to and take heed of. Data isn’t neutral, and just because you put “big” in front of it doesn’t change that. It is always filtered, and not always in a good way. I loved how Annette Markham described how advertisers can use data to flatten and equalise human experience, and her point that not all human experiences can be reduced to data end points, however much advertisers selling an increasingly homogenised, consumerist view of the world want them to be.

This resonated in particular with me as I continue to develop my thoughts around learning analytics. I don’t want to (or believe that you can) reduce learning to data end points with a set of algorithms which can “fix” things, i.e. learner behaviour. But at the same time I do believe that we can make more use of the data we do collect to help us understand what is going on, what works and what doesn’t, and allow us to ask more questions around our learning environments. And by that I mean a holistic view of the learning environment that the individual develops themselves, as much as the physical and digital environments they find themselves in. I don’t want a homogenised education system, but at the same time I want to believe that using data more effectively could allow our heterogeneity to flourish. Or am I just kidding myself? I think I need to have a nice cup of tea and think about this more. In the meantime I’d love to hear any views you may have.

 

Exploring the digital university – next steps digital university ecosystems?

Regular readers of this (and my previous) blog will know that exploring the notion of just what a digital university is, c/should be, is an ongoing interest of mine. Over the past couple of years my colleague Bill Johnston and I have shared our thinking around the development of a model to explore notions of the digital university. The original series of blog posts got very high viewing figures and generated quite a bit of discussion via comments. We’ve developed the posts into a number of conference presentations and papers. But the most exciting and rewarding development was when Keith Smyth from Edinburgh Napier University contacted us about the posts in relation to their strategic thinking and development around their digital future, which in turn will help them to figure out what their vision of a digital university will look like.

For the past year Bill and I have been critical friends to Napier’s Digital Futures Working Group. This cross institutional group was tasked with reviewing current practice and areas of activity relating to digital engagement, innovation and digital skills development, and with identifying short term initiatives to build on current practice as well as proposing possible future developments and opportunities. These will be shared by Napier over the coming months. Being part of the Napier initiative has encouraged me to try and develop a similar approach here at GCU.  I’m delighted that we have got senior management backing and later this month we’ll be running a one day consultation event here.

Earlier this week Bill, Keith and myself had a catch up where we spent quite a bit of time reflecting on “our journey” so far.  Partly this was because we have another couple of conference paper submissions we want to prepare.  Also as we now have a very rich set of findings from the Napier experience we needed to think about  our next steps. What can we at GCU learn from the Napier consultation experience? What are the next steps for both institutions? What common issues will emerge? What common solutions/decision points will emerge?  What are the best ways to share our findings internally and externally?

As we reflected on where we started, we (well, to be precise, Bill) began to sketch out a kind of process map from where we started (which was a number of lengthy conversations in the staff kitchen between Bill and me) to where we might be this time next year, when hopefully we will have a set of actions from GCU.

The diagram below is an attempt to replicate Bill’s diagram and outline the phases we have gone through so far. Starting with conversations, which evolved into a series of blog posts, which evolved into conference papers/presentations; the blog posts were then spotted by Keith and used as a basis for the development of their Digital Futures Working Group, which is now being used as an exemplar for work beginning here at GCU.

Stages of the Digital University Conversation

I am more and more convinced that one of the key distinguishing features of a digital university is the ability of staff and students to have a commonly shared articulation and experience of the digitally enabled processes they engage with on a daily basis, and equally a shared understanding of what would be missing if these processes weren’t digitally enabled. You know, the digital day of a student, lecturer or admin person type of thing, but not visions written by “futurologists”, ones written by our staff and students. Alongside this we could have the daily life of the physical spaces that we are using. So for example we could have overlays of buildings showing not only the footfall of people but also where and when they were accessing our wifi networks etc.

Now, I know we can/could do this already (for example we already show access/availability of computers in our labs via our website) and/or make pretty good educated guesses about what is happening in general terms. However it is becoming easier to get more data and, more importantly, visualise it in ways that encourage questions around “actionable insights”, not only for our digital spaces and digital infrastructure but our physical ones too. Knowing and sharing the institutional digital footprint is again central to the notion of a digital university.

Alongside this, by using learning analytics techniques can we start to see any correlations around where and why students are online? Can we understand and learn from patterns around access and engagement with learning activities? Are students using our uni provided spaces and wifi to do the majority of their uni work, or to download “stuff” to listen to/watch/read on the bus? Are they just accessing specialist software/kit? Does it matter if they all have Facebook/YouTube/WhatsApp open all the time if we are confident (through our enhanced data driven insights) that they are successfully engaging with our programmes and that they have the digital literacy skills to connect and collaborate with the right people in the right spaces (both on and offline)?

As we were talking one word kept coming up. It’s maybe a bit old fashioned, and I know they were all the rage a few years ago, particularly in the repository sphere, but we did think that mapping the ecosystem of a digital university could be the next logical step. The ecosystem wouldn’t just be about the technology, infrastructure and data but the people and processes too. Via the SoLAR discussion list I discovered the Critical Questions for Big Data article by danah boyd and Kate Crawford. As part of their conclusions they write:

“Manovich (2011) writes of three classes of people in the realm of Big Data: ‘those who create data (both consciously and by leaving digital footprints), those who have the means to collect it, and those who have expertise to analyze it’. We know that the last group is the smallest, and the most privileged: they are also the ones who get to determine the rules about how Big Data will be used, and who gets to participate.”

In terms of a digital university, I think we need to be doing our utmost to ensure we are extending membership of that third group, but just now there is a need to raise awareness among everyone about how and where their data is being collected, and to give them a voice in terms of what they think is the best use of it.

What a digital university will actually look like will probably not differ that much from what a university looks like today; what will distinguish it is what happens within it, and how everyone in that university interacts and shares through a myriad of digitally enabled processes.
