Back in March I blogged about the start of our journey here at GCU into learning analytics. We had just produced our annual blended learning report, which included some headline stats around learning and teaching activity. As I said in that post, the figures we are getting are not that accurate, and extracting and making sense of the data from numerous sources has been no mean feat for those involved. Since then we have been making good progress internally, so this post is really an update on where we are.
When I was working on the Cetis Analytics Series, I remember Jean Mutton, University of Derby, telling me about the power of “data driven conversations”. I now have a far greater understanding of exactly what she meant by that. Since instigating initial discussions about the where, what, why, when, who and how of our data, we’ve been having some really productive discussions, mainly with our IS department and, most importantly, with one of our Business Analysts, Ken Fraser, who is now my new BFF 🙂
Ken has totally risen to our data challenge, exploring our data sets and sprinkling a bit of BI magic over things. Like many institutions, we populate our VLE automagically from our student record system (SRS). The SRS is a key source of data and really our primary data source for student information. However, actual student activity is recorded elsewhere, primarily in the VLE itself. We haven’t quite cracked the automagic feedback of assessment results from the VLE back into our SRS, but I don’t think we’re alone there. So any meaningful analytics process needs to draw on both of these data sources (as well as a number of others, but that’s for another post).
We also take a snapshot of our VLE activity every night, which Ken has been loading into a datastore (one that is quickly filling up) to see what he can extract. Using Oracle BI tools, he has been able to develop a number of dashboards far quicker than I expected. But, and there’s always a but, they are next to meaningless, because the data we are extracting in our snapshot is too coarse. For example, we can get the total number of users, but that figure appears to count every user the system has ever had since it was installed, not the users who are currently active. The snapshot is also not a real-time process. That’s not a huge issue just now, but we know we have the tools to support real-time reporting, and ideally that’s what we are aiming for.
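To make the cumulative-count problem concrete, here is a minimal sketch of the kind of query distinction we mean. The table and column names are invented for illustration (our actual snapshot schema is Blackboard's, not this): counting every row gives the all-time total, while filtering on a recent-activity date gives a figure that actually says something about current use.

```python
import sqlite3
from datetime import date, timedelta

# Toy stand-in for a nightly VLE snapshot table.
# Table and column names here are hypothetical, for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (user_id TEXT, last_login TEXT)")

today = date(2013, 9, 1)
rows = [
    ("u1", str(today - timedelta(days=2))),    # active recently
    ("u2", str(today - timedelta(days=10))),   # active recently
    ("u3", str(today - timedelta(days=400))),  # dormant for over a year
]
conn.executemany("INSERT INTO users VALUES (?, ?)", rows)

# The "headline" figure: everyone who has ever had an account.
total = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]

# A more meaningful figure: users seen in the last 30 days.
# ISO date strings compare correctly as text, so a plain >= works.
cutoff = str(today - timedelta(days=30))
active = conn.execute(
    "SELECT COUNT(*) FROM users WHERE last_login >= ?", (cutoff,)
).fetchone()[0]

print(total, active)
```

Running this prints `3 2`: three accounts ever created, but only two active this month, which is exactly the gap our current dashboards fall into.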
So we are now exploring the tables in our VLE snapshot to see if we can get a more useful data extraction, thinking about how (and whether) we can normalise the data, and making more robust connections to and from our primary data source, the student record system. This is also raising a number of wider issues about our data and information management processes. The cycle of data driven conversations is well and truly in motion.
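The "more robust connections" part boils down to joining VLE activity back to SRS records on a shared student identifier, and seeing what falls through the cracks. A rough sketch, with entirely hypothetical record shapes and IDs:

```python
# Hypothetical SRS enrolment records, keyed on a shared student ID.
srs_records = {
    "S001": {"name": "Student A", "programme": "BSc Computing"},
    "S002": {"name": "Student B", "programme": "BA Business"},
}

# Hypothetical per-student login counts from the VLE snapshot.
# Note S003 has activity but no matching SRS record.
vle_activity = {"S001": 42, "S003": 7}

# Join SRS to VLE: every enrolled student gets an activity figure,
# defaulting to 0 if the snapshot has no row for them.
joined = [
    {
        "student_id": sid,
        "programme": record["programme"],
        "logins": vle_activity.get(sid, 0),
    }
    for sid, record in srs_records.items()
]

# Activity with no SRS match flags a data-quality issue to investigate.
unmatched = [sid for sid in vle_activity if sid not in srs_records]

print(joined)
print(unmatched)
```

The `unmatched` list (here `['S003']`) is the interesting output: orphaned activity records are exactly the kind of thing the data-driven conversations keep surfacing.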
In terms of learning analytics, we are really at the exploratory meso stage just now. We are also getting lots of support from Bb, which is very encouraging. It may well be that using their system will be the most useful and cost-effective solution in the long run. However, I don’t think we can make that decision until we really understand for ourselves what data we have access to, what we can do with it given our resources, and what we want to do with it. Then I can get back to ranting about big data and thinking about the really important stuff, like just what is learning analytics anyway?