Where Sheila's been this week – co-designing next gen learning environments with Jisc

You may or may not be aware of Jisc’s current co-design consultation exercise with the HE/FE sector. The co-design approach is a way of trying to ensure that Jisc developments are supportive and representative of the needs of the sector. Building on feedback from the first iteration of the process, this time around there has been a concerted effort to get wider sectoral involvement through various methods, including social media, blog posts, tweet chats and voting.

Yesterday, along with about 30 others, I attended a face-to-face meeting to explore, review and discuss the results of the process and the feedback on the six “big” challenges identified by Jisc.

  1. What does the imminent arrival of the intelligent campus mean for universities and colleges?
  2. What should the next generation of digital learning environments do?
  3. What should a next-generation research environment look like?
  4. Which skills do people need to prepare for research practice now and in the future?
  5. What would truly digital apprenticeships look like?
  6. How can we use data to improve teaching and learning?

You can see the results of the voting here too.

The voting process does need some refinement, as Andy McGregor was quick to point out, and we really used it and the comments as a guide for the discussions. Personally I found the voting process a bit cumbersome – having to fill out a Google doc for each one. I can see why Jisc wanted to get all that information, but I would have preferred something a bit more instant, with the option of giving more detailed information. That might have encouraged me to cast more than one vote . . .

I joined the next generation learning environments discussion. I had been quite taken with the pop-up VLE notion, but as the discussion evolved it became clearer to me that the idea articulated so well by Simon Thomson (Leeds Beckett University) – connecting institutional and user-owned tech – was actually a much stronger proposition, and in a way the pop-up VLE would fall out of that.

The concept really builds on the way that IFTTT (If This Then That) works, but with a focus on connecting institutional systems to personal ones. Please, please read Simon’s post as it explains the rationale so clearly. I use IFTTT and love the simplicity of being able to connect my various online spaces and tools, and extending that into institutional systems seems almost a no-brainer.
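To make that concrete, here is a rough sketch of the trigger/action pattern that IFTTT-style recipes boil down to. Everything here is hypothetical – the feed URL, the endpoint and the field names are all made up for illustration, since neither Simon’s post nor the discussion specified an implementation:

```python
import time
import requests

# Hypothetical URLs: a personal space to watch and an
# institutional endpoint to push into.
PERSONAL_FEED = "https://example.com/my-notes/recent.json"
VLE_FORUM_API = "https://vle.example.ac.uk/api/forums/123/posts"

def trigger():
    """'If this': check a personal tool for new items."""
    resp = requests.get(PERSONAL_FEED, timeout=10)
    resp.raise_for_status()
    return resp.json().get("new_items", [])

def action(item):
    """'Then that': push an item into an institutional system."""
    requests.post(VLE_FORUM_API,
                  json={"title": item["title"], "body": item["body"]},
                  timeout=10).raise_for_status()

# A recipe is just a trigger paired with an action, polled periodically.
while True:
    for item in trigger():
        action(item)
    time.sleep(300)  # check every five minutes
```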

We talked about space a lot in our discussion – personal space, institutional space and so on (Dave White has a good post on spaces which relates to this). For both staff and students it can be quite complex to manage, interact in and understand these spaces.

We (teachers, support staff, IT, the institution) are often a bit obsessed with controlling spaces. We do need to ensure that safety and duty of care issues are dealt with, but activity around learning doesn’t always need to take place in our spaces, e.g. the VLE. Equally, we (staff) shouldn’t feel that we have to be in all the spaces where students may be talking about learning. If students want to discuss their group activity on Snapchat, WhatsApp or Facebook, then let them. They can manage their interaction in those spaces. What we need to be clear on is the learning activity, the key interactions, the expectations of outputs, and in which spaces the learning activities/outputs need to be. The more connected approach advocated by Simon could allow greater ease of connection between spaces for both staff and students.

Providing this type of architecture (basically building and sharing more open APIs) is not about replacing a VLE, portfolio system etc, but about allowing greater choice around engagement in, and sharing of, learning activity. If I like writing in Evernote (as I do), why can’t I just post something directly into a discussion forum in our VLE? Similarly, if our students have access to OneNote and are using it (as ours do), why can’t they choose to share their work directly into the VLE? Why can’t I have my module reading lists easily saved into a Google doc?
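On the institutional side, “more open APIs” could be as simple as a small REST endpoint that external tools are allowed to post into. A deliberately minimal sketch, using Flask – the route, fields and storage are all made up for illustration:

```python
# A minimal sketch of the kind of open endpoint an institution could
# expose so that tools like Evernote or OneNote could post straight
# into a VLE discussion forum. All names here are hypothetical.
from flask import Flask, request, jsonify

app = Flask(__name__)
forum_posts = []  # stand-in for the VLE's own storage

@app.route("/api/forums/<forum_id>/posts", methods=["POST"])
def create_post(forum_id):
    payload = request.get_json()
    post = {
        "forum": forum_id,
        "author": payload["author"],
        "body": payload["body"],
        "source": payload.get("source", "unknown"),  # e.g. "evernote"
    }
    forum_posts.append(post)
    return jsonify(post), 201

if __name__ == "__main__":
    app.run()
```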

This isn’t trying to replace integrations such as LTI, or building blocks that bring external systems/products into institutional ones. It’s much more about personalisation and user choice around notifications, connections and sharing into the systems that you (need to) use. It’s lightweight – not recreating any wheels, just allowing more choice.

So at a university level you could have a set of basic connections (recipes) illustrating how students, staff and indeed the wider community could interact with institutionally provided systems, and then staff/students decide which ones (if any) they want to use, or create their own – a sketch of what such a starter set might look like follows below. Ultimately it’s all about user choice: if you don’t want to connect that way, then you don’t have to.
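As a sketch of what such a published recipe set might look like – all of these recipes are invented for illustration:

```python
# An entirely hypothetical starter set of recipes an institution
# might publish; each pairs a personal trigger with an institutional
# action, and users opt in to the ones they want (or none at all).
STARTER_RECIPES = [
    {"name": "Evernote note -> VLE discussion post",
     "trigger": "new note tagged 'vle' in Evernote",
     "action": "create a post in the module discussion forum"},
    {"name": "VLE announcement -> personal email digest",
     "trigger": "new announcement in any enrolled module",
     "action": "append to a daily digest email"},
    {"name": "Reading list update -> Google doc",
     "trigger": "module reading list changed",
     "action": "sync the list to a shared Google doc"},
]

def enabled_recipes(user_choices):
    """Users decide which (if any) recipes run for them."""
    return [r for r in STARTER_RECIPES if r["name"] in user_choices]
```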

As well as helping to focus on actual learning activity, I would hope that this approach would help institutions to think about their core institutional provision, and the “stuff that’s out there that we are all using” – aka BYOD. It would also hopefully allow for greater ease of experimentation, without having to get system admin support to try something out in the VLE.

I would hope this would also help extend support for, and understanding of, the need for non-monolithic systems, and encourage edtech vendors to build more flexible interaction/integration points.

Anyway, hopefully there will be more soon from Jisc, and Simon actually has some funding to try and build a small prototype based on this idea. Jisc will also be sharing the next steps from all the ideas over the coming weeks. Hopefully this idea is simple and agile enough to get into the Jisc R&D pipeline.

[Screenshot: Jisc R&D Pipeline]

What Sheila's seen this week – #turnitofftuesday, 365 portals and a blended learning video

Like many other institutions, we were hit by what we are calling #turnitofftuesday when Turnitin went down for a couple of hours on Tuesday afternoon. As ever, this caused a lot of stress for some of our students who were trying to submit assignments. As we had no notification from Turnitin about any service disruption, we didn’t know about it until students contacted the help desk.

For us, this highlights the need for more transparency, and for guidelines for staff and students in this type of situation. Sometimes I think we need a large “don’t panic” button for both staff and students, as there are a number of things we can do to quickly mitigate this type of situation. It’s also important to remember that for the vast majority of the time the service has been, and is, working – it’s just sod’s law that outages happen at peak times. That said, the lack of information from Turnitin has been disappointing to say the least. I know UCISA and HELF are “on the case” on behalf of the sector. Developing guidelines around EMA (electronic management of assessment) is on our Blended Learning roadmap for this year, but we may need to move it up the agenda.

The tension between cloud-hosted and internally hosted services is perennial, and at GCU we do rely on a number of hosted services related to learning and teaching, not least for our VLE. We are also embarking on an ambitious portal development programme called “Portals for All”, which is based on Office 365 and again relies on cloud-based hosting. We will need to ensure that we have transparent guidelines in place for any service disruptions there too. Over this week I’ve been part of a number of meetings and discussions around functionality, data sources, time and culture changes (you know, all the usual fun stuff). Our IS team and the external contractor have a pretty ambitious development schedule, but we should have a new student portal available by August. In the meantime, if you have any experience of using Office 365 based portals, I’d love to hear about it.

Since joining GCU, one of the things I’ve been trying to do is to identify and map all the different systems we use onto core learning and teaching functionality. Unsurprisingly, we already have a lot of duplication of services/functionality, and the Office 365 platform offers yet more. So it is going to be really important for us to work with our IS colleagues to ensure that students and staff have a clear understanding of our core provision and support. Getting the balance between trusted, reliable services and the flexibility to experiment is going to be crucial. As part of this I’ve developed a simplified model for blended learning which highlights some of the practices and systems we are currently using. This will be augmented with a number of case studies. This short video gives an overview.

Say hello to Archi

CETIS has developed a free, open source, cross-platform ArchiMate modelling tool, Archi, which is now available for download at http://archi.cetis.ac.uk/.

The tool creates models using the ArchiMate modelling language. As described on the site, the tool has been developed primarily for the “newcomer to ArchiMate and not an experienced modeller. They do not intend to become a ‘modeller’ per se, nor to be an ‘Enterprise Architect’, but to borrow and apply techniques of Architecture modelling in piecemeal (often opportunistic) IT developments in a mixed HE/FE institution. The Archi user is interested in connecting IT developments to institutional strategy . . .”

The team would really welcome feedback on the tool and have set up a forum area on the site for community contributions. So, if you have any thoughts, please post them into the forum. They will all help towards further development of the tool and user guides.

2nd Linked Data Meetup London

Co-located with dev8D, the JISC Developer Days event, this week I, along with about 150 others, gathered at UCL for the 2nd Linked Data Meetup London.

Over the past year or so the concept and use of linked data seems to have been gaining more and more traction. At CETIS we’ve been skirting around the edges of semantic technologies for some time – trying to explore realisation of the vision, particularly for the teaching and learning community, most recently with our semantic technologies working group. Lorna’s blog post from the last meeting of the group summarised some potential activity areas we could be involved in.

The day started with a short presentation from Tom Heath (Talis), who set the scene by giving an overview of the linked data view of the web. He described it as a move away from the document-centric view to a more exploratory one – the web of things. These “things” are commonly described, identified and shared. He outlined ten tasks with potential for linked data and put forward a case for how linked data could enhance each one. For example, locating: just now we can find a place, say Aberdeen, but using linked data allows us to begin to disambiguate the concept of Aberdeen for our own context(s). Similarly with sharing content: with a linked data approach we just need to be able to share and link to (persistent) identifiers, and not worry about how we can move content around. According to Tom, the document-centric metaphor of the web hides information in documents and limits our imagination in terms of what we could do with, and how we could use, that information.
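The disambiguation point is easy to see in triples. Below is a small sketch using Python’s rdflib – the two DBpedia URIs are real, but the local namespace is made up for illustration. Distinct URIs mean the two Aberdeens can never be confused, and owl:sameAs pins our own identifier to the one we mean:

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDFS

DBP = Namespace("http://dbpedia.org/resource/")
MY = Namespace("http://example.org/places/")  # hypothetical local namespace

g = Graph()
# Two different "things" that happen to share the label "Aberdeen":
g.add((DBP["Aberdeen"], RDFS.label, Literal("Aberdeen, Scotland")))
g.add((DBP["Aberdeen,_South_Dakota"], RDFS.label,
       Literal("Aberdeen, South Dakota")))
# Our local identifier is declared to mean the Scottish city:
g.add((MY["aberdeen"], OWL.sameAs, DBP["Aberdeen"]))

print(g.serialize(format="turtle"))
```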

The next presentation was from Tom Scott (BBC), who illustrated some key linked data concepts being exploited by the BBC’s Wildlife Finder website. The site allows people to make their own “wildlife journeys” by letting them explore the natural world in their own context. It also allows the BBC to, in the nicest possible way, “pimp” their own programme archives. Almost all the data on the site comes from other sources, either on the BBC or the wider web (e.g. WWF, Wikipedia). As well as using Wikipedia, their editorial team are feeding back into the Wikipedia knowledge base – a virtuous circle of information sharing. This worked well in this instance and subject area, but I have a feeling that it might not always be the case. I know I’ve had my run-ins with Wikipedia editors over content.

They have used DBpedia as a controlled vocabulary. However, as it only provides identifiers and no structure, they have built their own graph to link content and concepts together. There should be RDF available from their site now – it was going live yesterday. Their ontology is available online.

Next we had John Sheridan and Jeni Tennison from data.gov.uk. They very aptly conceptualised their presentation around a wild-west pioneer theme. They took us through how they are staking their claim, laying tracks for others to follow and outlined the civil wars they don’t want to fight. As they pointed out we’re all pioneers in this area and at early stages of development/deployment.

The data.gov.uk project wants to:
* develop social capital and improve delivery of public services
* make progress and leave a legacy for the future
* use open standards
* look at approaches to publishing data in a distributed way

Like most people (and from my perspective, the teaching and learning community in particular), they are looking for – to continue with the western theme – the “Winchester ’73” of linked data. Just now they are investigating creating (simple) design patterns for linked data publishing, to see what can be easily reproduced. I really liked their “brutally pragmatic and practical” approach, particularly in terms of developing simple patterns which can be re-tooled to allow the “rich seams” of government data to be used, e.g. tools to create linked data from Excel. Provenance and trust are recognised as being critical, and they are working with the W3C provenance group. Jeni also pointed out that data needs to be easy to query and process – we all neglect usability of data at our peril. There was quite a bit of discussion about trust, and John emphasised that the data.gov.uk initiative was about public, not personal, data.
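The “linked data from Excel” pattern is worth spelling out, because it really is that simple: one row becomes one resource, one column becomes one property. A minimal sketch in Python, reading a CSV export – the file, columns and URI schemes are all invented, as the post doesn’t describe data.gov.uk’s actual tooling:

```python
import csv
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDFS

# Hypothetical URI schemes: one row = one resource, one column = one property.
BASE = Namespace("http://example.gov.uk/id/school/")
PROP = Namespace("http://example.gov.uk/def/")

g = Graph()
with open("schools.csv", newline="") as f:  # e.g. an Excel export
    for row in csv.DictReader(f):
        subject = BASE[row["id"]]
        g.add((subject, RDFS.label, Literal(row["name"])))
        g.add((subject, PROP["localAuthority"], Literal(row["authority"])))

g.serialize("schools.ttl", format="turtle")  # publishable linked data
```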

Lin Clark then gave an overview of the RDF capabilities of the Drupal content management system. For example, it has default RDF settings and FOAF capability built in, and the latest version now has an RDF mapping user interface which can be set up to offer SPARQL endpoints. A nice example of the “out of the box” functionality which is needed for general uptake of linked data principles.
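For anyone who hasn’t queried a SPARQL endpoint, the consumer side is equally lightweight. A minimal sketch using the SPARQLWrapper library – the endpoint URL is a placeholder, as the talk didn’t name a live Drupal endpoint:

```python
from SPARQLWrapper import SPARQLWrapper, JSON  # pip install sparqlwrapper

# Hypothetical endpoint - stands in for whatever a Drupal site exposes.
sparql = SPARQLWrapper("https://example.org/sparql")
sparql.setQuery("""
    PREFIX foaf: <http://xmlns.com/foaf/0.1/>
    SELECT ?person ?name WHERE {
        ?person a foaf:Person ;
                foaf:name ?name .
    } LIMIT 10
""")
sparql.setReturnFormat(JSON)

# Print the names of the first ten people the endpoint knows about.
for result in sparql.query().convert()["results"]["bindings"]:
    print(result["name"]["value"])
```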

The morning finished with a panel session where some of the key issues raised through the morning presentations were discussed in a bit more depth. In terms of technical barriers, Ian Davies (CEO, Talis) said that there needs to be a mind shift in application development, from one centralised database to multiple apps accessing multiple data stores. But as Tom Scott pointed out, if you start with things people care about and create URIs for them, then a linked approach is much more intuitive – it is “insanely easy to convert HTML into RDF”. It was generally agreed that identifying real-world “things”, and modelling and linking the data, was the really hard bit. After that, publishing is relatively straightforward.

The afternoon consisted of a number of themed workshops, which were mainly discussions around the issues people are grappling with just now. For me the human/cultural issues are crucial, particularly provenance and trust. If linked data is to gain more traction in any kind of organisation, we need to foster a “good data in, good data out” philosophy and move away from the fear of exposing data. We also need to ensure that people understand that taking a linked data approach doesn’t automatically presume that you are going to make that data available outwith your organisation – it can help with internal information sharing/knowledge building too. Of course, what we need are more killer examples, or Winchester ’73s. Hopefully over the past couple of days at dev8D progress will have been made towards those killer apps, or at least some lethal bullets.

The meetup was a great opportunity to share experiences with people from a range of sectors about their ideas and approaches to linked data. My colleague Wilbert Kraan has also blogged about his experiments with some of our data about JISC-funded projects.

For an overview of the current situation in UK HE, it was timely that Paul Miller’s Linked Data Horizon Scan for JISC was published on Wednesday too.

Blackboard moving towards IMS standards integration

Via Downes this morning, I came across Ray Henderson’s “Blackboard’s Open Standards Commitments: Progress Made” blog post. Ray gives a summary of the work being done with IMS Common Cartridge and IMS LTI.

Having BB on board in developments to truly “free the content” (as is the promise of standards such as IMS CC) is a major plus for implementation and adoption. From a UK perspective it’s good to see that implementation is being driven by one of our own – Stephen Vickers from the University of Edinburgh, who has been developing PowerLinks for BB and Basic LTI. See the OSCELOT website for more information.
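For the curious, a Basic LTI launch is pleasingly simple under the hood: a form POST of standard parameters, signed with OAuth 1.0 so the tool can verify it came from the VLE. A minimal sketch in Python – the consumer key, secret and launch URL are placeholders, though the parameter names come from the LTI 1.0 specification:

```python
from urllib.parse import urlencode
from oauthlib.oauth1 import Client, SIGNATURE_TYPE_BODY  # pip install oauthlib
import requests

# Standard Basic LTI 1.0 launch parameters (values here are made up).
params = {
    "lti_message_type": "basic-lti-launch-request",
    "lti_version": "LTI-1p0",
    "resource_link_id": "module-101-week-3",
    "user_id": "s1234567",
    "roles": "Learner",
}

# Sign the form body with OAuth 1.0 using a shared key/secret pair.
client = Client("consumer_key", client_secret="shared_secret",
                signature_type=SIGNATURE_TYPE_BODY)
uri, headers, body = client.sign(
    "https://tool.example.com/lti/launch",  # placeholder tool URL
    http_method="POST",
    body=urlencode(params),
    headers={"Content-Type": "application/x-www-form-urlencoded"})

requests.post(uri, data=body, headers=headers)
```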

There are a number of models now emerging which show how learning environments can become more distributed. If you are interested in this area, we are holding a meeting in Birmingham on 4th March to discuss the notion of the distributed learning environment. There will be demos of a number of systems, and we’ll also be launching a briefing paper on new approaches to composing learning environments. More information, including a link to register for the event, is available here.
