CETIS09: the video – some thoughts on the process

Regular visitors to the CETIS website may have noticed that we now have a video from the CETIS09 conference on the front page. As the content consists of “talking heads” from delegates, we hope it gives a flavour of why people came to the conference, and what their expectations and overall impressions of the event were.

Although we have traditionally gathered feedback via questionnaires, and continue to do so, we have been toying with the idea of using video to capture more anecdotal feedback for some time now. The old adage of a picture being worth a thousand words rings particularly true for an organization such as CETIS. It can take quite a while to explain what we are, what we do and, most importantly, what impact we have on others – hell, it can take about five minutes just to say Centre for Educational Technology and Interoperability Standards 🙂 So, using video has the potential to let others explain the benefits of what we do, e.g. why people take two days out of busy schedules to come to our annual conference.

However, as with anything, getting to the final video has been a bit of a journey, one which started, as these things often do, with a serendipitous meeting. Mark Magnate from 55 Degrees had a meeting with my colleague Rowin Young about assessment-related activities, which I joined, and during the course of the meeting he talked about the Voxur video capture system they had been developing. One thing led to another, and we decided that this “lite” video system might just provide a way for us to start getting some video feedback from our community – and the obvious place to start was our conference.

There are a number of video capture booths/systems on the market now, but the things I particularly liked about the Voxur system were:
* Size – it’s small: basically a MacBook in a bright yellow flight case with a bit of additional built-in lighting. So it doesn’t need much space – just a table and somewhere relatively quiet.
* Level of user control – a take is only saved when the person speaking is happy with it and chooses to move on. As it is basically an adapted laptop, it looks pretty familiar to most people too.
* Editing – the user control above means that you don’t have all the “outtakes” to sift through, and the system automatically tags and relates answers. There are still, of course, editing decisions to be made, but the initial sifting time is cut down dramatically.
* Q&A style – with this system you have the option of having a real person record and ask the questions, so people aren’t just reading a question on screen and then responding. Hearing and seeing someone ask you a question is a bit more personal and engaging.

In terms of actually preparing to use the system at the conference, the most time-consuming, head-scratching part was coming up with a set of questions that people would be able to answer. Making the switch from written to spoken answers did take some time. We also had to bear in mind that this wasn’t like an interview, where you could interject and ask additional or follow-up questions: once someone is sitting in front of the laptop they just work their way through the set of questions. In the end we did leave in a couple of quite challenging questions – but the responses we got were fantastic, and we didn’t have to bribe anyone.

During the event, as it was our first time using the system, we had Mark “manning” the box. This is something we will continue to do, really just to explain to people how the system works and to reassure them that all they really have to do is hit the space bar. We had to do a wee bit of persuading to get people to come in, but overall most of the people we asked were happy to take part. A mixture of natural curiosity and a distinct lack of technophobia among our delegates probably helped.

It was quite a learning curve, though not too extreme, and hopefully this is something we can build into future events as a way to augment our other feedback channels.

The headless VLE (and other approaches to composing learning environments)

CETIS conferences are always a great opportunity to get new perspectives and views around technology. This year it was Ross MacKenzie’s somewhat pithy, but actually pretty accurate, “so what you’re really talking about is a headless VLE” during the Composing Your Learning Environment sessions that has resonated with me the most.

During the sessions we explored five models for creating a distributed learning environment:
1 – system in the cloud, many outlets
2 – plug-in to VLEs
3 – many widgets from the web into one widget container
4 – many providers and many clients
5 – both a provider and a client
Unusually for a CETIS conference, the models were based on technologies and implementations that are available now. (A PDF containing diagrams for each of the systems is available for download here)

Warwick Bailey (Icodeon) started the presentations by giving a range of demos of the Icodeon Common Cartridge platform, showing us examples of the plug-in to existing VLEs model. Using content stored as IMS Common Cartridges, and utilising IMS LTI and web services, Warwick illustrated a number of options for deploying content. By creating a unique URL for each resource in a cartridge, it is possible to embed specific sections of content in a range of platforms. So although the content may be stored in a VLE, users can choose where they want to display it – a blog, wiki, web page, Facebook, ebooks etc. Hence the headless VLE quote. Examples can be seen on the Icodeon blog. Although Warwick showed an example of an assessment resource (created using IMS QTI, of course), they are still working on a way to feed user responses back to the main system. However, he clearly showed how you can extend a learning environment through the use of plug-ins, and how identifying individual content resources allows for maximum flexibility in terms of deployment.
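
To make the per-resource addressing idea a little more concrete, here is a minimal sketch of how unique URLs and embed snippets for cartridge resources might be constructed. The URL scheme, host and identifiers below are hypothetical placeholders, not Icodeon’s actual ones:

```python
# Illustrative sketch of per-resource addressing for cartridge content.
# The base URL and identifiers below are hypothetical placeholders.

BASE = "https://player.example.org/cartridges"

def resource_url(cartridge_id: str, resource_id: str) -> str:
    """Build a stable, unique URL for one resource inside a cartridge."""
    return f"{BASE}/{cartridge_id}/resources/{resource_id}"

def embed_snippet(cartridge_id: str, resource_id: str,
                  width: int = 640, height: int = 480) -> str:
    """Return an HTML iframe that could be pasted into a blog, wiki or web page."""
    url = resource_url(cartridge_id, resource_id)
    return (f'<iframe src="{url}" width="{width}" height="{height}" '
            f'frameborder="0"></iframe>')

if __name__ == "__main__":
    # Embed a single section of a cartridge rather than the whole course.
    print(embed_snippet("history-101", "week-3-reading"))
```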

Chuck Severance then gave us an overview of IMS Basic LTI and his vision for it (model 2), describing Basic LTI as his “escape route” from existing LMSs. LTI allows an LMS to launch an external tool and securely provide user identity, course information and role information to that tool. It uses an HTTP POST through the browser, secured by OAuth. This tied in nicely with Warwick’s earlier demo of exactly that. Chuck explained his vision of how LTI could provide the plumbing to allow new tools to be integrated into existing environments. As well as the Icodeon player, progress is being made with a number of systems including Moodle, Sakai and Desire2Learn. Also highlighted were the Blackboard building block and PowerLink from Stephen Vickers (Edinburgh University).
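
To make the plumbing a little more concrete, here is a minimal sketch (Python, standard library only, based on the public Basic LTI 1.0 and OAuth 1.0 specs) of what a launch looks like on the wire: the LMS signs the form parameters with OAuth HMAC-SHA1, and the browser auto-submits them as a form POST to the tool. The tool URL, key and secret here are placeholders:

```python
# Minimal sketch of a Basic LTI launch: an OAuth 1.0-signed form POST.
# The tool URL, consumer key and secret below are placeholders.
import base64
import hashlib
import hmac
import time
import uuid
from urllib.parse import quote, urlencode

def _enc(s: str) -> str:
    # RFC 3986 percent-encoding, as required by OAuth 1.0
    return quote(str(s), safe="-._~")

def sign_lti_launch(url: str, params: dict, key: str, secret: str) -> dict:
    """Add OAuth 1.0 HMAC-SHA1 signature parameters to an LTI launch."""
    oauth = {
        "oauth_consumer_key": key,
        "oauth_nonce": uuid.uuid4().hex,
        "oauth_signature_method": "HMAC-SHA1",
        "oauth_timestamp": str(int(time.time())),
        "oauth_version": "1.0",
    }
    all_params = {**params, **oauth}
    # Signature base string: method, URL, then the sorted encoded parameters
    pairs = sorted((_enc(k), _enc(v)) for k, v in all_params.items())
    param_str = "&".join(f"{k}={v}" for k, v in pairs)
    base = "&".join(["POST", _enc(url), _enc(param_str)])
    signing_key = f"{_enc(secret)}&"  # no token secret in Basic LTI
    digest = hmac.new(signing_key.encode(), base.encode(), hashlib.sha1).digest()
    all_params["oauth_signature"] = base64.b64encode(digest).decode()
    return all_params

# The standard Basic LTI launch parameters the LMS posts to the tool
launch = sign_lti_launch(
    "https://tool.example.org/launch",  # placeholder tool URL
    {
        "lti_message_type": "basic-lti-launch-request",
        "lti_version": "LTI-1p0",
        "resource_link_id": "module-42",
        "user_id": "student-007",
        "roles": "Learner",
        "context_id": "course-101",
    },
    key="consumer-key", secret="consumer-secret",  # placeholders
)
print(urlencode(launch))  # body of the browser's auto-submitted form POST
```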

Chuck hopes that by providing vendors with an easy-to-implement spec we will get to the stage where there are many more tools available for teachers and learners, allowing them to be really innovative when creating their teaching and learning experiences.

Tony Toole then presented an alternative approach to building a learning (and/or teaching) environment using readily available, and generally free or low-cost, web 2.0 tools (model 3). Tony has been exploring tools such as Wetpaint, Ning and PBworks for creating aggregation sites with embed functionality. For example, Tony showed us an art history course page he has been building with Oxford University, which pulls in resources such as videos from museums, photos from flickr streams etc. Tony has also been investigating the use of conferencing tools such as FlashMeeting. One of the strengths of this approach is that it takes a relatively short time to pull together resources (maybe a couple of hours). Of course, a key drawback is that these tools aren’t integrated with existing institutional systems, and more work on authorization integration is needed. However, the ability to quickly show teachers and learners the potential of alternative ways to aggregate content in a single space is clearly evident and, imho, very appealing.

Our last presentation of day one came from Stuart Sim, who showed us the plugjam system he has been developing (another version of model 1). Using a combination of open educational standards, such as IMS LTI and Common Cartridge, and open APIs, plugjam allows faculties to provide access to information on a variety of platforms. The key driver for developing the platform is to help ‘free’ data trapped in various places within an institution and make it available at the most useful point of delivery for staff and students.

So, after an overnight break involving uncooked roast potatoes (you probably had to be at the conference dinner to appreciate that :-) we started the second half of our session with a presentation from Scott Wilson (CETIS and University of Bolton) on the development of the Wookie widget server and its integration into the Bolton Moodle installation (another version of model 1). More information about Wookie and its Apache Incubator status is available here. In contrast to a number of the approaches demoed in the previous session, Scott emphasised that they had chosen not to go down the LTI road, as it wasn’t a generic enough specification. By choosing the W3C widget approach, they were able to build a service which provides much greater flexibility to build widgets that can be deployed in multiple platforms and utilise other developments such as the Bondi security framework.
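
For anyone unfamiliar with the W3C widget approach: a widget is essentially a zipped-up web application with a standard config.xml manifest describing it, which is what lets a server like Wookie deploy the same widget into multiple platforms. A minimal manifest looks something like the following (the id, name and start file are illustrative examples, not from Wookie itself):

```xml
<!-- Minimal W3C widget manifest (config.xml), packaged inside a .wgt zip.
     The id, name and start file here are illustrative examples. -->
<widget xmlns="http://www.w3.org/ns/widgets"
        id="http://example.org/widgets/chat"
        version="1.0"
        width="320" height="240">
  <name>Chat</name>
  <description>A simple collaborative widget for embedding in a VLE.</description>
  <content src="index.html"/>
</widget>
```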

Pat Parslow (University of Reading) then followed with a demo of Google Wave (model 4), showing some of the experimental work he has been doing incorporating various bots and using Wave as a collaborative writing tool. Pat also shared some thoughts on how it could potentially be used to submit assignments through the use of private waves. However, although there is potential, he did emphasise that we need much more practice before we can effectively judge the affordances of using it in an educational setting. Although the freedom it gives is attractive in one sense, in an educational setting that freedom could be its undoing.

We then split into groups to discuss the merits of each of the models and do a ‘lite’ SWOT analysis of each. And the result? Well, as ever, no one model came out on top. Each had various strengths and weaknesses, and a model 6, taking the best bits of each, was proposed by one group. Interestingly, tho’ probably unsurprisingly, authentication was the most commonly cited risk. This gave rise to an interesting discussion in my group about whether we worry too much about authentication, and where and why we actually need it – but that’s a whole other blog post.

Another weakness was the lack of any way to sequence content for learners in spaces like blogs and wikis. Mind you, as a lot of content is fairly linear anyway, that might not be too much of a problem for some :-) The view of students was also raised. Although we “in the know” in the learning technology community are chomping at the bit to destroy the last vestiges of the VLE as we know it, we have to remember that lots of students actually like them, don’t have iPhones, don’t use RSS, don’t want to have their Facebook space invaded by lecturers, and value the fact that they can go to one place and find all the stuff related to their course.

We didn’t quite get round to model 5, but the new versions of Sakai and Blackboard seem to be heading in that direction. However, maybe for the rest of us the next step will be to try being headless for a while.

Presentations and models from the session are available here.
