We need to make more mistakes – MUVEs session @ CETIS conference

Making mistakes and sharing experiences was one of the key points made by Mark Bell at the MUVEs (multi-user virtual environments) session at the JISC CETIS conference last week.

The aim of the session was to take a closer look at some of the issues emerging in this area, “including an examination of the range of systems available, technical interoperability and the current and future challenges it poses, and whether there’s more to teaching in MUVEs than hype…” The three presenters (Daniel Livingstone, Mark Bell and Sarah Robbins) shared their experiences of working in such environments, the challenges they’ve faced and the potential for the future.

Daniel Livingstone (University of Paisley) started the session with a presentation about the SLOODLE (Second Life and Moodle) project he is currently working on (funded by Eduserv). The SLOODLE project is exploring integration of the two environments to see whether, combined, they can offer a richer learning and teaching experience than they currently do individually. So the team is exploring how, why and where you would want a 3-D representation of a Moodle course, which bits of each environment need to be used at what stage, and so on. For example, should assignments be posted in SL or in Moodle? Although Moodle is the primary focus, Daniel did explain that the project is now beginning to think more generically about the applicability of its scripts to other environments, but this more interoperable approach is at a very early stage. At the moment the key challenges for the project are: authentication between the environments and ensuring roles are propagated properly; the need to support flexibility; and what can be added to Moodle to make SLOODLE more ‘standard’ in terms of features that can be exported into SL and vice versa.
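The session didn't go into implementation detail, but the cross-environment authentication challenge can be illustrated with a minimal sketch. The flow below is entirely hypothetical (the names and API are mine, not SLOODLE's actual code): the VLE issues a short-lived one-time token, the user types it into an in-world object, and that object posts the token plus the avatar's UUID back to the VLE to link the two accounts.

```python
import secrets
import time

# Hypothetical sketch of token-based account linking between a virtual
# world and a VLE. Names and flow are illustrative only.

TOKEN_TTL = 300  # seconds a link token stays valid

pending_tokens = {}   # token -> (vle_user_id, issued_at)
linked_accounts = {}  # avatar_uuid -> vle_user_id

def issue_link_token(vle_user_id):
    """VLE side: issue a short-lived token the user types in-world."""
    token = secrets.token_hex(4)
    pending_tokens[token] = (vle_user_id, time.time())
    return token

def redeem_token(avatar_uuid, token):
    """In-world object posts the avatar's UUID and token back to the VLE.

    Returns True and records the link if the token is valid and fresh;
    tokens are single-use, so a second redemption fails.
    """
    entry = pending_tokens.pop(token, None)
    if entry is None:
        return False
    user_id, issued_at = entry
    if time.time() - issued_at > TOKEN_TTL:
        return False  # expired
    linked_accounts[avatar_uuid] = user_id
    return True
```

Role propagation would then be a separate lookup keyed on the linked VLE account, which is where most of the real complexity lives.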

Mark Bell (Indiana University) then gave a presentation on his research and experiences of developing rich, multi-user experiences within an educational context. The overriding message Mark gave us was that mistakes are being made in this area, but we need to make more of them and share our experiences so we can all learn from them and move our practice forward. Mark has been involved in a number of projects trying to create rich and complex multi-user environments, and he gave a very honest evaluation of the mistakes that had been made – like an environment not being able to support more than one avatar, which rather defeats the point of a MUVE 🙂

Mark’s research looks at testing economic theories within virtual worlds, and he used the analogy of microbiologists using petri dishes and then extrapolating their findings to describe this approach to research within virtual worlds. According to Mark, there is no such thing as the real world any more, as the boundaries between real and virtual are becoming ever more blurred. I’m not sure I can fully go along with that theory – but maybe that’s more to do with my personal virtual-world Luddite tendencies.

Mark argued that there currently isn’t a good platform available for academic researchers to develop large-scale virtual games/simulations, and that the academic development model doesn’t fit the industry way of building things (one or two part-time developers versus teams of full-time ones). What is needed, then, are more small-scale projects/experiments rather than the creation of vast new worlds, plus more work on co-creation and on working with the commercial sector.

After the break Sarah Robbins (Ball State University) gave us an extremely informative description of her experiences of using Second Life to enhance her teaching, and of how harnessing students’ use of web 2.0 technology can enhance the learning process. One of the concepts she discussed was that of the ‘prosumer’ – at once producer and consumer. With web 2.0 technologies we are all increasingly becoming prosumers, and educators need to acknowledge and utilise this. Sarah was keen to stress that everything she does is driven by pedagogy, not technology, and that she only uses technologies such as SL to teach topics/concepts that are difficult to illustrate in a classroom setting. However, being a keen gamer and user of technology, she can see ways in which technology can enhance learning, and she wants to use new technologies wherever and whenever they are possible and appropriate. One example she highlighted was radius.im, which is a mash-up of a chat client, Google Maps and user profiles. A screenshot comparing that interface with a typical VLE chat client clearly illustrated how much richer the former is. Sarah’s vision is of future learning environment(s) being some kind of mash-up of things like Twitter, Facebook and Second Life, allowing everyone to benefit from the opportunities afforded by participatory and immersive networks.

There is clearly lots of interest in MUVEs in education, but we are still at the early stages of discovering what we can and can’t do with them. It would seem we are also just beginning to have the technical conversations about interoperability between systems, and there is clearly a need for these issues to be discussed in as much depth as the pedagogical ones.

Copies of the presentations and podcasts are available from the conference website.

It only takes about half an hour . . .

said Tony Hirst as he took us on a mini journey of exploration through just a few of the mashups he has been creating with OU OpenLearn content and (generally) freely available tools at the Mashup Market session at the JISC CETIS conference yesterday. From creating the almost obligatory Google map, to mini federated searches, to scraping content for videos, audio and URLs, to daily feeds of course content, Tony showed just some of the possibilities mash-up technologies can offer educators. He also highlighted how (relatively) simple these things now are and how little time (generally half an hour) they take. He did concede that some half hours took a bit longer than others 🙂 A number of the tools Tony talked about are listed on the session’s conference webpage.

Of course, having well-structured, open content has helped enormously in allowing someone like Tony to begin to experiment. In terms of reuse, the content scraping Tony has been doing was really exciting, as it showed a simple way to get at the stuff that people (I think) would actually want to re-use – videos, URLs and so on. Also, at the moment an embedded iframe lets you display just the video without any of the surrounding advertising, though this may well change over time as advertising becomes more embedded into the content itself.
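To give a flavour of the kind of scraping involved, the sketch below pulls media links out of a page using Python's standard-library HTML parser. The markup and URLs are invented for the example; a real OpenLearn page would need its own extraction rules.

```python
from html.parser import HTMLParser

class MediaLinkScraper(HTMLParser):
    """Collect href/src attribute values that point at media files.

    A minimal sketch only: it matches on file extension, which is
    enough for illustration but not for arbitrary real-world pages.
    """
    MEDIA_EXTENSIONS = (".mp4", ".mp3", ".mov", ".wav")

    def __init__(self):
        super().__init__()
        self.media_urls = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in ("href", "src") and value \
                    and value.lower().endswith(self.MEDIA_EXTENSIONS):
                self.media_urls.append(value)

# Invented page fragment standing in for a unit of open content.
page = """
<div class="unit">
  <a href="/openlearn/unit1/lecture.mp4">Lecture video</a>
  <a href="/openlearn/unit1/notes.html">Notes</a>
  <audio src="/openlearn/unit1/podcast.mp3"></audio>
</div>
"""

scraper = MediaLinkScraper()
scraper.feed(page)
print(scraper.media_urls)
# ['/openlearn/unit1/lecture.mp4', '/openlearn/unit1/podcast.mp3']
```

Once the links are harvested like this, republishing them as a feed or a page of embeds is the easy part – which is largely Tony's point about the half hour.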

So if it’s so simple to remix, reuse and republish content now, why aren’t we all doing it? Partly, I guess, it’s down to people (teachers, learning technologists, students) actually knowing what they can do and how to do it. But there are also wider issues around getting people and institutions to create and open up well-structured data. Issues of privacy – and our conceptions of what it actually means to us, to students and so on, particularly relevant given the current government debacle over lost data – and (as ever) IPR and copyright were discussed at length.

Clearly the implications of this type of technology challenge institutions not only in terms of what IT services they support for users, but also in terms of how, and to whom, they open up their data – if at all. Paul Walk suggested that institutions and individuals need to start with the non-contentious things first, to show what can be done without risk. Brian Kelly pointed out that there could be a tension between a mash-up based approach and a more structured semantic approach. Unfortunately this session clashed with the semantic technologies session; but maybe that’s a theme for next year’s conference, or something we can explore at a SIG meeting in the coming months.

There was a really full and frank discussion around many issues, but in general there is a clear need for strategies that allow simple exposure of structured data, let people get at small pieces of that data, and provide easy tools to put it back together and republish it in accessible ways. Again the need for clear guidelines around rights issues was highlighted. Some serious thought also needs to be given to the economic implications for our community of creating and sustaining truly open content.

Thoughts on OpenLearn 2007

Last week I attended the OU OpenLearn conference in Milton Keynes. Presentations will be available from the conference website (augmented with audio recordings), as well as links to various blogs about the conference.

There were a couple of presentations I’d like to highlight. Firstly, Tony Hirst’s presentation on the use of RSS feeds and OPML bundles to distribute OpenLearn material really gave an insight into how easy it should be to create delivery mechanisms on demand from open content. I also really enjoyed Ray Corrigan’s talk, “is there such a thing as sustainable infodiversity?” Ray highlighted a number of issues around the sustainability of technology, energy consumption and disposable hardware. It’s all too easy to forget just how much of our natural resources is being consumed by all the technology that is so commonplace now. (As an aside, this was another conference where delegates were given a vast amount of paper as well as the conference proceedings on a memory stick – something we are trying to avoid at the upcoming JISC CETIS conference.) He also highlighted some of the recent applications of copyright law that cut to the core of any ‘open’ movement. This view was nicely complemented by Erik Duval’s presentation, in which he encouraged the educational community to be more assertive and aggressive about copyright and the use of materials for educational purposes – encouraging more of a ‘bring it on’ attitude. All well and good, but only if academics have the security of institutional backing to do that. On that note it’s been interesting to see this weekend that the University of Oregon is refusing to hand over to the RIAA the names of students downloading music (see Slashdot for more information on that one).
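Part of what makes the OPML-bundle approach so appealing is how little machinery it needs. Here is a minimal sketch (with invented feed URLs, purely for illustration) that wraps a list of RSS feeds into an OPML outline using Python's standard library:

```python
import xml.etree.ElementTree as ET

def build_opml(title, feeds):
    """Bundle (feed_title, feed_url) pairs into an OPML 2.0 outline string.

    A minimal sketch: one flat <body> of rss outlines, no nesting.
    """
    opml = ET.Element("opml", version="2.0")
    head = ET.SubElement(opml, "head")
    ET.SubElement(head, "title").text = title
    body = ET.SubElement(opml, "body")
    for feed_title, feed_url in feeds:
        # Each feed becomes one <outline type="rss" .../> entry.
        ET.SubElement(body, "outline",
                      type="rss", text=feed_title, xmlUrl=feed_url)
    return ET.tostring(opml, encoding="unicode")

# Hypothetical course feeds, for illustration only.
print(build_opml("Course bundle", [
    ("Unit 1", "https://example.org/unit1/rss.xml"),
    ("Unit 2", "https://example.org/unit2/rss.xml"),
]))
```

Any OPML-aware reader can then subscribe to the whole bundle at once, which is essentially the on-demand delivery mechanism Tony was describing.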