ALT-C 2007 – moving from 'e' to 'p'?

Another year, another ALT-C . . . as usual, this year’s conference was a great opportunity to catch up with colleagues and to see and hear some new things, and some not quite so new things. There has been a lot of coverage of this year’s conference, and ALT-C themselves have produced an RSS feed aggregating the blogs of people who have commented on it – nice to see another useful example of mash-up technology.

One of the overriding messages I took away from the conference was the move from talking about ‘e’-learning initiatives to discussions of the issues surrounding the process of learning – presence, persistence and play, to name a few.

It was great to see so many projects from JISC’s Design for Learning programme presenting. I couldn’t get to see all the presentations, but I did go to a couple of the more evaluation-led projects (DeSILA and eLIDA CAMEL). Both projects are focusing on the practitioner experience of designing for learning, and both highlight the strengths and weaknesses of the current tools and the need for more support mechanisms to allow ‘ordinary’ teachers to use them. However, both projects (and other findings from the programme) illustrate the impact that engaging in dialogue around designing for learning can have on practitioners, as it really does make them reflect on their practice.

The first keynote speaker, Dr Michelle Selinger (Cisco), reminded us all of the chasms that exist within education systems, between education and industry, and of course the wider social, cultural and economic chasms which exist in the world today. Technology can provide mechanisms to start to bridge these gaps, but it can’t do everything; we need to consider seriously how we take the relevant incremental steps towards achieving shared goals. Our education systems are key to providing opportunities for learners to gain the global citizenship skills which industry is now looking for, and if we really want lifelong learners then we need to ensure that the relevant systems (such as eportfolios) interoperate. Michelle also highlighted the need to move from the 3 ‘r’s to the 3 ‘p’s, which she described as persistence, power tools and play; the challenge for all involved in education is how to allow this shift to occur. The final chasm Michelle broached was assessment: the growing gap between the types of learners we ideally want (technology-literate, lifelong learners, team workers) and the assessment systems that our political leaders impose on us, which really don’t promote any of these aspirations.

This led nicely into the second keynote, from Professor Dylan Wiliam of the Institute of Education, who gave a really engaging talk around issues of ‘pedagogies of engagement and of contingency classroom aggregation technologies’. Dylan gave an insightful overview of the challenges of creating effective schools and of quality control of learning – a huge challenge when we consider how chaotic a classroom really is. He then went on to describe some innovative ways in which technology-enhanced formative assessment techniques could help teachers to engage learners and create effective learning environments – well worth a listen if you have the time.

The final keynote came from Peter Norvig, Director of Research at Google. I have to say I was slightly disappointed that Peter didn’t give us some inside information on Google developments; however, he did give an entertaining talk around ‘learning in an open world’. Taking us through a well-illustrated history of education systems, he highlighted the need for projects based on engaging real-world scenarios which are explored through group tasks. Copies of all the keynotes (including audio) are available from the conference website.

This year also marked the first ALT-C Learning Object Competition (sponsored by Intrallect). The prize winners were announced at the conference dinner and full details are available on the Intrallect website.

SUMs = the eFramework?

Over the last year or so, as the vision of the international eFramework has started to take shape, I’ve been hearing more and more about SUMs (service usage models). I went along to the SUMs workshop to see if I could find out exactly what a SUM is.

The event was run by the international eFramework, so we had the benefit of having Dan Rehak (consultant to the eFramework), Phil Nichols (one of the eFramework editors) and Lyle Winton (of DEST, who has been involved in creating SUMs) facilitating the workshop. This was particularly useful (for me anyway) as it helped to distinguish the aims of the international eFramework from those of the partners involved. The partners in the international eFramework have the common goals of interoperability and the use of service-orientated approaches, but each country has its own priorities and interpretations of the framework. The eFramework does not mandate any one approach; it should be seen as a reference point for developers where proven technical interoperability scenarios are documented using a set of standard (and hotly debated – for example, ‘reference model’ has been blacklisted) terms. (Copies of Dan and Lyle’s presentations are available from the e-Framework website.)

Although the aim of the day was to actually create some SUMs, we started with an overview from Dan Rehak on the eFramework and SUMs. Services provide the technical infrastructure to make things work – they describe interfaces between applications. A SUM is a description of a combination of services which meets a specific requirement (or business need). So in some respects a SUM is analogous to a blueprint, as it should describe the overall ‘business story’ (i.e. what it is supposed to do), together with a technical description of the process(es) involved, e.g. the services used, the bindings for service expressions and then examples of service implementations. Ideally a SUM should be developed by a community (e.g. JISC, or a subset of JISC-funded projects working in a specific domain area). That way, it is hoped, the best of top-down (in terms of describing high-level business need) and bottom-up (in terms of having real instances of deployment) approaches can be combined. I can see a role for JISC CETIS SIGs in helping to coordinate our communities in the development of SUMs.

At this point no official modelling language has been adopted for the description of SUMs; to an extent this will probably evolve naturally as communities begin to develop SUMs and submit them to the framework. Once a SUM has been developed it can be proposed to the eFramework SUM registry, where hopefully it will be picked up, reused and/or extended by the wider eFramework community.

Some key points came out of a general discussion after Dan’s presentation:
*SUMs can be general or specific – but have to be one or the other.
*SUMs can be described in terms of other SUMs (particularly in the case of established services such as OpenID and Shibboleth).
*SUMs can be made up of overlapping or existing SUMs.
*Hopefully some core SUMs will emerge which will describe widespread, common, reusable behaviours.

So what are the considerations for creating a SUM? Well, there are three key areas: the description, the functionality and the structure. The description should provide a non-technical narrative or executive summary of what the SUM does, what problem it solves and its intended function. The functionality should outline the individual functions provided within the SUM – but with no implementation details. The structure should give the technical view of the SUM as a whole, illustrating how the functions are integrated, e.g. services, data sources and the coordination of services; it can also include a diagrammatic illustration of any coordination. There are a number of SUMs available from the eFramework website, as well as more detailed information on actually developing SUMs.
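
Since no modelling language has been mandated, a SUM can be sketched in whatever structured form a community finds convenient. As a purely hypothetical illustration of the three-part anatomy (the SUM, service and binding names below are invented, not taken from the registry), here it is as a plain Python data structure:

    # A hypothetical sketch of the three parts of a SUM as a plain Python
    # data structure. The SUM, service and binding names are invented for
    # illustration and are not taken from the eFramework registry.
    submit_assignment_sum = {
        "description": (
            "Lets a learner submit an assignment to an institutional "
            "repository and notifies the tutor that it has arrived."
        ),
        # What the SUM does - deliberately free of implementation detail.
        "functionality": [
            "authenticate the learner",
            "deposit the assignment",
            "notify the tutor",
        ],
        # The technical view: services, bindings and their coordination.
        "structure": {
            "services": ["authenticate", "deposit", "notify"],
            "bindings": {"deposit": "HTTP POST", "notify": "SMTP"},
            "coordination": "authenticate -> deposit -> notify",
        },
    }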

The main part of the workshop was devoted to group work where we actually tried to develop a SUM from a provided scenario. Unsurprisingly, each group came up with very different pseudo-SUMs. As we worked through the process, the need for really clear and concise descriptions, and for clear boundaries on the number of services you really need, became glaringly obvious. Also, although this type of business process description may be of use for certain parts of our community, I’m not sure it would be of use for all. It was agreed that there is a need for best practice guides to help contextualise the development and use of SUMs for different domains/communities – however, that is a bit of a chicken-and-egg situation at the moment.

One very salient point was made by Howard Noble (University of Oxford) when he pointed out that maybe what we should be documenting are ‘anti-SUMs’, i.e. the things that we do now and the reasons why we take non-SOA approaches in certain circumstances. Hopefully, as each community within the eFramework starts to build SUMs, the potential benefits of collecting, documenting and sharing ways for people, systems and services to interoperate will outweigh other approaches. But what is needed most of all (imho) are more real SUMs, so that developers can really start to see the usefulness of the eFramework SUMs approach.

JISC web2.0 online conference – presentations & discussions available online

All this week Tom Franklin and Mark van Harmelen are hosting an online conference on web2.0 and its potential impact on the education sector. Although places have been limited for the synchronous presentations, copies of the presentations are available on a Moodle site, and anyone can participate in the discussion forums there (you obviously have to register first to get access to the forums). So far the issues discussed have covered institutional issues, content creation and sharing, and pedagogy. Overall the live sessions are working well, with just the occasional gremlin. You can log in and join the discussion @ http://moodle.cs.man.ac.uk/web2/course/view.php?id=3.

BBC Jam suspended

A couple of weeks ago I wrote about BBC Jam after they presented at the Intrallect Future Visions Conference. Today the BBC Trust announced that they are suspending the service from 20th March, following complaints from commercial companies received by the European Commission. It’s always been a controversial project, with many commercial vendors complaining about the amount of funding being put into it and the impact it may have on their business.

I know that this doesn’t really have direct relevance to us in the HE sector; however, I do think there are similarities between the BBC and British universities. Neither was set up as a commercial company, but increasingly both are having to adapt their structures to become more and more commercially viable, and both are affected by changes in technology – particularly the web.

Call me old-fashioned, but I do believe in the Reithian values of the BBC – to educate, inform and entertain – and I’ve never minded paying my licence fee, as I do believe the BBC gives incredible value for money. There have been some arguments that the BBC’s online development is pushing out new start-ups, which I’m not sure I totally agree with, as I think that mostly the BBC’s web presence has helped to set standards for web design and usability. I just hope that, now considerable work has been done in producing resources (not all of it done in-house either), the suspension is temporary and people will be able to access the content again soon.

Impact of Open Source Software on Education series launch

Earlier this week (12 March) Penn State announced the launch of a new series of biweekly postings on their Terra Incognita blog about the impact of open source software on education. Although the series is based around open source software, other related topics, including open educational resources and open courseware, will be discussed too, and all contributions/discussions will be made freely available:

“…our intent to not only provide a rich resource on the theme of this series, but to also contribute to the larger movement of free content by making the resources that we create widely and freely available. In an effort to do so, a few days after each posting, the articles, discussion, and a brief summary will be reformatted and made available on WikiEducator as Open Educational Resources. It is our hope that these resources will take a life of their own as they are reused, modified, and returned to the community.”

The first article is from Ruth Sabeen (UCLA) about their evaluation process, which resulted in them choosing Moodle. More information about the series, including the schedule, is available @

http://www.wikieducator.org/Open_Source_Software_in_Education_Series_on_Terra_Incognita

A couple of future contributions which caught my eye include Wayne Mackintosh on Bridging the educational divide with free content and free software (7 April) and James Dalziel on pedagogy, technology and open source – experiences from LAMS (16 May).

Maybe this kind of approach would be useful for JISC/DEST to help with the development of the eFramework initiative.

JISC Workflows meeting (13/02/07)

The purpose of this meeting was to see if and where there are commonalities between workflows, and in particular whether there are any common points between domain-specific workflows.

The agenda was very full, with six presentations from six very diverse projects (ISIS/ASSIS; RepoMMan; human collaborative workflow; ePHPX (eScience); COVARM; Kuali).

Steve Jeyes and Warwick Bailey described their experiences of using IMS Simple Sequencing, QTI and BPEL. They were surprised at how easy it was to use BPEL, due partly to the Active Ends visual editor. Warwick did point out that more work needs to be done to clarify just what is valuable about using BPEL. He proposed that it might have something to do with its ease of use, the ability to have long-running calls and the use of XPath, but he would like to see more work done in this area. He also stressed the importance of XSDs, and how the skill of creating elegant, extensible XSDs is really undervalued. At Icodeon they have found the .NET toolkit easier to use than Java, but he did point out that this may just be a personal preference.
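
To give a flavour of the XPath point, here is a minimal sketch (my own, not from the presentation) of interrogating a BPEL process definition with XPath-style queries from Python’s standard library. The namespace shown is the BPEL4WS 1.1 one, and process.bpel is an invented file name:

    # A minimal sketch of running XPath-style queries over a BPEL process
    # definition. Assumes a BPEL4WS 1.1 document; process.bpel is invented.
    import xml.etree.ElementTree as ET

    NS = {"bpel": "http://schemas.xmlsoap.org/ws/2003/03/business-process/"}

    root = ET.parse("process.bpel").getroot()

    # List every partner service the process talks to.
    for link in root.findall(".//bpel:partnerLink", NS):
        print("partnerLink:", link.get("name"))

    # List every invoke activity and the operation it calls.
    for invoke in root.findall(".//bpel:invoke", NS):
        print("invoke:", invoke.get("partnerLink"), "->", invoke.get("operation"))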

Steve Jeyes highlighted the problems his team had with using Simple Sequencing (or ‘not so simple sequencing’, as it maybe should be called) and the need for more work to be done on integrating standards and workflows.

Richard Green from the RepoMMan project then outlined some of the workflow issues they have been grappling with, both in the project and within the wider institutional context. The University of Hull’s vision of a repository encompasses the storage, access, management and preservation of a wide range of file types, from concept to completion. A user survey highlighted that their system users (primarily researchers) wanted a safe place which could be accessed anywhere, anytime, and which had support for versioning. So they have been creating a toolset to manage workflows for users, and they have found UML useful for creating basic workflows. They are also trying to add as much automation to workflows as possible, for example by pre-populating metadata fields using JHOVE (which, by the way, he seemed very excited about, as it really does seem to pre-populate a lot of fields) and by trying to get as much as possible from other services.
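
As an aside, the kind of pre-population Richard described is easy to imagine as a small script. A rough sketch (assuming the jhove command-line tool is installed and on the PATH; the element names follow JHOVE’s XML output schema, but treat the details as an assumption):

    # A rough sketch of pre-populating technical metadata fields by calling
    # the JHOVE command-line tool and parsing its XML output. Assumes jhove
    # is on the PATH; real ingest code would need far more error handling.
    import subprocess
    import xml.etree.ElementTree as ET

    NS = {"j": "http://hul.harvard.edu/ois/xml/ns/jhove"}

    def technical_metadata(path):
        """Run JHOVE over a file and pull out a few fields worth keeping."""
        output = subprocess.run(
            ["jhove", "-h", "xml", path],   # ask JHOVE for XML output
            capture_output=True, text=True, check=True,
        ).stdout
        rep = ET.fromstring(output).find(".//j:repInfo", NS)
        fields = {}
        for name in ("format", "version", "size", "mimeType", "status"):
            el = rep.find("j:" + name, NS)
            if el is not None:
                fields[name] = el.text
        return fields

    print(technical_metadata("thesis.pdf"))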

Scott Wilson then looked at issues surrounding human collaborative workflows (the non-BPEL stuff :-)). Scott outlined the work he has been doing at Macquarie University in relation to collaborative research practice and the development of RAMS (research activity management system) from LAMS. They have been looking at learning design as a potential workflow method, as there hasn’t been a lot of work done around communication and collaborative methods as workflows. One common characteristic of the research process is that the process can change at various stages during the lifecycle, and very few systems support the levels of flexibility at runtime that this requires (this is also true of learning design systems). Scott also pointed out the risks of trying to develop these types of systems when compared to the actual benefits, and how easy it could be to develop systems for experts rather than practitioners (again, very similar to learning design). One of the key issues for this work is the fact that, in collaborative settings, seemingly simple workflows can actually exhibit complex behaviour, which again reinforces the need for adaptable systems. In Scott’s opinion, collaborative processes don’t lend themselves to today’s business process modelling methods, but they are hoping that the RAMS system will be a step in the right direction.
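
To illustrate what that runtime flexibility means: the workflow has to behave less like a compiled program and more like an editable to-do list, where the remaining steps can still be changed after earlier steps have run. A toy sketch of the idea (my own illustration, not how RAMS is built):

    # A toy illustration (not how RAMS is built) of a workflow whose
    # remaining steps can be added to or re-ordered while it is running -
    # the kind of runtime flexibility collaborative processes demand.
    class FlexibleWorkflow:
        def __init__(self, steps):
            self.pending = list(steps)   # steps still to run - mutable
            self.done = []

        def run_next(self):
            step = self.pending.pop(0)
            print("running:", step)
            self.done.append(step)

        def insert_step(self, index, step):
            """Participants can change the plan mid-flight."""
            self.pending.insert(index, step)

    wf = FlexibleWorkflow(["draft proposal", "peer review", "submit"])
    wf.run_next()                         # draft proposal
    wf.insert_step(0, "ethics approval")  # the plan changes at runtime
    while wf.pending:
        wf.run_next()                     # ethics approval, peer review, submit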

Rob Allan from the ePHPX project then gave an eScience take on workflows. Naturally this sector is very concerned with provenance, the use of metadata, and authorisation to re-use data. One problem eScience has is that each domain within it tends to invent its own domain tools, and he would like to see more work done on creating web services that could be shared. He emphasised the need to make workflows easy for users, and the need for guidelines and tutorials on the creation and use of web services. There are some example tutorials available @ http://www.grids.ac.uk/WOSE/tutorials.

Rob also highlighted the need for well-defined data models and/or semantic tools to support data interoperability between the applications linked to workflows.

Next up was Balbir Barn, talking about the approach taken by the COVARM project. During the project they used a UML model-based solution: they used scenarios to identify services and workflows, and these matched quite closely to BPEL process definitions.

The experience of the project has reinforced the team’s belief in the model-driven approach. Balbir believes that there is a need for a better methodology to support eFramework activities, and their approach could very well fill this gap. The domain information model they produced has a structural model which can be used to identify services, and the synthesis of processes can provide a mapping to BPEL. However, there are some technical issues with this approach: although UML models are mature, there are some issues within the SOA context, and there is a need to be able to test framework requirements and to see the service view and the overall business process view in parallel.

The final presentation of the day was from Barry Walsh and Brian McGough from the Kuali project. The Kuali Enterprise Workflow started with financial information but has now broadened out to integrate student systems, research systems and HR – areas Barry described as ‘human-mediated activities’. They had to develop robust and scalable functionality whilst remaining user-centric: the ability to allow people to do things ‘in their way’ was fundamental. They have developed a generic workflow engine which supports any kind of typical process within the areas they are working in.
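
The essence of a generic engine like this is that the routing rules live in data rather than in code, so the same engine can drive a finance, HR or student process. A much-simplified sketch of that idea (my own, not Kuali’s actual design):

    # A much-simplified sketch (not Kuali's actual design) of a generic
    # workflow engine: routing rules are plain data, one list of approval
    # steps per document type, so new processes need no engine changes.
    def route(document, rules):
        """Send a document through its type's approval steps in order."""
        for step in rules[document["type"]]:
            approved = step["check"](document)
            print(step["name"], "->", "approved" if approved else "rejected")
            if not approved:
                return False
        return True

    rules = {
        "purchase": [
            {"name": "budget check", "check": lambda d: d["amount"] < 5000},
            {"name": "head of department", "check": lambda d: True},
        ],
    }

    route({"type": "purchase", "amount": 1200}, rules)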

Unfortunately I had to leave before the discussion session, but some of the key messages I got from the day were:
*There isn’t a lot of convergence around workflows – people still want to do it their own way.
*More work needs to be done defining the differences between automated workflows and human workflows.
*Hand-off points need to be clear, and we need to be able to identify appropriate tools/services for these points.

A representative from the Mellon Foundation attended the meeting, and as far as I can gather JISC and Mellon are going to continue a dialogue around funding for workflow projects.
