Design Bash 11 pre-event ponderings and questions

In preparation for this year’s Design Bash, I’ve been thinking about some of the “big” questions around learning design and what we actually want to achieve on the day.

When we first ran a design bash four years ago, as part of the JISC Design for Learning Programme, we outlined three areas of activity/interoperability that we wanted to explore:
* System interoperability – looking at how the import and export of designs between systems can be facilitated;
* Sharing of designs – ascertaining the most effective way to export and share designs between systems;
* Describing designs – discovering the most useful representations of designs or patterns and whether they can be translated into runnable versions.

And to be fair, I think these are still valid and summarise the main areas where we still need more exploration and sharing – particularly the translation into runnable versions.

Over the past three years there has been lots of progress in the wider context of learning design in course and curriculum design (i.e. through the JISC Curriculum Design and Delivery programmes), and also in how best to support practitioners to engage with, develop and reflect on their practice. The evolution of the pedagogic planning tools from the Design for Learning programme into the current LDSE project is a key exemplar. We’ve also seen progress each year as a direct result of discussions at previous Design Bashes, e.g. the embedding of LAMS sequences into Cloudworks (see my summary post from last year’s event for more details).

The work of the Curriculum Design projects in looking at the bigger picture of formal curriculum design and approval processes is making progress in bridging the gaps between formal course descriptions and their representations/manifestations in areas such as course handbooks and marketing information, and what actually happens at the point of delivery to students. There is a growing set of tools emerging to help provide a number of representations of the curriculum. We also have a more thorough understanding of the wider business processes involved in curriculum approval, as exemplified by this diagram from the PiP team at the University of Strathclyde.

PiP Business Process workflow model

Given the multiple contexts we’re dealing with, how can we make the most of the day? Well, I’d like to try and move away from the complexity of the PiP diagram and concentrate a bit more on the “runtime” issue, i.e. transforming and importing representations/designs into systems which can then be used by students. It still takes a lot to beat the integration of design and runtime in LAMS, imho. So, I’d like to see some exploration of potential workflows around the systems represented and how far inputs and outputs from each can actually go.

Based on some of the systems I know will be represented at the event, the diagram below makes a start at trying to illustrate some workflows we could potentially explore. N.B. This is a very simplified diagram and is meant as a starting point for discussion – it is not a complete picture.

Design Bash Workflows

So, for example, starting from some initial face-to-face activities such as the workshops being so successfully developed by the Viewpoints project, the Accreditation! game from the SRC project at MMU, or the various OULDI activities, what would be the next step? Could you then transform the mostly paper-based information into a set of learning outcomes using the Co-genT tool? Could the file produced there then be imported into a learning design tool such as LAMS, LDSE or Compendium LD? And/or could the file be imported into the MUSKET tool and transformed into XCRI CAP – which could then be used for marketing purposes? Could the finished design then be imported into a course database and/or a runtime environment such as a VLE or LAMS?

Or alternatively, working from the starting point of a course database – e.g. SRC, where they have developed a set template for all courses – would using the learning-outcome-generating properties of the Co-genT tool enable staff to populate that database with “better” learning outcomes which are meaningful to the institution, teacher and student? (See this post for more information on the Co-genT toolkit.)

Or, another option: what is the scope for integrating some of these tools/workflows with other “hybrid” runtime environments such as PebblePad?

These are just a few suggestions, and hopefully we will be able to start exploring some of them in more detail on the day. In the meantime, if you have any thoughts/suggestions, I’d love to hear them.

IMS LTI and LIS in action webinar, 7 July

As part of our on-going support for the current JISC DVLE programme, we’re running a webinar on Thursday 7 July at 2pm.

http://emea92334157.adobeconnect.com/r9lacqlg5ub/

The session will feature demonstrations of a number of “real world” system integrations using the IMS LTI, Basic LTI and LIS specifications. These will be provided by Stephen Vickers from the University of Edinburgh and the CeLTIc project; Steve Coppin from the University of Essex and the EILE project; and Phil Nichols from Psydev.

The webinar will run for approximately 1.5 hours and is free to attend. More information, including a link to registration, is available from the CETIS website.

Understanding, creating and using learning outcomes

How do you write learning outcomes? Do you really ensure that they are meaningful to you, to your students, to your academic board? Do you sometimes cut and paste from other courses? Are they just something that has to be done, a bit opaque, but they do the job?

I suspect for most people involved in the development and teaching of courses, it’s a combination of all of the above. So, how can you ensure your learning outcomes are really engaging with all your key stakeholders?

Creating meaningful discussions around developing learning outcomes with employers was the starting point for the CogenT project (funded through the JISC Lifelong Learning and Workforce Development Programme). Last week I attended a workshop where the project demonstrated the online toolkit they have developed. Initially designed to help foster meaningful and creative dialogue during co-curricular course developments with employers, as the tool has developed and others have started to use it, a range of uses and possibilities have emerged.

As well as fostering creative dialogue and common understanding, the team wanted to develop a way to evidence discussions for QA purposes which showed explicit mappings between the expert employer language and academic/pedagogic language and the eventual learning outcomes used in formal course documentation.

Early versions of the toolkit started with the inclusion of a number of relevant (and available) frameworks and vocabularies for level descriptors, from which the team extracted and contextualised key verbs into a list view.

List view of the CogenT toolkit

(Ongoing development hopes to include the import of competencies frameworks and the use of XCRI CAP.)

Early feedback found that the list view was a bit off-putting so the developers created a cloud view.

Cloud view of the CogenT toolkit

and a Bloom’s view (based on Bloom’s Taxonomy).

Bloom’s view of the CogenT toolkit

By choosing verbs, the user is directed to a set of recognised learning outcomes and can start to build and customise these for their own specific purpose.

CogenT learning outcomes

As the tool uses standard frameworks, early user feedback started to highlight its potential for other uses, such as: APEL; HEAR reporting; working with adult returners to education to help identify experience and skills; writing new learning outcomes; and an almost natural progression to creating learning designs. Another really interesting use of the toolkit has been with learners. A case study at the University of Bedfordshire has shown that students have found the toolkit very useful in helping them understand the differences and expectations of learning outcomes at different levels; for example, to paraphrase student feedback after using the tool, “I didn’t realise that evaluation at level 4 was different from evaluation at level 3”.

Unsurprisingly it was the learning design aspect that piqued my interest, and as the workshop progressed and we saw more examples of the toolkit in use, I could see it becoming another part of the curriculum design tools and workflow jigsaw.

A number of the Design projects, e.g. PALET and SRC, now have revised curriculum documents which clearly define the type of information that needs to be entered. The design workshops the Viewpoints project is running are proving very successful in getting people started on the course (re)design process (and, like Co-genT, they use key verbs as discussion prompts).

So, for example, I can see potential for course design teams, after taking part in a Viewpoints workshop, to use the Co-genT tool to progress those outputs into specific learning outcomes (validated by the frameworks in the toolkit and/or ones they want to add) and then complete institutional documentation. I could also see the toolkit being used in conjunction with a pedagogic planning tool such as Phoebe or the LDSE.

The Design projects could also play a useful role in helping to populate the toolkit with any competency or other recognised frameworks they are using. There could also be potential for using the toolkit as part of the development of XCRI to include more teaching and learning related information, helping to identify common education fields by surfacing commonly used and recognised level descriptors and competencies, and potentially developing identifiers for them.

Although JISC funding is now at an end, the team are continuing to refine and develop the tool and are looking for feedback. You can find out more from the project website. Paul Bailey has also written an excellent summary of the workshop.

Technologies update from the Curriculum Design Programme

We recently completed another round of PROD calls with the current JISC Curriculum Design projects. So, what developments are we seeing this time around?

Wordle of techs & standards used in Curriculum Design Prog, April 11

Well, in terms of baseline technologies, integrations and approaches, the majority of projects haven’t made any major deviations from what they originally planned. The range of technologies in use has grown slightly, mainly due to the addition of software being used for video capture (see my previous post on the use of video for capturing evidence and reflection).

The bubblegram below gives a view of the number of projects using a particular standard and/or technology.

XCRI is our front runner, with all 12 projects looking at it to a greater or lesser extent. But we are still some way off all 12 actually implementing the specification. From our discussions with the projects, there isn’t really a specific reason for not implementing XCRI; it’s more that it isn’t a priority for some of them at the moment, whilst for others (SRC, Predict, Co-educate) it is firmly embedded in their processes. Some projects would like the spec to be more extensive than it currently stands, which we have known for a while, and the XCRI team are making inroads into further development, particularly with its inclusion in the European MLO (Metadata for Learning Opportunities) developments. As with many education-specific standards/specifications, unless there is a very big carrot (or stick), widespread adoption and uptake is sporadic, however logical the argument for using the spec/standard. On the plus side, most are confident that they could implement the spec, and we know from the XCRI mini-projects that there are no major technical difficulties in implementation.

Modelling course approval processes has been central to the programme, and unsurprisingly there has been much interest in and use of formal modelling languages such as BPMN and ArchiMate. Indeed nearly all the projects commented on how useful having models, however complex, has been in engaging stakeholders at all levels within institutions. The “myth busting” power of models – i.e. this shows what actually happens, and it’s not necessarily how you believe things happen – was one anecdote that made me smile and I’m sure resonates in many institutions/projects. There is also growing use of the Archi tool for modelling, and growing sharing of experience between a number of projects and the EA (Enterprise Architecture) group. As Gill has written, there are a number of parallels between EA and Curriculum Design.

Unsurprisingly for projects of this length (4 years), and perhaps heightened by “the current climate”, a number of the projects have been (or still are) in the process of fairly major institutional senior staff changes. This has had some impact on purchasing decisions regarding potential institution-wide systems, which are generally outside the control of the projects. There is also the issue of losing academic champions for projects. This is generally manifesting itself in projects working on other areas, and lots of juggling by project managers. In this respect the programme clusters have also been effective, with representatives from projects presenting to senior management teams in other institutions. The more agile development processes some teams have been using have also helped teams to be more flexible in their approaches to development work.

One very practical development which is starting to emerge from work on rationalising course databases is the automatic creation of course instances in VLEs. A common issue in many institutions is that there is no version control for courses within VLEs, and it’s very common for staff to just create a new instance of a course every year and not delete older instances, which, apart from anything else, can add up to quite a bit of server space. Projects such as SRC are now at the stage where their new (and approved) course templates are populating the course database, which then triggers the automatic creation of a course in the VLE. Predict and UG-Flex have similar systems. The UG-Flex team have also done some additional integration with their admissions systems so that students can only register for courses which are actually running during their enrolment dates.
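
To make the shape of that integration a little more concrete, here is a minimal sketch of a course-database-to-VLE trigger. This is not the SRC, Predict or UG-Flex implementation – the table, field and API endpoint names are all hypothetical – but it shows the pattern: newly approved courses are picked up from the course database, pushed to the VLE’s web-service API, and flagged so duplicate instances are never created.

```python
# Hypothetical sketch of a course-database-to-VLE integration. All table,
# field and endpoint names are illustrative only.
import sqlite3
import requests

VLE_API = "https://vle.example.ac.uk/api/courses"   # hypothetical endpoint
API_TOKEN = "secret-token"                           # hypothetical credential

def create_vle_courses(db_path="courses.db"):
    conn = sqlite3.connect(db_path)
    cur = conn.execute(
        "SELECT course_code, title, start_date FROM approved_courses "
        "WHERE vle_instance_created = 0"
    )
    for code, title, start_date in cur.fetchall():
        # Create a course instance in the VLE via its (hypothetical) REST API
        resp = requests.post(
            VLE_API,
            headers={"Authorization": f"Bearer {API_TOKEN}"},
            json={"code": code, "title": title, "start": start_date},
        )
        if resp.ok:
            # Flag the course so we never create a duplicate instance
            conn.execute(
                "UPDATE approved_courses SET vle_instance_created = 1 "
                "WHERE course_code = ?", (code,)
            )
    conn.commit()
    conn.close()

if __name__ == "__main__":
    create_vle_courses()
```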

SharePoint is continuing to show a presence. Again, there are a number of different approaches to using it. For example, in the T-Spark project, their major workflow developments will be facilitated through SharePoint. They now have a part-time SharePoint developer in place who is working with the team and central IT support. You can find out more at their development blog. SharePoint also plays a significant role in the PiP project; however, the team are also looking at integrations with “bigger” systems such as Oracle, and are developing a number of UIs and forms which integrate with SharePoint (and potentially Oracle). As most institutions in the UK have some flavour of SharePoint deployed, there is significant interest in approaches to utilising it most effectively. There are some justifiable concerns relating to its use for document and data management, the latter being seen as not one of its strengths.

As ever, it is difficult to give a concise and comprehensive view of such a complex set of projects, which are all taking slightly different approaches to their use of technology and the methods they use for system integration. However, many projects have said that the umbrella of course design has allowed them to discuss and develop the use of institutional administration and teaching and learning systems far more effectively than they have been able to previously. A growing number of resources from the projects is available from the Design Studio, and you can view all the information we have gathered from the projects in our PROD database.

What's in a name?

Names, they’re funny things aren’t they? Particularly project ones. I’ve never really been great at coming up with project names, or clever acronyms. However, remembering what acronyms stand for is almost a prerequisite for anyone who works for CETIS and has anything to do with JISC :-). The issue of meaningful names/acronyms came up yesterday at the session the CCLiP project ran at the Festival of The Assemblies meeting in Oxford.

Working with 11 partners from the education and cultural sectors, the CCLiP project has been developing a CPD portal using XCRI as a common data standard. The experience of working with such a cross-sector group of organisations has led members of the team to be involved in a benefits realisation project, the BR XCRI Knowledge Base. This project is investigating ways to – for want of a better word – sell the benefits of using XCRI. However, one of the major challenges is actually explaining what XCRI is to key (more often than not, non-technical) staff. Of course the obvious answer, to some, is that it stands for eXchanging Course Related Information, and that pretty much sums it up. But it’s not exactly something that naturally rolls off the tongue and encapsulates its potential uses, is it? So, in terms of wider benefits realisation, how do you explain the potential of XCRI and encourage wide adoption?

Of course, this is far from a unique problem, particularly in the standards world. Standards tend not to have the most exciting of names, and of course a lot of actual end users never need to know what they’re called either. However, at this stage in the XCRI life-cycle, there is a need to explain it to both the technically and non-technically minded. And of course that is happening, with case studies etc. being developed.

During a lively and good-natured discussion, participants in the session discussed the possibility of changing the name from XCRI to “opportunity knocks” as a way to encapsulate the potential benefits demonstrated to us by the CCLiP team, and to create a bit of curiosity and interest. I’m not sure if that would get a very positive clappometer response from certain circles, but I’d be interested in any thoughts you may have.

2nd Linked Data Meetup London

Co-located with dev8D, the JISC Developer Days event, this week, I, along with about 150 others, gathered at UCL for the 2nd Linked Data Meetup London.

Over the past year or so the concept and use of linked data seems to have been gaining more and more traction. At CETIS we’ve been skirting around the edges of semantic technologies for some time – trying to explore the realisation of the vision, particularly for the teaching and learning community – most recently with our semantic technologies working group. Lorna’s blog post from the last meeting of the group summarised some potential activity areas we could be involved in.

The day started with a short presentation from Tom Heath, Talis, who set the scene by giving an overview of the linked data view of the web. He described it as a move away from the document-centric view to a more exploratory one – the web of things. These “things” are commonly described, identified and shared. He outlined 10 tasks with potential for linked data and put forward a case for how linked data could enhance each one, e.g. locating – just now we can find a place, say Aberdeen; however, using linked data allows us to begin to disambiguate the concept of Aberdeen for our own context(s). Also sharing content: with a linked data approach, we just need to be able to share and link to (persistent) identifiers and not worry about how we move content around. According to Tom, the document-centric metaphor of the web hides information in documents and limits our imagination in terms of what we could do with, and how we could use, that information.

The next presentation was from Tom Scott of the BBC, who illustrated some key linked data concepts being exploited by the BBC’s Wildlife Finder website. The site allows people to make their own “wildlife journeys” by letting them explore the natural world in their own context. It also allows the BBC to, in the nicest possible way, “pimp” their own programme archives. Almost all the data on the site comes from other sources, either within the BBC or on the wider web (e.g. WWF, Wikipedia). As well as using Wikipedia, their editorial team are feeding back into the Wikipedia knowledge base – a virtuous circle of information sharing. This worked well in this instance and subject area, but I have a feeling that it might not always be the case. I know I’ve had my run-ins with Wikipedia editors over content.

They have used DBpedia as a controlled vocabulary. However, as it only provides identifiers and no structure, they have built their own graph to link content and concepts together. There should be RDF available from their site now – it was going live yesterday. Their ontology is available online.
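
As a minimal sketch of what that kind of linking looks like in practice (the URIs and properties below are illustrative, not the BBC’s actual identifiers or ontology), a single “thing” can be described locally and tied to its DBpedia counterpart in a few lines of rdflib:

```python
# Minimal sketch of linking a local concept to DBpedia with rdflib.
# The URIs below are invented for the example.
from rdflib import Graph, URIRef, Literal, Namespace, RDF, RDFS

OWL = Namespace("http://www.w3.org/2002/07/owl#")

g = Graph()
otter = URIRef("http://example.org/wildlife/species/eurasian-otter")

g.add((otter, RDF.type, URIRef("http://example.org/wildlife/ontology/Species")))
g.add((otter, RDFS.label, Literal("Eurasian otter", lang="en")))
# Assert that the local identifier denotes the same real-world thing as the
# DBpedia resource, so clients can follow their nose to more data.
g.add((otter, OWL.sameAs, URIRef("http://dbpedia.org/resource/Eurasian_otter")))

print(g.serialize(format="turtle"))
```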

Next we had John Sheridan and Jeni Tennison from data.gov.uk. They very aptly conceptualised their presentation around a wild-west pioneer theme, taking us through how they are staking their claim, laying tracks for others to follow, and outlining the civil wars they don’t want to fight. As they pointed out, we’re all pioneers in this area and at the early stages of development/deployment.

The data.gov.uk project wants to:
* develop social capital and improve delivery of public services
* make progress and leave a legacy for the future
* use open standards
* look at approaches to publishing data in a distributed way

Like most people (and, from my perspective, the teaching and learning community in particular) they are looking for, to continue the western theme, the “Winchester ’73” of linked data. Just now they are investigating creating (simple) design patterns for linked data publishing to see what can be easily reproduced. I really liked their “brutally pragmatic and practical” approach, particularly in terms of developing simple patterns which can be re-tooled to allow the “rich seams” of government data to be used, e.g. tools to create linked data from Excel. Provenance and trust are recognised as being critical, and they are working with the W3C provenance group. Jeni also pointed out that data needs to be easy to query and process – we all neglect the usability of data at our peril. There was quite a bit of discussion about trust, and John emphasised that the data.gov.uk initiative was about public, not personal, data.
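
As a rough, hypothetical illustration of that “re-toolable pattern” idea – turning tabular data into linked data – the sketch below converts rows of a CSV export into RDF triples with rdflib. The file, column names, URIs and vocabulary are all invented for the example, not part of the data.gov.uk tooling:

```python
# Sketch of the "spreadsheet to linked data" pattern: each row of a CSV
# export becomes a small set of RDF triples. Names and URIs are illustrative.
import csv
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.gov.uk/id/school/")
SCHEMA = Namespace("http://example.gov.uk/def/school#")

g = Graph()
with open("schools.csv", newline="") as f:
    for row in csv.DictReader(f):          # expects columns: id, name, pupils
        school = EX[row["id"]]
        g.add((school, RDF.type, SCHEMA.School))
        g.add((school, SCHEMA.name, Literal(row["name"])))
        g.add((school, SCHEMA.pupilCount, Literal(int(row["pupils"]))))

# Write the result out as Turtle for publishing or further linking
g.serialize("schools.ttl", format="turtle")
```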

Lin Clark then gave an overview of the RDF capabilities of the Drupal content management system. For example, it has default RDF settings and FOAF capability built in. The latest version now has an RDF mapping user interface which can be set up to offer SPARQL endpoints – a nice example of the “out of the box” functionality which is needed for general uptake of linked data principles.
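
For anyone wanting to experiment, querying a SPARQL endpoint of the kind Drupal can expose takes only a few lines. The sketch below uses the SPARQLWrapper library against DBpedia’s public endpoint (the query is just an example); a site’s own endpoint would be queried in the same way:

```python
# Sketch of querying a SPARQL endpoint with SPARQLWrapper. DBpedia's public
# endpoint is used here purely as a convenient, openly available example.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery("""
    SELECT ?label WHERE {
        <http://dbpedia.org/resource/Eurasian_otter> rdfs:label ?label .
        FILTER (lang(?label) = "en")
    }
""")
sparql.setReturnFormat(JSON)

results = sparql.query().convert()
for binding in results["results"]["bindings"]:
    print(binding["label"]["value"])
```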

The morning finished with a panel session where some of the key issues raised through the morning presentations were discussed in a bit more depth. In terms of technical barriers, Ian Davies (CEO, Talis) said that there needs to be a mind shift in application development, from one centralised database to multiple apps accessing multiple data stores. But as Tom Scott pointed out, if you start with things people care about and create URIs for them, then a linked approach is much more intuitive; it is “insanely easy to convert HTML into RDF”. It was generally agreed that identifying real-world “things”, and modelling and linking the data, is the really hard bit. After that, publishing is relatively straightforward.

The afternoon consisted of a number of themed workshops which were mainly discussions around the issues people are grappling with just now. I think for me the human/cultural issues are crucial, particularly provenance and trust. If linked data is to gain more traction in any kind of organisation, we need to foster a “good data in, good data out” philosophy and move away from the fear of exposing data. We also need to ensure that people understand that taking a linked data approach doesn’t automatically presume that you are going to make that data available outwith your organisation – it can help with internal information sharing/knowledge building too. Of course what we need are more killer examples, or Winchester ’73s. Hopefully over the past couple of days at dev8D progress will have been made towards those killer apps, or at least some lethal bullets.

The meet up was a great opportunity to share experiences with people from a range of sectors about their ideas and approaches to linked data. My colleague Wilbert Kraan has also blogged about his experiments with some of our data about JISC funded projects.

For an overview of the current situation in UK HE, it was timely that Paul Miller’s Linked Data Horizon Scan for JISC was published on Wednesday too.

Some thoughts on the IMS Quarterly meeting

I’ve spent most of this week at the IMS Quarterly meeting in Birmingham and thought I’d share a few initial reflections. In contrast to most quarterly meetings, this was an open event, which had its benefits but some drawbacks (imho) too.

On the up side, it was great to see so many people at an IMS meeting. I hadn’t attended a quarterly meeting for over a year, so it was good to see old faces, and heartening to see so many new ones too. There did seem to be a real sense of momentum – particularly with regard to the Common Cartridge specification. The real drive for this seems to be coming from the K-12 CC working group, who are making demands to extend the profile of the spec from its very limited initial version. They are pushing for major extensions to the QTI profile (it is limited to six question types at the moment) to be included, and are also looking to Learning Design as a way to provide curriculum mapping and lesson planning for cartridges.

The schools sector on the whole does seem to be more pragmatic and more focused than our rather more (dare I say self-indulgent), mainly research-focused HE community. There also seems to be concurrent rapid development (in the context of spec development timescales) in the Tools Interoperability spec, with Dr Chuck and his team’s developments in “simple TI” (you can watch the video here).

On the down side, the advertised plugfest was in reality more of a “presentationfest”, which, although interesting in parts, wasn’t really what I had expected. I was hoping to see more live demos and interoperability testing.

Thursday was billed as a “Summit on Interoperability: Now and Next”. Maybe it was just because I was presentation-weary by that point, but I think we missed a bit of an opportunity to have more discussion – particularly in the first half of the day.

It’s nigh on impossible to explain the complexity of the Learning Design specification in half-hour slots – as Dai Griffiths pointed out in his elevator pitch, “Learning Design is a complex artefact”. Although Dai and Paul Sharples from the ReCourse team did a valiant job, as did Fabrizio Giorgini from Giunti Labs with his Prolix LD demo, I can’t help thinking that what the community, and in turn perhaps IMS, should be concentrating on is developing a new, robust set of use cases for the specification. Having some really tangible designs rooted in real practice would (imho) make the demoing of tools much more accessible, as would starting demos from the actual “runnable” view of the design instead of the (complex) editor view. Hopefully some of the resources from the JISC D4L programme can provide some starting points for that.

The strapline for Common Cartridge is “freeing the content”, and in the afternoon the demos from David Davies (University of Warwick) on the use of repositories and RSS in teaching, followed by Scott Wilson and Sarah Currier demoing some applications of the SWORD specification for publishing resources to intraLibrary through the Feedforward tool, illustrated exactly that. David gave a similar presentation at a SIG meeting last year, and I continue to be impressed by the work he and his colleagues are doing with RSS. SWORD also continues to impress with every implementation I see.

I hope that IMS are able to build on the new contacts and offers of contributions and collaborations that arose over the week, and that they organise some more open meetings in the future. Of course, the real highlight of the week was learning to uʍop ǝpısdn ǝʇıɹʍ 🙂

Opening up the IMS

Via Stephen Downes’ OLDaily I came across this post by Michael Feldstein about his recent experiences in IMS and the contradiction of IMS being a subscription organisation producing so-called open standards. This issue has been highlighted over the last two years or so with the changes in access to public versions of specs.

Michael puts forward three proposals to help IMS become more open:

    “Eliminate altogether the distinction between the members-only CM/DN draft and the one available to the general public. IMS members who want an early-adopter advantage should join the working groups.”

    “Create a clear policy that individual working groups are free to release public general updates and solicit public input on specific issues prior to release of the public draft as they see fit.”

    “Begin a conversation with the IMS membership about the possibility of opening up the working group discussion areas and document libraries to the general public on a read-only basis.”

Getting sustained involvement in any kind of specification process is very difficult. I know I wouldn’t have much to do with IMS unless I was paid to do it 🙂 Thankfully, here in the UK, JISC has recognised that having an organisation like CETIS can have an impact on standards development and uptake. But the world is changing, particularly around the means of, and access to, educational content. Who needs standards-compliant content when you can just rip and mix off the web, as the edupunkers have been showing us over the last few weeks? I don’t think they are at all “bovvered” about needing, for example, to convert their videos to Common Cartridges when they can just stick them on YouTube.

Here at CETIS we have been working closely with IMS to allow JISC projects access to specifications, but the suggestions Michael makes would certainly help broaden the reach of the organisation and hopefully support the development of useful, relevant (international) standards.

IMS Announces Pilot Project Exploring Creative Commons Licensing of Interoperability Specifications

IMS GLC have just announced plans to initiate a pilot project in the distribution of interoperability specifications under a form of Creative Commons licence. According to the press release, “IMS GLC has conceptualized a novel approach that may be applicable to many standards organizations.”

I’m not exactly sure just what these novel approaches are, and even less so about how they would actually work. But no doubt we will be hearing more in the coming months. Any move towards more openness in the standards agenda can only be a move in the right direction.

Assessment, Packaging – where, why and what is going on?

Steve Lay (CARET, University of Cambridge) hosted the joint Assessment and EC SIG meeting at the University of Cambridge last week. The day provided an opportunity to get an update on what is happening in the specification world, particularly in the content packaging and assessment areas, and to compare that with some real-world implementations, including a key interest – IMS Common Cartridge.

Packaging and QTI are intrinsically linked – to share and move questions/items they need to be packaged, preferably in an interoperable format :-) However, despite recent developments in both the IMS QTI and CP specifications, due to changes in the structure of IMS working groups there have been no public releases of either specification for well over a year. This is mainly due to the requirement for at least two working implementations of a specification before public release. In terms of interoperability, general uptake and usability, this does seem like a perfectly sensible change. But, as ever, life is never quite that simple.

IMS Common Cartridge has come along and turned into something of a flag-bearer for IMS. This has meant that an awful lot of effort from some of the ‘big’ (or perhaps ‘active’ would be more accurate) members of IMS has been concentrated on the development of CC rather than on pushing implementation of CP 1.2 or the latest version of QTI. A decision was taken early in the development of CC to use older, more widely implemented versions of specifications rather than the latest versions. (It should be noted that this looks likely to change as more demands are being made of CC which the newer versions of the specs can meet.)

So, the day was also an opportunity to reflect on what the current state of play is with IMS and other specification bodies, and to discuss with the community what areas they feel are most important for CETIS to be engaging in. Profiling did surface as something that the JISC elearning development community – particularly in the assessment domain – should be developing further.

In terms of specification updates, our host Steve Lay presented a brief history of QTI and future development plans, Adam Cooper (CETIS) gave a round-up from the IMS Quarterly meeting held the week before, and Wilbert Kraan (CETIS) gave a round-up of packaging developments, including non-IMS initiatives such as OAI-ORE and IEEE RAMLET. On the implementation side of things, Ross MacKenzie and Sarah Wood (OU) took us through their experiences of developing common cartridges for the OpenLearn project, and Niall Barr (NB Software) gave an overview of integrating QTI and Common Cartridge. There was also a very stimulating presentation from Linn van der Zanden (SQA) on a pilot project using wikis and blogs as assessment tools.

Presentations/slidecasts (including as much discussion as was audible) and MP3s are available from the wiki, so if you want to get up to speed on what is happening in the wonderful world of specifications, have a listen. There is also an excellent review of the day over on Rowin’s blog.
