Design Bash 11 pre-event ponderings and questions

In preparation for this year’s Design Bash, I’ve been thinking about some of the “big” questions around learning design and what we actually want to achieve on the day.

When we first ran a design bash four years ago, as part of the JISC Design for Learning Programme, we outlined three areas of activity/interoperability that we wanted to explore:
* System interoperability – looking at how the import and export of designs between systems can be facilitated;
* Sharing of designs – ascertaining the most effective way to export and share designs between systems;
* Describing designs – discovering the most useful representations of designs or patterns and whether they can be translated into runnable versions.

And to be fair, I think these are still valid and summarise the main areas where we still need more exploration and sharing – particularly the translation into runnable versions.

Over the past three years, there has been lots of progress in terms of the wider context of learning design in course and curriculum design contexts (i.e. through the JISC Curriculum Design and Delivery programmes) and also in terms of how best to support practitioners in engaging with, developing and reflecting on their practice. The evolution of the pedagogic planning tools from the Design for Learning programme into the current LDSE project is a key exemplar. We’ve also seen progress each year as a direct result of discussions at previous Design Bashes, e.g. the embedding of LAMS sequences into Cloudworks (see my summary post from last year’s event for more details).

The work of the Curriculum Design projects, in looking at the bigger picture of formal curriculum design and approval processes, is making progress in bridging the gaps between formal course descriptions and representations/manifestations in such areas as course handbooks and marketing information, and what actually happens at the point of delivery to students. There is a growing set of tools emerging to help provide a number of representations of the curriculum. We also have a more thorough understanding of the wider business processes involved in curriculum approval, as exemplified by this diagram from the PiP team, University of Strathclyde.

PiP Business Process workflow model

Given the multiple contexts we’re dealing with, how can we make the most of the day? Well, I’d like to try and move away from the complexity of the PiP diagram and concentrate a bit more on the “runtime” issue, i.e. transforming and importing representations/designs into systems which can then be used by students. It still takes a lot to beat the integration of design and runtime in LAMS, imho. So, I’d like to see some exploration of potential workflows between the systems represented, and how far inputs and outputs from each can actually go.

Based on some of the systems I know will be represented at the event, the diagram below makes a start at illustrating some workflows we could potentially explore. N.B. This is a very simplified diagram and is meant as a starting point for discussion – it is not a complete picture.

Design Bash Workflows

So, for example, starting from some initial face-to-face activities such as the workshops being so successfully developed by the Viewpoints project, the Accreditation! game from the SRC project at MMU, or the various OULDI activities, what would be the next step? Could you then transform the mostly paper-based information into a set of learning outcomes using the Co-genT tool? Could the file produced there then be imported into a learning design tool such as LAMS, LDSE or CompendiumLD? And/or could the file be imported into the MUSKET tool and transformed into XCRI CAP – which could then be used for marketing purposes? Can the finished design then be imported into a course database and/or a runtime environment such as a VLE or LAMS?
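To make the import/export question a little more concrete, here is a purely illustrative sketch (in Python, producing plain JSON) of the kind of structured record that would have to pass between a workshop-capture step, an outcomes tool and a course database or runtime for a workflow like this to hold together. None of the field names come from Co-genT, MUSKET, LAMS or any other tool’s real export format – they are hypothetical.

```python
# Hypothetical interchange record for the workflow sketched above. The field
# names and structure are invented for illustration, not any tool's real
# export format (Co-genT, MUSKET, LAMS etc. each define their own).
import json

design_record = {
    "source": "Viewpoints workshop, paper outputs",   # face-to-face starting point
    "learning_outcomes": [
        {"verb": "evaluate", "object": "sources of digital information", "level": 4},
        {"verb": "apply", "object": "referencing conventions", "level": 4},
    ],
    "intended_exports": ["learning design tool", "course database", "XCRI CAP feed"],
}

# Serialised, this is the sort of artefact each system in the chain would
# need to be able to import and/or export.
print(json.dumps(design_record, indent=2))
```

The interesting question for the day is which of the systems represented could realistically consume or emit something equivalent, and where the chain currently breaks.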

Or alternatively, working from the starting point of a course database – e.g. SRC, where they have developed a set template for all courses – would using the learning-outcomes-generating properties of the Co-genT tool enable staff to populate that database with “better” learning outcomes which are meaningful to the institution, teacher and student? (See this post for more information on the Co-genT toolkit.)

Or, another option: what is the scope for integrating some of these tools/workflows with other “hybrid” runtime environments such as PebblePad?

These are just a few suggestions, and hopefully we will be able to start exploring some of them in more detail on the day. In the meantime if you have any thoughts/suggestions, I’d love to hear them.

Understanding, creating and using learning outcomes

How do you write learning outcomes? Do you really ensure that they are meaningful to you, to your students, to your academic board? Do you sometimes cut and paste from other courses? Are they just something that has to be done, a bit opaque, but they do the job?

I suspect for most people involved in the development and teaching of courses, it’s a combination of all of the above. So, how can you ensure your learning outcomes are really engaging with all your key stakeholders?

Creating meaningful discussions around developing learning outcomes with employers was the starting point for the CogenT project (funded through the JISC Lifelong Learning and Workforce Development Programme). Last week I attended a workshop where the project demonstrated the online toolkit they have developed. Although it was initially designed to help foster meaningful and creative dialogue during co-circular course developments with employers, as the tool has developed and others have started to use it, a range of other uses and possibilities have emerged.

As well as fostering creative dialogue and common understanding, the team wanted to develop a way to evidence discussions for QA purposes which showed explicit mappings between the expert employer language and academic/pedagogic language and the eventual learning outcomes used in formal course documentation.

Early versions of the toolkit started with the inclusion of a number of relevant (and available) frameworks and vocabularies for level descriptors, from which the team extracted and contextualised key verbs into a list view.

List view of the CogenT toolkit

(Ongoing development hopes to include the import of competency frameworks and the use of XCRI CAP.)

Early feedback found that the list view was a bit off-putting so the developers created a cloud view.

Cloud view of the CogenT toolkit

and a Bloom’s view (based on Bloom’s Taxonomy).

Bloom’s view of the CogenT toolkit

By choosing verbs, the user is directed to a set of recognised learning outcomes and can start to build and customise these for their own specific purpose.

CogenT learning outcomes

As the tool uses standard frameworks, early user feedback started to highlight its potential for other uses, such as APEL, HEAR reporting, working with adult returners to education to help identify experience and skills, writing new learning outcomes, and an almost natural progression to creating learning designs. Another really interesting use of the toolkit has been with learners. A case study at the University of Bedfordshire has shown that students found the toolkit very useful in helping them understand the differences in, and expectations of, learning outcomes at different levels. For example, to paraphrase student feedback after using the tool: “I didn’t realise that evaluation at level 4 was different from evaluation at level 3”.

Unsurprisingly, it was the learning design aspect that piqued my interest, and as the workshop progressed and we saw more examples of the toolkit in use, I could see it becoming another part of the curriculum design tools and workflow jigsaw.

A number of the Design projects, e.g. PALET and SRC, now have revised curriculum documents which clearly define the type of information that needs to be entered. The design workshops the Viewpoints project is running are proving very successful in getting people started on the course (re)design process (and, like Co-genT, use key verbs as discussion prompts).

So, for example, I can see potential for course design teams, after taking part in a Viewpoints workshop, to use the Co-genT tool to progress those outputs into specific learning outcomes (validated by the frameworks in the toolkit and/or ones they want to add) and then complete institutional documentation. I could also see the toolkit being used in conjunction with a pedagogic planning tool such as Phoebe or the LDSE.

The Design projects could also play a useful role in helping to populate the toolkit with any competency or other recognised frameworks they are using. There could also be potential for using the toolkit as part of the development of XCRI to include more teaching and learning related information, by helping to identify common education fields through surfacing commonly used and recognised level descriptors and competencies and the potential development of identifiers for them.

Although JISC funding is now at an end, the team are continuing to refine and develop the tool and are looking for feedback. You can find out more from the project website. Paul Bailey has also written an excellent summary of the workshop.

JISC Assembly: Realising Co-generaTive Benefits Twitter story

I recently attended a workshop for the CogenT project. I will be writing more about this, but in the meantime I’ve pulled together a narrative from Twitter which gives a good overview of the day.

[View the story “JISC Assembly: Realising Co-generaTive Benefits ” on Storify]

Transforming curriculum delivery through technology: New JISC guide and radio show launched

A new JISC guide, “Transforming curriculum delivery through technology: Stories of challenge, benefit and change”, has been launched today.

“A mini-guide to the outcomes of the JISC Transforming Curriculum Delivery Through Technology programme, it summarises the headline benefits of technology in curriculum delivery made evident by the work of the 15 projects in the programme. The outcomes of these projects provide a rich insight into the ways in which institutions and individual curriculum areas can make use of technology to respond more robustly to the demands of a changing world.”

You can access PDF and text-only versions of the guide, or order a print copy, by following this link.

The latest instalment of the JISC on Air series, “Efficiencies, enhancements and transformation: how technology can deliver”, includes interviews with two projects involved in the programme (Making the New Diploma a Success and eBioLabs), discussing the impact achieved in two very different contexts and disciplines.

If the mini-guide whets your appetite for more information about the programme, the Programme Synthesis report provides more in-depth analysis of the lessons learned, and further information and access to project outputs is available from the Design Studio.

What technologies have been used to transform curriculum delivery?

The Transforming Curriculum Delivery through Technology (aka Curriculum Delivery) programme is now finished. Over the past two years, the 15 funded projects have all been on quite a journey and between them have explored an array of technologies (over 60), from Excel to Skype to Moodle to Google Wave.

The bubblegram and treegraph below give a couple of different visual overviews of the range of technologies used.

As has been reported before, there’s not been anything particularly revolutionary or cutting edge about the technologies being used. The programme did not mandate any particular standards or technical approaches. Rather, the projects have concentrated on staff and student engagement with technology, which of course is the key to having real impact in teaching and learning. The technologies themselves can’t do it alone.

The sheer number of technologies being used does, I think, show an increasing confidence and flexibility not only from staff and students but also in developing institutional systems. People are no longer looking for the magic out-of-the-box solution and are more willing to develop their own integrations based on their real needs. The ubiquity of the VLE does come through loud and clear.

There are still some key lessons coming through.

* Simple is best – don’t try and get staff (and students) to use too many new things at once.
* Have support in place for users – if you are trying something new, make sure you have the appropriate levels of support in place for users.
* Tell people what you are doing – talk about your project wherever you can and share your objectives as widely as possible. Show people the benefits of what you are doing. Encourage others to share too.
* Talk to institutional IT support teams about what you are planning – before trying to use a new piece of software, make sure it works within your institutional network. IT teams can provide invaluable information and advice about what will/won’t work. They can also provide insights into scalability issues for future developments. A number of the projects found that although web 2.0 technologies can be implemented relatively quickly, there are issues when trying to increase the scale of trial projects.

A full record of the technologies in use for the projects is available from our PROD project database. More information on the projects and a selection of very useful shareable outputs (including case studies and resources) is available from the Design Studio.

Thoughts so far on LAK11

Along with around 400 others worldwide, I’ve signed up for LAK11 (Learning and Knowledge Analytics), the MOOC run by George Siemens and colleagues at the Technology Enhanced Knowledge Research Institute (TEKRI) at Athabasca University. We’re now into week 2, and I think I’m just about getting into the swing of things.

When George was in the UK late last year, I managed to catch his presentation at Glasgow Caledonian, and I was intrigued by the concept of learning analytics, in particular how we can start to use data in meaningful ways for teaching and learning. I wanted to know more about what learning analytics are, and so signed up for the course. I’ve also been intrigued by the concept of MOOCs, so this seemed like the ideal opportunity to try one out for myself.

In her overview paper, Tanya Elias provides a useful description: “Learning analytics is an emerging field in which sophisticated analytic tools are used to improve learning and education. It draws from, and is closely tied to, a series of other fields of study including business intelligence, web analytics, academic analytics, educational data mining, and action analytics.” (Elias, T. (2011) Learning Analytics: Definitions, Processes, Potential)

The course outcomes are:
* Define learning and knowledge analytics.
* Map the developments of technologies and practices that influence learning and knowledge analytics, as well as developments and trends peripheral to the field.
* Evaluate prominent analytics methods and tools and determine appropriate contexts where the methods would be most effective.
* Describe how “big data” and data-driven decision making differ from traditional decision making and the potential future implications of this transition.
* Design a learning analytics implementation plan at a course level.
* Evaluate the potential impact of the semantic web and linked data on learning resources and curriculum.
* Detail various elements organizational leaders need to consider to roll out an integrated knowledge and learning analytics model in an organizational setting.
* Describe and evaluate developing trends in learning and knowledge analytics and develop models for their potential impact on teaching, learning, and organizational knowledge.

You can check out the full course syllabus here.

The fact that the course is open and non-accredited really appealed to me as, to be honest, I am a bit lazy and not sure I wanted to commit to a formal course. The mix of online resources, use of tags, aggregation etc. fits right in with my working practices. I blog, I tweet, I’m always picking up bits of useful (and useless) information from my streams – so having a bit of focus for some activity sounded perfect. I’m a self-motivated kind of a person, aren’t I?

But it’s never that simple, is it? Old habits die hard – particularly that nagging feeling of guilt about signing up for a course and not reading all the suggested texts, reading all the forum messages, doing all the suggested activities. Is it just me that suffers from the tension between trying to be an engaged, self-motivated learner and everyday distractions and procrastination? I’ve had some very circular discussions with myself about why I’m not actually looking at the course material at times.

However, George and the team have been particularly good at reassuring people and emphasising that we need to “let go of traditional boundaries”. With a cohort this large it’s pretty near impossible to keep up with everything, so they encourage people to do only what they can and concentrate on what really interests them. They actively encourage “skim and dive” techniques – skim all the resources and dive into what catches your eye/interest. If you’ve been thinking about doing one of the MOOCs then I would recommend having a listen to the introductory Elluminate session (another great thing about open courses is that all the resources are available to everyone, anytime).

I’ve found the Elluminate sessions the most interesting so far. Not because the other resources provided aren’t as engaging – far from it. I think it’s more to do with the synchronous element and actually feeling part of a community. All the speakers so far have been very engaging, as has the chat from participants.

Last week, as an introduction to learning analytics, John Fritz (UMBC) gave an overview of the work he’s leading in trying to help students improve performance by giving them access to data about their online activity. They have built a Blackboard building block called Check My Activity (CMA); you can read more about it here. John and colleagues are also now actively trying to use data from their LMS to help teachers design more effective online activities.
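The underlying idea is simple enough to sketch. The snippet below is not the actual CMA building block, nor Blackboard’s data model – it is a minimal, hypothetical Python/pandas illustration of the core comparison CMA makes: a student’s own activity count set against the course average.

```python
# Minimal sketch of the "Check My Activity" idea: compare one student's
# LMS activity against the course average. Column names and data are
# hypothetical, not Blackboard's actual data model.
import pandas as pd

# Hypothetical export of LMS event logs: one row per logged action
events = pd.DataFrame({
    "student_id": ["s1", "s1", "s2", "s2", "s2", "s3"],
    "course_id":  ["c101"] * 6,
    "action":     ["view_content", "post_forum", "view_content",
                   "view_content", "post_forum", "view_content"],
})

# Total actions per student, then the course-wide average
per_student = events.groupby("student_id").size()
course_average = per_student.mean()

def activity_feedback(student_id: str) -> str:
    """Return a simple, student-facing comparison with the course average."""
    own = per_student.get(student_id, 0)
    ratio = own / course_average if course_average else 0
    return (f"You logged {own} actions; the course average is "
            f"{course_average:.1f} (you are at {ratio:.0%} of the average).")

print(activity_feedback("s1"))
```

The real system obviously does much more (and links activity to grades), but the basic “your data back to you” loop is as simple as this.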

This week’s topic is “The Rise of Big Data”, and on Tuesday Ryan Baker from Worcester Polytechnic Institute was in the Elluminate hot seat, giving us an introduction to Educational Data Mining (EDM). EDM draws heavily on data mining methodologies, but in the context of educational data. Ryan explained it as a distillation of data for human judgement – in other words, making complex data understandable and useful for non information scientists. EDM and learning analytics are both growing research areas, and there are a number of parallels between them. We did have quite a bit of discussion about what exactly the differences were, which boiled down to the fact that both are concerned with the same deep issues, but learning analytics is perhaps broader in scope, uses more qualitative approaches to data, and is not as dedicated to data mining methodology as EDM. Ryan gave an overview of the work he has been doing around behaviour modelling from data generated by some of Carnegie Mellon’s Cognitive Tutor programmes, and how they are using the data to redesign activities to reduce, for example, students going “off task”. Again, you can access the talk from the course Moodle site.
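To make the “distillation for human judgement” idea a bit more concrete, here is a minimal sketch of the general approach behind detectors like the ones Ryan described – training a classifier on hand-labelled log features to flag likely “off task” episodes. This is not his actual detectors or data; the features, labels and values are invented purely for illustration (using scikit-learn).

```python
# Minimal sketch of an EDM-style behaviour detector: train a classifier on
# hand-labelled tutor-log features to flag likely "off task" episodes.
# Features, labels and values here are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [seconds since last action, actions in last minute, errors in last minute]
X = np.array([
    [3.0, 12, 1],
    [45.0, 1, 0],
    [5.0, 9, 2],
    [120.0, 0, 0],
    [8.0, 7, 1],
    [90.0, 1, 0],
])
# 1 = episode was hand-coded as "off task", 0 = on task
y = np.array([0, 1, 0, 1, 0, 1])

detector = LogisticRegression().fit(X, y)

# Probability that a new episode (60s pause, 1 action, 0 errors) is off task
print(detector.predict_proba([[60.0, 1, 0]])[0, 1])
```

The craft, of course, is in the hand-coding of the training labels and the choice of features, not in the model fitting itself.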

Next week I’m hoping to be doing a bit more diving, as the topic is the Semantic Web, Linked Data and Intelligent Curriculum. Despite the promise, there really isn’t that much evidence of linked data approaches being used in teaching and learning contexts, as we found with the JISC-funded SemTech report and more recently when Lorna Campbell and I produced our briefing paper on The Semantic Web, Linked and Open Data. I think there are many opportunities for using linked data approaches; the Dynamic Learning Maps project at the University of Newcastle is probably the best example I can think of. However, linking data within many institutions is a key problem. The data is invariably not in a standard form, and even when it is, there’s a fair bit of housekeeping to be done. So finding linkable data to use is still a key challenge, and I’m looking forward to finding out what others are doing in this area.
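For anyone who hasn’t met linked data in this context, the sketch below shows the basic shape of the approach – course information expressed as RDF triples and queried with SPARQL, here using the Python rdflib library. The vocabulary and URIs are invented for illustration; a real project would reuse established vocabularies and institutional identifiers, and, as noted above, the hard part is getting institutional data into any such linkable form in the first place.

```python
# Minimal sketch of exposing course data as linked data and querying it.
# The ex: vocabulary and URIs are invented for illustration; real projects
# would reuse established vocabularies and institutional URIs.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, DC

EX = Namespace("http://example.org/curriculum/")

g = Graph()
course = URIRef("http://example.org/courses/INF101")
g.add((course, RDF.type, EX.Module))
g.add((course, DC.title, Literal("Introduction to Informatics")))
g.add((course, EX.hasLearningOutcome,
       Literal("Evaluate sources of digital information")))

# SPARQL query: list every module and its declared learning outcomes
results = g.query("""
    PREFIX ex: <http://example.org/curriculum/>
    PREFIX dc: <http://purl.org/dc/elements/1.1/>
    SELECT ?title ?outcome WHERE {
        ?m a ex:Module ;
           dc:title ?title ;
           ex:hasLearningOutcome ?outcome .
    }
""")
for title, outcome in results:
    print(title, "->", outcome)
```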

Cooking up networks, community and learning environments

I spent the early part of last week at OpenEdTech 2010 in Barcelona. Organised by Eva de Lera from the UOC, this two-day gathering of 30 international participants was a truly engaging and thought-provoking experience.

Over the years I have seen the cookery book/recipe metaphor used for various purposes. However, OpenEdTech took it a stage further by actually having us take part in a cooking lesson. Not being the best chef in the world, I was slightly apprehensive beforehand. But it turns out cooking is a great way to build a sense of community, break down barriers and allow free, frank and quite often totally unexpected conversations to happen. It’s amazing how much I learned about the Sakai implementation at UC Berkeley whilst chopping onions 🙂

The theme of the conference was “rethinking the online campus life of the 21st century”. We were challenged to come up with 15 recommendations that could be implemented next week to improve online life for students. The conference blog has a great summary of the activities. As ever, being taken out of one’s environment gives a chance to reflect on and share some of the great work being done here in the UK, and it was heartening to see how much interest there was in a range of work being funded by JISC, including DVLE, Curriculum Delivery and Design, and OER. It was also encouraging to see so many people highlighting the need for more open, flexible architectures which support personalisation and the integration of formal and informal networks, content and structures.

There was also a great positive spirit in the group, which in the current climate is increasingly hard to foster. We had many challenging discussions, but they never slipped into the negative “that’ll never work where I’m from” type. Everyone really wanted to share experiences, processes and content with everyone else. So thanks to Eva and the NMC team for organising and facilitating such a great event. A full report will be produced over the coming months, but you can read/see more at the conference website.
