Archive for the 'design' Category

A massive slice of pi

Thursday, March 28th, 2013

On March 14th at 1.59pm (3.14159 in geek speak) we ran Pi Day Live, a free online event ‘rediscovering’ the famous number, hosted by Professor Marcus du Sautoy. During the event participants were invited to use one of our Find Pi methods to derive pi and then upload their result to our website as part of a crowdsourcing experiment. We had around one thousand participants pre-register for the ‘Online Lecture Theatre’ (Blackboard Collaborate, run by JISC Netskills, to whom massive thanks for providing such a professional and friendly service) from 17 different countries. About 800 of these were schools that signed up as classes via their teachers; the pupils ranged from 11 to 18 years old. We also had circa 1500 participants who simply turned up on the day and got involved via our YouTube ‘Big Screen’ (Google Hangouts on Air).

You can watch a recording of the event and run your own Find Pi activity here:

Pi Day Live was a pilot event for an engagement format I designed called Oxford Connect. The thinking behind Oxford Connect is to create a conversational and involving way to engage with ‘concepts, ideas and research’ from the University of Oxford. This is a Public Engagement approach, but it also has potential for Widening Participation and the Impact agenda. The emphasis is very much on the live aspect of the event: what differentiates it from a pre-recorded video, and what would motivate participants to get involved at a particular moment in time? In the case of Pi Day Live we did everything we could to make it worthwhile to engage live. There was the opportunity for discussion, for your questions about pi to be answered and, of course, the Find Pi activities with their associated crowdsourcing. In essence the event had all the technicalities of a live television broadcast coupled with the complexities of an online discussion and social media, with some crowdsourcing thrown in for good measure.

We threw everything at Pi Day Live to see what worked:

1. The live event
I can’t overstate how compelling delivering a live event was. From the moment we received Tweets showing our live feed on screens in classrooms there was a real feeling that everyone participating was involved in something unique. Marcus started by giving a few shout-outs to some of the schools and individuals who had pre-registered. After the event Marcus discovered that he had received many requests for shout-outs from schools via Twitter. I wasn’t expecting Twitter to be such a live channel in this case. Reporting on the changing crowdsourced value of pi was also a compelling aspect of being live.

2. The Find Pi activities
These appeared to be popular and, as far as we can tell, focused people’s minds during the middle part of the event. We currently have circa 300 results and a crowdsourced value of pi of 3.104. Our expectation was that a few hundred people would hit the Oxford Connect site during the event; on the day we got well over 2000, which choked our server and cut down the number of people who could submit results.

Can you see when our server got so busy it couldn't even send data to our logging software? :)


3. The discussion – responding to questions
This aspect of the event went less well, we didn’t receive many questions and the discussions in Blackboard Collaborate were relatively quiet. I think we threw too much at the participants who were happy to watch, Tweet or get on with the Find Pi activities. We also split people’s attention by leaving Marcus on screen commentating on his own Buffon’s Needle experiment during the Find Pi section of the event. I suspect that some people had gone into sit-back-and-watch mode which we need to balance with the interactive elements. We had provided more than one mode of engagement in parallel which isn’t ideal but was a side effect of our piloting approach.
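Marcus’s on-screen Buffon’s Needle experiment can also be simulated in a few lines. This is a minimal Monte Carlo sketch of the estimator, not the code behind our Find Pi activities: drop needles of length l on a floor ruled with parallel lines a distance d apart, count the crossings, and invert the crossing probability 2l/(πd).

```python
import math
import random

def buffon_pi(n_drops, needle_len=1.0, line_gap=1.0):
    """Estimate pi by 'dropping' needles on a floor ruled with parallel lines."""
    assert needle_len <= line_gap, "short-needle formula only"
    hits = 0
    for _ in range(n_drops):
        # Distance from the needle's centre to the nearest line, and the
        # needle's angle to the lines (symmetry lets us sample a quarter range).
        centre = random.uniform(0.0, line_gap / 2)
        angle = random.uniform(0.0, math.pi / 2)
        # The needle crosses a line when its half-projection reaches the line.
        if (needle_len / 2) * math.sin(angle) >= centre:
            hits += 1
    # Short-needle result: P(cross) = 2 * l / (pi * d), so invert for pi.
    return (2 * needle_len * n_drops) / (line_gap * hits)

print(buffon_pi(100_000))  # typically close to 3.14
```

A run of only a few hundred drops gives a noisy estimate, which is consistent with a crowdsourced figure like our 3.104 landing some way off pi.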

My View of Pi Day Live


Reflecting on the event, I think there are probably discursive and activity-focused versions of Oxford Connect events. It’s also clear that Twitter, or something as ‘light touch’ as Twitter, can be enough of a ‘conversational’ channel to sustain live engagement when everyone is also running their own experiments and uploading results. I’m hoping that we can run similar events for other departments here at Oxford in the future. We are also talking about using the live format as an anchor for a quasi-open online version of our department’s face-to-face day schools. Overall I’m pleased with how the pilot ran: we learnt a lot and the technology held up well. We got plenty of positive feedback, and some people were disappointed they couldn’t get to the Oxford Connect webpage as our server tried to keep up. The complexities of going out live were more than outweighed by the buzz and sense of connection that came with it. I’m confident that we can run a more streamlined version of Pi Day Live for other disciplines which is less risky while increasing the level of engagement for those who get involved live. Success in terms of ‘massive’ numbers is a dangerous thing though, especially for live events – we are going to need a bigger server…

The Learning Design Support Environment and Curriculum Design

Tuesday, March 29th, 2011

I am doing a presentation on the LDSE for the JISC curriculum design strand, which is also open to others if they are interested. So if you are, here are the details and how to sign up.

The Learning Design Support Environment (LDSE) Project is working with practising teachers to research, and co-construct, an interactive Learning Design Support Environment (LDSE) to scaffold teachers’ decision-making from basic planning to creative TEL design.  The LDSE captures and represents a user’s learning design (at module and session level), structuring the user input so that it is amenable to analysis (in terms of learning experience and teaching time), and can also be adopted and adapted by others. Key areas the LDSE is investigating include:

  • Forms of representation of learning designs
  • An ontology for learning design
  • Designing at Module and Session levels
  • Importing and adapting an existing design
  • Selecting from existing teaching-learning activities
  • Editing the properties of TLAs
  • Extensive advice and guidance
  • Analysis of teaching costs and learning benefits
  • Sharing specific and generic patterns
  • Exporting a design to an institutional format

This session will provide a tour of the latest version of the LDSE, highlighting the features listed above, and allow time for discussion around the many areas where the interests of the LDSE and the Curriculum Design projects align. In particular:

1. How we model principles in educational design – What important principles do you use to support the learning design process?

2. Guidelines and toolkits for staff – Could the LDSE tools support or work alongside tools being developed by projects?

3. Joining up systems – Can our inputs and outputs work together? How do we join up institution-level business processes with learning-level design?

4. Taking things forward – How can LDSE support and inform the work of the CDD programme? And vice versa?


Further information about the LDSE project:

Recording now available at:

Activity level design and learning design tools

Tuesday, September 8th, 2009

One of the challenges of working in the learning design/pedagogy planning tools area is that most practitioners we encounter don’t want planning tools; they want content creation tools that work seamlessly with their delivery environment. Or they say they want planning tools, but when you clarify their requirements, what they want is really all about content creation.

Liz Masterman and I were discussing representations of activity-level design when we had one of those realisations that make you wonder why you have never seen it before – and suspect that perhaps it was obvious to everyone but you – that at the activity level, design is most often done within the delivery tool. I may plan a face-to-face teaching session in Phoebe or (getting back to basics) Word, but usually I work out the details of the specific activities of a face-to-face training session in PowerPoint, as that is what I use to present it to the students in class. With online courses, again, I am far more likely to start writing straight into the wiki itself when working out how I want a wiki-based activity to work and what instructions I need to give students around it.

I would be interested to hear whether others agree with this. If it is not just me, then for projects such as Cascade and LDSE this has implications for where it is best to situate guidance and support, where planning and support tools have a role to play, and where they are just adding an unnecessary additional tool into the process.

Who needs Flash anyway?

Thursday, August 6th, 2009

Twitter particle systems using HTML5

Also see:  Die IE6.

Curriculum design, guidance and Phoebe

Monday, June 15th, 2009

I recently demonstrated Phoebe to the curriculum design and delivery projects for JISC (if you are one of these projects you can access a recording of the talk here – otherwise there is an older video of me demoing it here). Tim Linsey from Kingston University blogged this, and it is interesting to see that his conclusions about where Phoebe might be most useful very much chimed with our evaluations.

After not having done much with Phoebe for a while, we are seriously looking at how we can use it in our curriculum delivery project, Cascade. More specifically, we are revisiting ways that we can make the Phoebe guidance more usable, useful and sustainable, both for ourselves and as something that could be consumed by other tools or projects, especially in the context of the LDSE project, but also more widely.

So if you think you might be interested in this, do let us know. The more information we can gather about how people might want to use and develop this content the more likely we are to take it in directions that suit us all.

Only connect

Thursday, April 30th, 2009

In the last few months we have been laying the groundwork for the Cascade project, but now that we have our research officer, Bridget Lewis, in place we are really moving forward with our work on this.

What is really apparent at this stage is how interconnected everything is. I appreciate that this is hardly a revelation, but when you are working on very tightly defined deliverables it is really easy to ignore the implications of your choices beyond the boundaries of what you are doing. When a significant focus of your work is looking at the bigger picture, things start getting tangled.

A positive aspect of this is how much we are genuinely taking forward outputs of other projects that we have done over the last few years. Mosaic, Isthmus and Phoebe in particular are proving to be directly relevant, and it is great to feel that we have achieved things with them that can really improve what we are doing now.

In particular:

  • Mosaic – better understanding of OERs, licensing and staff development materials around reuse.
  • Isthmus – what we know about our online students (although Cascade is dealing with a much larger student body than Isthmus did) and the implications of innovating on live courses.
  • Phoebe – the tool itself, as well as what we know from it about course design.

There is also a lot of overlap between Cascade and the LDSE (Learning Design Support Environment) project that we are working on with several London-based partners. Cascade focuses on changes in the here and now, while the LDSE is designing for the future, so they each act as a sanity check on what we are doing in the other project.

What is learning content?

Thursday, March 26th, 2009

One of our key findings from Mosaic is that almost anything can be learning content. Yes, learning objects are great if they exist, but in many subjects they don’t, or if they do, only in quantities sufficient for about 30 minutes of learning. For our Ancestral Voices course we used about 3 items that their creators would have classified as learning objects, but managed to create a 100-study-hour course out of approximately 200 items of pre-existing high-quality content from a variety of sources, including:

  • Academic articles
  • Media articles (BBC etc)
  • Podcasts
  • Fully online courses
  • Online textbooks
  • Assets – Images/diagrams/maps etc
  • Databases (especially archaeological ones)
  • Sites developed by enthusiasts
  • Academic sites (departmental and individual)
  • Academic project sites
  • Museum sites
  • Blogs

These were not in repositories and usually had no special metadata, but they were discoverable through informed browsing and Google searches. While some of these map very closely onto the sort of content used in teaching and learning for decades, whether online or face to face, many do not. However, what is clear is that, if correctly scaffolded by the course, any content can be learning content. Many of the discussions currently underway on developing repositories and standards, or more generally on approaches to sharing OERs in the future, work on the assumption that learning content needs separate consideration, extra metadata and unique locations – something our experience contradicts (see previous posts about this).

Work on discovering, representing and sharing learning designs in particular suggests this is a complex field, and also a very personal one – there is no metadata schema, standard or representation which can encapsulate the particular value of a particular learning design or item of content to all comers. Where the value of these lies is individually derived and context specific (see the Mod4L report for a discussion of this space in relation to learning design in particular). Thus, while improvements to standards and metadata, and the development of specialised repositories, are not in themselves negative, it seems likely that any benefit accrued by these undertakings is outweighed by the barriers to sharing and discoverability imposed by the extra complexity. Note that it has been frequently observed that one of the main barriers to academics sharing is not intent (in theory they are happy to do so) but rather the complexity of the actual practice (they are not sure how to, or where, and don’t have time to consider metadata). Materials openly available on the web are already found and used (legitimately or not) all the time; tapping into these existing locations and networks seems more likely to lead to success than additional infrastructure.

Progression in games, learning

Friday, October 17th, 2008

Game Set Watch has an article about progression in games; mainly on character progression rather than player progression – but still an interesting look at different approaches to the tutorial phase.

Phase 2 plans for our Philosophers

Wednesday, October 1st, 2008

We learnt a few things in the first phase of the Open Habitat project which have informed the set-up of our next pilots. I’m currently planning the pilot that will run with philosophy students in Second Life. The main challenge with the first pilot was the sheer speed of debate in SL. The experienced philosophy students are used to being able to gather their thoughts, write a paragraph or two and pop it into a forum.

Taking the time to reflect is important in any educational process, but it is especially precious to the discipline of philosophy. Having said this, the students loved the vibrant, social feeling of SL and the sense of presence that being embodied in an avatar brought. In fact they liked it so much that they have continued to run non-tutored sessions in SL once a week, managed via a Facebook group. (This included giving the students building rights so that they could rearrange the environment each week to fit the topic under discussion.)

For phase 2 it was clear that we needed to balance the reflective and the dynamic which we are planning to do by ‘bookending’ the SL session with Moodle. Here is a draft of how the pilot will flow:

Stage One (framing the debate):

  1. Marianne (the tutor) to post briefing page on Moodle
  2. students to post kneejerk response in blog
  3. Marianne to respond one to one
  4. students to reconsider in light of Marianne’s comments and prepare second kneejerk
  5. second kneejerk to be posted on Moodle
  6. all students to read, think and prepare third kneejerk for posting on whiteboard in second life
  7. third kneejerk to be sent to Dave for posting in world

Stage two (dynamic in world discussion):

  1. Everyone arrives in Second Life to find the third kneejerk responses on the board
  2. People read these and reflect as everyone arrives
  3. Marianne asks each student in turn to comment
  4. after everyone has responded people go into groups (arranged in advance), go to their ‘stations’ and prepare jointly a ‘final statement’
  5. final statements to be sent to Dave
  6. Marianne reconvenes students and the session ends with a final discussion.

Stage three (reflection):

  1. Marianne to annotate final statements, and add comments
  2. Dave to post final statements and the chat log on Moodle
  3. Students free to discuss final statements and Marianne’s comments by themselves.

It’s not rocket science, but I think this really takes advantage of what SL is good for and is a genuine answer to the ‘user needs’ that came out of phase 1. We will then run this cycle a second time, either continuing the same philosophical theme or starting a new one, depending on how well it runs!

The other significant change to the pilot will be the use of edu-gestures, which should allow for more non-verbal communication whilst the group is deep in discussion. We have a nice set of gestures (agree, confused, yes, no, I’m thinking etc) that the students can use during the sessions, via a ‘lite’ version of the Sloodle toolbar generously created for us by the Sloodle project. I’m planning to introduce these gestures as a key part of the orientation session so that their use is seen as a ‘basic’ skill. In this way I hope we get the benefits of embodiment/presence as well as the benefits of non-verbal communication, which is so important in RL but has not really developed in detail within SL.

It’s odd to think that an environment that renders you as an avatar (face, head, arms, legs etc) does not rely very heavily on non-verbal cues (apart from where you are standing and the biggie: what you look like). I’m hoping that this aspect of Multi-User Virtual Environments will develop as the language of communication (text, voice, visual) within virtual worlds becomes more sophisticated.

Most importantly the pilot has been designed in conjunction with the students who are going to advise on the layout of the in world environment and are enthusiastic about the changes to the format.

Hope for font embedding on the web?..

Tuesday, July 22nd, 2008

Since the dawn of the web (approximately) people have wanted to use specific fonts in their web page designs. Initially they rendered their text as graphics files (bad for download sizes, and often bad for accessibility), then there were competing font-embedding systems (which never really took off due to browser incompatibilities and limited tools), and then the Flash text-replacement tools (which do the job, but are pretty clunky) and SVG fonts (with limited browser support).

None of these really has the appeal of genuinely embedding your fonts in a website, which should allow better designs and font usage, be more efficient for the user, and be easier for the developer than any of the above options.

The Microsoft IEBlog has recently posted about a new effort to get font embedding working. There’s an education effort and – more importantly at this point – it appears that they are opening up their EOT embedding solution in a W3C submission. This is the same system as from 10 years ago, but opening it up will hopefully allow other browser makers to take it up, and other developers to make (decent) tools for creating EOT files.

Håkon Wium Lie advocated a different approach to the problem in August 2007: plain TrueType web fonts, and this has been included in Safari 3.1. It doesn’t have the file size advantages of EOT, but looks like a workable approach for free fonts. Unfortunately it isn’t so good for commercial font creators, as their licensing restrictions on font distribution could be trampled over with this system.

Of course, there is a downside should this actually work out – it’ll be desktop publishing with dozens of fonts per page, all over again. Still, if EOT becomes a freely implementable standard, with decent tools (preferably free software), this will be a win for the web…