Adding subtitles to YouTube videos using CaptionTube

YouTube is a wonderful resource: it works on just about all internet-enabled devices, it hardly ever goes wrong, and it is easy to use. Although there is a lot of low-quality rubbish on there (in my opinion), there is also a huge amount of really useful, high-quality video that we can use in education to enhance our teaching and learning practices.

A feature of YouTube that many don’t know about is the auto-captioning option – in other words, YouTube creates a transcript of the video without you having to do anything. If you are watching a video on the YouTube page and want to see the captions, there is a transcript button below the video (currently to the right of where it says ‘Add to’) – this brings up the transcript as a timeline below the video and automatically advances with it. This can be great for learners with a disability (e.g. learners who are deaf), but it can also be really useful for finding a key point within a video.

For example, I often use short sections of the excellent TED talk in which Ken Robinson argues that schools kill creativity. If I want to locate a certain section within that video, I use the automatic captions that appear below it to find the part that I want.

Because the transcripts are computer generated, they do contain errors – the clarity of the voice and the amount of background noise in the video will determine the accuracy of the transcript. For some reason my voice never does well with automated speech-to-text systems, including YouTube’s.

However, if you do want to override the automatic captions that YouTube creates with your own, this is very easy to do – and for this I use a service called CaptionTube. This is a simple system where you sign in (using a Google Account), locate the video you want to caption (which could be your own or someone else’s), and then play the video, pausing it at intervals to add your captions. If the video is your own, you can add the captions to it there and then; if it isn’t your video, you can send the transcript to the owner to see if they want to upload it.
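If you would rather prepare captions outside CaptionTube, YouTube will also accept a standard subtitle file (such as SubRip, .srt) uploaded through the video’s caption settings. The Python sketch below is purely illustrative – the timings and wording are made up – but it shows how simple the .srt format is to produce.

    # Write a tiny SubRip (.srt) caption file that YouTube can import.
    # The cue timings and wording here are purely illustrative.
    captions = [
        ("00:00:00,000", "00:00:04,000", "Hello, and welcome to this short video."),
        ("00:00:04,000", "00:00:09,500", "Today we are adding captions to a YouTube video."),
    ]

    with open("my_video_captions.srt", "w", encoding="utf-8") as srt:
        for number, (start, end, text) in enumerate(captions, start=1):
            srt.write(f"{number}\n")           # cue number
            srt.write(f"{start} --> {end}\n")  # start --> end timestamps
            srt.write(f"{text}\n\n")           # caption text, then a blank line

Each cue is just a number, a start and end time, and the caption text, so a short video’s worth of captions can even be written by hand in a text editor if needed.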

The following video (by John Skidgel) introduces the basics of CaptionTube.

Here is a video of mine that I captioned using this method. This took me 12 minutes in total from opening the page to my captions appearing on the video on YouTube.

Adding captions to a video is a simple way to increase the accessibility of a resource, as well as potentially increasing the number of people who see your video, as the contents of the captions will be picked up by search engines (if the video is public and listed).


Using a colour combination chart when creating resources

In my early days of teaching, just as I was starting to get my head around the tools that were available to me (PowerPoint), I was faced with a multitude of colours that I could choose as background or font. The problem is that certain colours don’t go well together. Some clashes are obvious – such as dark text on a dark background, or light text on a light background (although I am still surprised how often I see this mistake made) – and others are less obvious, like using green and red, or blue and red.

So I created a very simple tool that helped me when choosing colours, and saved myself time in the process: a grid showing sample text in each of the different combinations of background and font colours. By glancing at this I can see which colour combinations work better than others, without having to keep changing the settings until I get something that works. This grid was stuck to the wall next to my desk.
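If you want to build a similar grid yourself rather than download mine, a rough sketch like the one below will do it. This is not how I made my grid (mine was put together by hand in PowerPoint), and the colour list is just an arbitrary example, but it generates an HTML page showing sample text in every combination of background and font colour so you can judge them at a glance.

    # Generate an HTML grid of background/font colour combinations.
    # Illustrative sketch only; swap the palette for whatever colours you use.
    colours = ["black", "white", "yellow", "orange", "navy", "darkgreen", "red"]

    rows = []
    for background in colours:
        cells = []
        for font in colours:
            cells.append(
                f'<td style="background:{background};color:{font};padding:8px;">'
                "Sample text</td>"
            )
        rows.append("<tr>" + "".join(cells) + "</tr>")

    with open("colour_grid.html", "w", encoding="utf-8") as page:
        page.write("<table border='1' cellspacing='0'>" + "".join(rows) + "</table>")

    # Open colour_grid.html in a browser (or print it) to view the grid.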

I also used this when I had a student with a visual impairment in my class. I took the grid to him and asked him which colour combination he found best – he looked at the grid and quickly said black text on an orange background. So I changed the colour schemes of my presentations for that unit (which, because I had used the Master Slide, was very quick to do). As a result of that (and other simple changes I made to my teaching), in my sessions he didn’t need the note taker that he needed in all the other lessons he attended – which for him was a wonderful experience (as well as saving the College a lot of money).

The grid (which I still use) is available for others to use, and can be found towards the bottom of http://www.a6training.co.uk/resources_powerpoint.php. Although I originally produced it for use with PowerPoint, it works with any technology where you have the option to change colours, and can be a really useful way of increasing the accessibility of learning resources.

The video below introduces this chart.

The accessibility benefits of using audio in teaching and learning

My last two posts on this blog have looked at the use of audio within teaching and learning.

In this post I want to think about some of the accessibility benefits and issues in this area of work. Firstly (and sadly), a lot of people use accessibility as an excuse not to use audio, often stating that they don’t want to use it in case they have a deaf learner in their class.
There is none so blind as he who will not see...

This is a commonly misunderstood area – because (although it may sound counter-intuitive) many deaf learners actually find audio content really useful. Agreed, if someone has no hearing at all then audio is of no use to them, but many people who are identified as deaf have some hearing, and many of them will have hearing aids that amplify sound so that they can hear something. For these learners, accessing a lecture is often difficult, as the hearing aids pick up all the noise in the room – the lecturer’s voice, mutterings from other learners, the hum of the strip lighting, the squeaking of pens and so on – making it very hard to hear. The use of induction loops has helped this issue to a certain extent, but it still isn’t ideal, as you hear the tutor’s contribution but not necessarily what other learners say.

If, however, an audio file has been produced as part of the process summarising the key learning points (and remember you can always get the learners to do this – see my post on ipadio), a deaf learner can listen to it in a quiet environment (e.g. their bedroom), altering the volume so that they can hear it through an ordinary audio player, and pick up some extremely valuable information that would otherwise have been missed. With this in mind, when I record audio I amplify the recording slightly (which is very easy to do with tools like Audacity) – it is always possible to turn the volume down, but it isn’t possible to turn it up above its maximum.
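As an aside, if you would rather script that small volume boost than open Audacity each time, something like the sketch below works. It uses the pydub library (my choice for the example, not something mentioned above; it also needs ffmpeg installed) and assumes a hypothetical file called recording.mp3.

    # Boost the volume of a recording by a few decibels before publishing it.
    # Sketch using pydub (pip install pydub); ffmpeg must also be installed.
    from pydub import AudioSegment

    audio = AudioSegment.from_file("recording.mp3")   # hypothetical input file
    louder = audio + 3                                # increase gain by 3 dB

    louder.export("recording_louder.mp3", format="mp3")

Keep the boost modest – a large gain increase risks clipping the recording.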

Another question that I am often asked is “Do we need to provide a transcript for all the audio recordings we make?”. This question is another barrier (excuse) that stops people using audio, as the time needed to create such a transcript puts them off.

Personally, my interpretation of the law (and I am not a lawyer – so do not take this as legal advice) is as follows:

The law (Equality Act 2010) states that

  1. We have to make reasonable adjustments.
  2. We have an anticipatory duty, to be prepared for a variety of disabilities and associated possible adaptations.

So – what does that mean? Part 1 is simple – yes, if we are using audio and we have a learner who states that they need a transcript, then we need to provide it.

Part 2 is more tricky. Many people interpret this as meaning we should produce the transcripts upfront, but this doesn’t make sense to me: if we followed that logic through, it would mean that for every session someone delivers they would have to turn up with their resources printed on a variety of colours of paper, in different fonts and font sizes, and in audio format, Braille format, videoed as British Sign Language, and so on. To me that wouldn’t be a reasonable expectation.

So I don’t automatically provide transcripts. However, what I have done with a few organisations that I have worked with is ensure that there is a mechanism in place to create such transcripts very quickly if required (e.g. if a student asks for one) – this, I think, is a reasonable adjustment. Many people ask me if there is a technological solution to converting audio to text, and sadly there isn’t a fully reliable one, so my solution would be to employ a very fast-typing temp who could listen to the audio files and transcribe them. The problem here is that many organisations would then argue over whose budget would pay for this temp (e.g. is it central, is it the teaching team(s), is it learner support). Personally I think it should be a central budget, as the key here is getting it done quickly, and the organisation should have a procedure in place to do this if a student requests it. I am aware that most organisations don’t, which is a shame, as it would be a very simple mechanism to put in place.

So in summary – we should not avoid using audio on accessibility grounds; instead we should embrace it. If we can make sure that audio recordings are clear and recorded at above-average volume, then great. If we can produce transcripts up front then brilliant, but if not, having a mechanism in place to create them on request should suffice.

‘Flipping eL’ – ‘The Flipped Classroom’ – part 4 – increasing accessibility

This is the 4th blog post in a series on the topic of the ‘Flipped Classroom’. In the previous post I identified the possible benefits that this model could bring in terms of widening participation, with one area being possible benefits to disabled students, which I want to unpick further here.

Not all disabled learners are going to prefer this model, but there will be many who may. The commonest disability that we are likely to encounter within education is dyslexia, and if I broaden this out there are three categories:

  1. Those that are diagnosed.
  2. Those that aren’t diagnosed yet.
  3. Those who have dyslexic tendencies without being officially recognised.

Organisations will know how many of the first group they have, but we have no idea how many there will be in the other two groups. The advantage of the flipped classroom is that it may help these learners without them having to be given something different from the rest of the class (with the associated stigma).

So – how may this help? Education discriminates against dyslexic learners because it relies so heavily on language – the very thing that dyslexics find most challenging. Many dyslexics are of above-average intelligence, and they develop coping strategies to overcome this discrimination. For many, the coping strategy is based on ‘getting through the lesson’: they have to spend a lot of their effort just making sense of various forms of information and the way it is presented, rather than trying to develop a deep understanding of the topic being taught. In a flipped classroom scenario, the learner can use tools like text to speech, coloured overlays, onscreen reading bars and so on to help them access the information (see the sketch below for one example). If any content is video or audio, the learner can pause, rewind and so on, giving them the extra time needed to make meaningful notes, rather than having to work at the pace the tutor has set.
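As a concrete illustration of the text-to-speech tools mentioned above (and only as an illustration – learners are more likely to use a screen reader or a browser add-on than to write code), the sketch below uses the pyttsx3 Python library to read a piece of handout text aloud at a slightly slower rate; the handout text itself is invented.

    # Read a piece of handout text aloud using offline text-to-speech.
    # Sketch using pyttsx3 (pip install pyttsx3); the handout text is made up.
    import pyttsx3

    handout_text = (
        "Before the seminar, read chapter three and note down "
        "two questions you would like to discuss."
    )

    engine = pyttsx3.init()
    rate = engine.getProperty("rate")      # default speaking rate (words per minute)
    engine.setProperty("rate", rate - 40)  # slow it down a little for note-taking

    engine.say(handout_text)
    engine.runAndWait()                    # blocks until the speech has finished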

If a learner accesses the information up front, they can then come to the seminar session prepared to enter into high level discussion with their peers.

In my first post in the series I mentioned how a visually impaired learner may benefit, and the reasons are the same – they take control of their support needs and adaptations, they work at the pace that is best for them, and in an environment that they find most conducive to learning. Similarly, if a learner has a hearing impairment, they can study in an environment (probably one without background noise) that is best suited to them. One interesting issue here is that many educators think audio (including video) is not going to be good for someone who is deaf. If someone has no hearing then this would be true, but there are many deaf people who have some hearing and use hearing aids to amplify sounds. For these people, listening to a lecturer speaking in a lecture theatre – with the sound of the air conditioning, the projector, students coughing, the squeaking of pens taking notes and so on – is very hard, even if an induction loop is used. However, if the same content is provided as audio or video, they can listen to it in their own environment, using technology to amplify the sound to the best level for them, and with the ability to pause and rewind the bits they didn’t fully hear – which is very useful.

Of course there will be many disabled learners who won’t prefer the ‘flipped classroom’ ideology – but the key here is the notion of choice that I identified in my second post in the series. The learners who prefer the flipped route sign up for courses running it; those who don’t sign up for an alternative course. The challenge here will be for the people advising the students to have the confidence and knowledge to advise them correctly, without introducing a prejudice that ‘because they are disabled the flipped classroom won’t work for them’.

If some organisations do get the flipped principle working, and the right students choose those courses, we have the potential for some disabled learners to be able to study without having to declare their disability (and the associated stigma) and that to me is a very powerful and exciting position.

The image below shows a student using technology to invert the colours to high contrast and increase the text size.

Image: http://www.flickr.com/66606673@N00/3050414114/ (original upload 2008-10-20), retrieved 2012-05-14 from http://farm5.static.flickr.com/3073/3050414114_f2ae8bb229_b.jpg. By cobalt123, licensed CC BY-NC-SA 2.0.

m-learning – ‘the great accessibility enabler’

Earlier this year, I was honoured to be asked to give a keynote presentation at an m-learning event organised by JISC RSC-Eastern. Earlier in the day James Clay had opened proceedings with his keynote, and I closed with mine – and although I may be biased, I think that made an excellent combination of presenters.

James videoed the session and has uploaded it to his blog, which is great for me, as I can use it to reflect on my own presentation technique. For example, because the day was overrunning slightly, I started a little late and was concerned that I went a bit too quickly to compensate – however, watching the video I realise that the pace was OK. I have also reduced the number of uhms in my presentation style (which used to be prevalent in my earlier days), and although I am constantly moving around, I don’t fidget as much as I used to. I haven’t seen a video of myself presenting for over two years now, so this has been a really useful exercise.

Voice to text for free using SpinVox

There are various options for software that will convert speech to text, the best known being Dragon, produced by Nuance, and I am often asked if there are any free alternatives – which, until now, I have not been able to suggest.

For the last few weeks I have been using ping.fm as a conduit that allows me to update the social networking sites Twitter, Jaiku and Facebook at the same time. It has a feature where I can use SpinVox with it and update these sites by talking to my phone – what I say gets converted to text.

This made me think about whether it could be used to let a learner speak into their phone and have what they say converted to text for them – and by using SpinVox directly, this is possible. You go onto the site, create an account, and then have various options, including the ‘memo’ feature, which converts what you say into an email, or the blog feature, which sends what you say to a blog.

Now I think this is rather good and could have various educational uses. For a learner who has difficulty with their writing or spelling and doesn’t have access to some of the more sophisticated software, this can be used to overcome the problem. Or an ESOL student could talk to their phone in English and then see how well it understands what they are saying, to practise their pronunciation.

The voice recognition of SpinVox is not as good as Dragon’s, so this isn’t a replacement for it, but the potential for learners to use their phones for this sort of activity is amazing.

Accessible Apps – the best thing this year!


Image: Accessible Apps, originally uploaded by Dave Foord.

A few weeks ago something called Accessible Apps was launched. This is a collection of free tools that you carry on a pen drive, so when you plug the pen drive into a computer all of the tools are there to be used, without any problems with profiles, downloading software, permissions etc. This is brilliant for learners with disabilities, but could be used by any learner – for example, it includes Audacity, a program for creating sound recordings.

the only cost associated is the buying of the pen drives, which nowadays are very cheap, and if you buy in bulk can even be stamped with the college, university or school logo, so I think it is viable to give every learner one of these pen drives when they start a course. To download your own version of accessible apps, you will need a 2G memory pen (for the full package) and then go to http://www.rsc-ne-scotland.ac.uk/accessapps/