Adding subtitles to YouTube videos using CaptionTube

YouTube is a wonderful resource: it works on just about all internet-enabled devices, it hardly ever goes wrong, and it is easy to use. Although there is a lot of low-quality rubbish on there (in my opinion), there is also a huge amount of really useful, high-quality video that we can use in education to enhance our teaching and learning practices.

A feature of YouTube that many don’t know about is the auto-captioning option – in other words, YouTube creates a transcript of the video without you having to do anything. If you are watching a video on the YouTube page and want to see the captions, there is a transcript button below the video (currently to the right of where it says ‘Add to’). This brings up the transcript as a timeline below the video and automatically advances with it. This can be great for learners with a disability (e.g. deaf learners), but it can also be really useful for finding a key point within a video.

For example, I often use short sections of the excellent TED talk of Ken Robinson talking about schools killing creativity. If I want to locate a certain section within that video, I use the automatic captions that appear below it to find the section that I want.

Because the transcripts are computer generated, they do contain errors – the clarity of the voice and the amount of background noise in the video will determine the accuracy of the transcript. For some reason my voice never does well with automated speech-to-text systems, including YouTube’s.

However, if you do want to override the automatic captions that YouTube creates with your own, this is very easy to do – and for this I use a service called CaptionTube. This is a simple system where you sign in (using a Google Account), locate the video you want to caption (which could be your own or someone else’s), and then play the video, pausing it at intervals to add your captions. If the video is your own, you can add the captions to it there and then; if it isn’t your video, you can send the transcript to the owner to see if they want to upload it.

The following video (by John Skidgel) introduces the basics of CaptionTube.

Here is a video of mine that I captioned using this method. It took me 12 minutes in total from opening the page to my captions appearing on the video on YouTube.

Adding captions to a video is a simple way to increase the accessibility of a resource, as well as potentially increasing the number of people who see your video, as the contents of the captions will be picked up by search engines (if the video is set to public and listed).
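Under the hood, captions on YouTube are just timed text files; tools like CaptionTube produce one of the supported formats, such as SubRip (.srt). As a minimal sketch of what such a file looks like (the caption text, timings, and function names here are my own invented examples, not CaptionTube’s output):

```python
# Sketch: building a minimal SubRip (.srt) caption file, one of the
# timed-text formats YouTube accepts for uploaded captions.
# The captions and timings below are invented examples.

def srt_timestamp(seconds):
    """Format a time in seconds as an SRT timestamp (HH:MM:SS,mmm)."""
    ms = int(round(seconds * 1000))
    h, rem = divmod(ms, 3600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def make_srt(captions):
    """Build the text of an .srt file from (start, end, text) tuples."""
    blocks = []
    for i, (start, end, text) in enumerate(captions, start=1):
        blocks.append(
            f"{i}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n{text}"
        )
    return "\n\n".join(blocks) + "\n"

captions = [
    (0.0, 3.5, "Hello, and welcome to this short video."),
    (3.5, 7.0, "Today we are looking at adding captions."),
]
print(make_srt(captions), end="")
```

Each caption is a numbered block with a start and end time; services like CaptionTube simply fill these in as you pause the video.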

The accessibility benefits of using audio in teaching and learning

My last two posts on this blog have looked at the use of audio within teaching and learning.

In this post I want to think about some of the accessibility benefits and issues for this area of work. Firstly (and sadly), a lot of people use accessibility as an excuse not to use audio more, often stating that they don’t want to use audio in case they have a deaf learner in their class.
There is none so blind as he who will not see...

This is a commonly misunderstood area, as (although it may sound silly) many deaf learners actually find audio content really useful. Agreed, if someone has no hearing then audio is of no use to them, but many people who are identified as deaf have some hearing, and many of them will have hearing aids that amplify sound so that they can hear something. For these learners, accessing a lecture is often difficult, as the hearing aids pick up all the noise in the room – the lecturer’s voice, mutterings from other learners, the hum of the strip lighting, the squeaking of pens and so on – making it very hard to hear. The use of induction loops has helped with this to a certain extent, but it still isn’t ideal, as you hear the tutor’s contribution but not necessarily what other learners say.

If, however, an audio file has been produced as part of the process summarising the key learning points (and remember you can always get the learners to do this – see my post on ipadio), a deaf learner can listen to it in a quiet environment (e.g. their bedroom), altering the volume so that they can hear it through an ordinary audio player, and pick up some extremely valuable information that would otherwise have been missed. With this in mind, when I record audio I amplify the recording slightly (which is very easy to do with tools like Audacity), as it is always possible to turn the volume down, but it isn’t possible to turn the volume up above its maximum.
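The amplification step is conceptually very simple: each audio sample is multiplied by a gain factor and clamped so it cannot overflow the valid range. A minimal sketch in Python (the gain and sample values are my own invented examples, and this assumes 16-bit PCM audio – tools like Audacity do the same job with a friendlier interface):

```python
# Sketch: amplifying 16-bit PCM audio samples by a gain factor,
# clipping to the legal range rather than overflowing.
# Gain and sample values are invented examples.

def amplify(samples, gain):
    """Scale 16-bit PCM samples by `gain`, clamping to [-32768, 32767]."""
    lo, hi = -32768, 32767
    return [max(lo, min(hi, int(s * gain))) for s in samples]

quiet = [1000, -2000, 15000, -30000]   # a few 16-bit samples
louder = amplify(quiet, 1.5)
# The last sample would be -45000, so it clips at -32768 instead.
print(louder)
```

Clipping is exactly why only a *slight* boost is advisable: push the gain too high and loud passages flatten against the limits, which distorts the audio.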

Another question that I am often asked is “Do we need to provide a transcript for all the audio recordings we make?”. This question is another barrier (excuse) that stops people using audio, as the time needed to create such transcripts puts them off.

Personally, my interpretation of the law (and I am not a lawyer – so do not take this as legal advice) is as follows:

The law (the Equality Act 2010) states that:

  1. We have to make reasonable adjustments.
  2. We have an anticipatory duty to be prepared for a variety of disabilities and associated possible adaptations.

So – what does that mean? Part 1 is simple: yes, if we are using audio and we have a learner who states that they need a transcript, then we need to provide it.

Part 2 is more tricky. Many people interpret this as meaning we should produce the transcripts upfront, but this doesn’t make sense to me: if we followed that logic through, it would mean that for every session someone delivers, they would have to turn up with their resources printed on a variety of colours of paper, in different font sizes and fonts, and in audio format, Braille format, videoed as British Sign Language, etc. This to me wouldn’t be a reasonable expectation.

So I don’t automatically provide transcripts. However, what I have done with a few organisations that I have worked with is ensure that there is a mechanism in place to create such transcripts very quickly if required (e.g. if a student asks for one) – this, I think, is a reasonable adjustment. Many people ask me if there is a technological solution to converting audio to text, and sadly there isn’t a fully reliable one, so my solution would be to employ a very fast-typing temp who could listen to the audio files and transcribe them. The problem here is that many organisations would then argue over whose budget should pay for this temp (e.g. is it central, is it the teaching team(s), is it learner support). Personally I think it should be a central budget, as the key here is getting it done quickly, and the organisation should have a procedure in place to do this if a student requests it. I am aware that most organisations don’t, which is a shame, as it would be a very simple mechanism to put in place.

So in summary: we should not avoid using audio on accessibility grounds; instead we should embrace it. If we can make sure that audio recordings are clear and recorded at above-average volume, then great. If we do produce transcripts up front then brilliant, but if that isn’t possible, then having a mechanism in place to create them on request should suffice.

‘Flipping eL’ – ‘The Flipped Classroom’ – part 4 – increasing accessibility

This is the 4th blog post in a series on the topic of the ‘Flipped Classroom’. In the previous post I identified the possible benefits that this model could bring in terms of widening participation, one area being possible benefits to disabled students, which I want to unpick further here.

Not all disabled learners are going to prefer this model, but there are many who may. The commonest disability that we are likely to encounter within education is dyslexia, and if I broaden this out, there are three categories:

  1. Those that are diagnosed.
  2. Those that aren’t diagnosed yet.
  3. Those who have dyslexic tendencies without being officially recognised.

Organisations will know how many learners they have in the first group, but we have no idea how many there will be in the other two. The advantage of the flipped classroom is that it may help these learners without them having to do something different from the rest of the class (with the associated stigma).

So – how might this help? Education discriminates against dyslexic learners because of its over-reliance on language – the very thing that dyslexics find most challenging. Many dyslexics are of above-average intelligence, and they develop coping strategies to overcome this discrimination. For many, the coping strategy is based on ‘getting through the lesson’: they have to spend a lot of their effort just making sense of the various forms of information and the way it is presented, rather than developing a deep understanding of the topic being taught. In a flipped classroom scenario, the learner can use tools like text-to-speech, coloured overlays, on-screen reading bars, etc. to help them access the information. If any content is video or audio, the learner can pause, rewind, etc., giving them the extra time needed to make meaningful notes, rather than having to work at the pace the tutor has set.

If a learner accesses the information up front, they can then come to the seminar session prepared to enter into high level discussion with their peers.

In my first post in the series, I mentioned how a visually impaired learner may benefit, and the reasons are the same – they take control of their support needs and adaptations, work at the pace that is best for them, and in an environment that they find most conducive to learning. Similarly, if a learner has a hearing impairment, they can study in an environment (probably one without background noise) that best suits them. One interesting issue here is that many educators think that audio (including video) is not going to be good for someone who is deaf. If someone has no hearing then this is true, but many people who are deaf have some hearing and use hearing aids to amplify sounds. For these people, listening to a lecturer speaking in a lecture theatre – with the sound of the air conditioning, the projector, students coughing, the squeaking of pens taking notes, etc. – is very hard, even if an induction loop is used. However, if the same content is provided as audio or video, they can listen to it in their own environment, using technology to amplify the sound to the level that is best for them, again with the ability to pause and rewind the bits they didn’t fully hear.

Of course there will be many disabled learners who won’t prefer the ‘flipped classroom’ ideology – but the key here is the notion of choice that I identified in my second post in the series. The learners who prefer the flipped route sign up for courses running it; those who don’t sign up for an alternative course. The challenge will be for the people advising students to have the confidence and knowledge to advise them correctly, without introducing a prejudice that ‘because they are disabled the flipped classroom won’t work for them’.

If some organisations do get the flipped principle working, and the right students choose those courses, we have the potential for some disabled learners to be able to study without having to declare their disability (and face the associated stigma) – and that to me is a very powerful and exciting position.

The image below shows a student using technology to invert the colours to high contrast and increase the text size.

Image taken from http://farm5.static.flickr.com/3073/3050414114_f2ae8bb229_b.jpg on 2012-5-14
Original URL: http://www.flickr.com/66606673@N00/3050414114/ (created 2008-10-20 10:43:05)
By cobalt123, licensed CC BY-NC-SA 2.0