Re: [whatwg] accessibility management for timed media elements, proposal

2007-06-11 Thread Dave Singer

At 0:02  -0400 10/06/07, Brian Campbell wrote:

On Jun 9, 2007, at 5:26 PM, Dave Singer wrote:

I have to confess I saw the BBC story about sign-language soon 
after sending this round internally.  But I need to do some study 
on the naming of sign languages and whether they have ISO codes.  
Is it true that if I say that the human language is ISO 639-2 code 
XXX, and that it's signed, there is only one choice for what the 
sign language is (I don't think so -- isn't american sign language 
different from british)?  Alternatively, are there ISO or IETF 
codes for sign languages themselves?


Almost no sign languages are related to the spoken language in the 
same region any more than any two spoken languages are related to 
each other.


OK, but are they often, sometimes, or never geographically 
co-located?  i.e. if there are N spoken languages in the world and M 
sign languages, do we really have MxN possibilities of what a given 
sign-language-capable person can do?


Sign languages are full-fledged languages in their own right, not 
signed transliterations of spoken language (though they do 
frequently have an alphabet system for signing words and names from 
spoken languages). So, American Sign Language is not actually 
related to English any more than other languages spoken in America 
are (like Cherokee or Spanish).


The situation with the ISO 639-2 codes is unfortunate, because there 
is only a single code for all sign languages, sgn. It appears that 
the solution is to add extensions specifying the actual language, 
such as sgn-US or sgn-UK. There's more information available here: 



That is truly unfortunate;  however, I guess it's not in scope for 
HTML to solve.


At 12:05  +0100 10/06/07, Benjamin Hawkes-Lewis wrote:

The proposal does not describe how conflicts such as the following
 would be resolved:

User specifies:

captions: want
high-contrast-video: want

Author codes:

   


There is no suitable source here;  it's best to have something (late)
in the list which is less restrictive.


But if UAs can apply accessibility preferences to a catch-all 
listed last, then what's the advantage of creating multiple 
elements in the first place?


There are two common cases to consider:
a) the accessibility option is 'burned in' to the media (e.g. burned 
in captions);  you then need to select the right one
b) the media is adaptable (e.g. tracks that can be enabled in a QT 
movie);  you then need to select it and adapt it.


It may be that there isn't an accessible version of the movie 
suitable for your accessibility needs (it does happen sometimes -:(). 
It might be prudent to author the page so that such users get to see 
something.



Current container formats can
include captions and audio descriptions. So is the problem we're trying
to solve that container formats don't contain provision for alternate
visual versions (high contrast and not high contrast)? Or are we 
trying to cut down on bandwidth wastage by providing videos 
containing only the information the end-user wants?


Both, I think.




a) I should think sign-language interpretation needs to be in
there.

sign-interpretation: want | dont-want | either (default: want)

Unless we want to treat sign interpretation as a special form of 
subtitling. How is subtitling in various languages to be handled?


I think we assume that a language attribute can also be specified, as
 today.


The lang attribute specifies "the primary language for the element's 
contents and for any of the element's attributes that contain text", 
not the referenced resource. hreflang "gives the language of the 
linked resource" as a single "valid RFC 3066 language code." So we'd 
need a new attribute or to change the content model of hreflang to 
explicitly specify the separate multiple languages of a resource.


http://www.whatwg.org/specs/web-apps/current-work/multipage/section-global.html#the-lang

http://www.whatwg.org/specs/web-apps/current-work/multipage/section-links.html#hreflang3

I note in passing that these attributes should be updated to use RFC 
4646 not RFC 3066 as per:


http://www.w3.org/TR/i18n-html-tech-lang/#ri20030112.224623362


This could quickly become unworkable, and I feel that perhaps it 
would be best to drop into a composition language such as SMIL, where 
one can then explicitly ask for a 'par' of two selections, one 
selecting the audio (by spoken language) and one the signing (by sign 
language).






Re: [whatwg] accessibility management for timed media elements, proposal

2007-06-11 Thread Gervase Markham

Benjamin Hawkes-Lewis wrote:
I honestly don't think the property values are well-named. "either" is 
confusing and vague; "dont-want" is a misspelled colloquialism. How 
about one of the following possibilities:


captions: wanted
captions: unwanted
captions: no-preference


What happened to "yes", "no", "default"/"inherit"?

"Wanted" seems to indicate a preference - but all stylesheets indicate a
preference, because they can be overridden. So there's no need to
include that principle again in the names.

Unless we want to treat sign interpretation as a special form of 
subtitling. How is subtitling in various languages to be handled?


That seems to make sense to me, apart from the point that perhaps it
would be implemented differently, as it's not text-based.

Gerv



Re: [whatwg] accessibility management for timed media elements, proposal

2007-06-10 Thread Benjamin Hawkes-Lewis

I wrote:

The crudest way of doing this would be to provide transcriptions of 
audio descriptions to supplement the captions. I believe one can do that 
with SMIL; I don't know what the situation with other container formats 
or player UIs is however.


I just ran across the Open and Closed Project's proposal for a XEX 
format including "audio description scripts", which is precisely the 
sort of inclusion I mentioned:


http://openandclosed.org/docs/AccessibilityExchange.html

--
Benjamin Hawkes-Lewis


Re: [whatwg] accessibility management for timed media elements, proposal

2007-06-10 Thread Benjamin Hawkes-Lewis

Dave Singer wrote:


At 16:35  +0100 9/06/07, Benjamin Hawkes-Lewis wrote:


[snip]


The proposal does not describe how conflicts such as the following
 would be resolved:

User specifies:

captions: want
high-contrast-video: want

Author codes:

   


There is no suitable source here;  it's best to have something (late)
in the list which is less restrictive.


But if UAs can apply accessibility preferences to a catch-all 
listed last, then what's the advantage of creating multiple 
elements in the first place? Current container formats can
include captions and audio descriptions. So is the problem we're trying
to solve that container formats don't contain provision for alternate
visual versions (high contrast and not high contrast)? Or are we trying 
to cut down on bandwidth wastage by providing videos containing only the 
information the end-user wants?



a) I should think sign-language interpretation needs to be in
there.

sign-interpretation: want | dont-want | either (default: want)

Unless we want to treat sign interpretation as a special form of 
subtitling. How is subtitling in various languages to be handled?


I think we assume that a language attribute can also be specified, as
 today.


The lang attribute specifies "the primary language for the element's 
contents and for any of the element's attributes that contain text", not 
the referenced resource. hreflang "gives the language of the linked 
resource" as a single "valid RFC 3066 language code." So we'd need a new 
attribute or to change the content model of hreflang to explicitly 
specify the separate multiple languages of a resource.


http://www.whatwg.org/specs/web-apps/current-work/multipage/section-global.html#the-lang

http://www.whatwg.org/specs/web-apps/current-work/multipage/section-links.html#hreflang3

I note in passing that these attributes should be updated to use RFC 
4646 not RFC 3066 as per:


http://www.w3.org/TR/i18n-html-tech-lang/#ri20030112.224623362


I have to confess I saw the BBC story about sign-language soon after
sending this round internally.  But I need to do some study on the 
naming of sign languages and whether they have ISO codes.  Is it true

that if I say that the human language is ISO 639-2 code XXX, and
that it's signed, there is only one choice for what the sign language
is (I don't think so -- isn't american sign language different from
british)? Alternatively, are there ISO or IETF codes for sign
languages themselves?


Brian Campbell has eloquently answered some of these questions.

The reason I was thinking of using a CSS property was that signed 
interpretation is not the same as signing featured in the original 
video. But it's true that information about what sign languages are 
available is important, so a CSS property alone wouldn't solve the 
problem. Maybe we need new attributes to crack this nut:


dubbinglangs="fr" subtitlelangs="de,it" 
signedinterpretationlangs="sgn-en,sgn-fr,sgn-de,sgn-it" ...>


This would indicate that the main video content features people talking 
in English and people signing in English; the video is captioned in 
English, French, German, Italian, and their SignWriting analogues 
(American Sign Language in the case of English), dubbed in French, 
subtitled in German and Italian, and provided with signed interpretation 
in American, French, German and Italian Sign Languages.
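Putting that description together with the surviving attributes, the stripped markup presumably resembled something like the following sketch. Only dubbinglangs, subtitlelangs, and signedinterpretationlangs survive in the message; the element name, src value, and the mainaudiolangs, mainsignlangs, and captionlangs attribute names are invented here to match the prose:

```html
<!-- Hypothetical reconstruction of Benjamin's "sledgehammer" markup.
     Attribute names other than the three quoted in the message are guesses. -->
<video src="film.mov"
       mainaudiolangs="en"
       mainsignlangs="sgn-en"
       captionlangs="en,fr,de,it,sgn-en-sgnw,sgn-fr-sgnw,sgn-de-sgnw,sgn-it-sgnw"
       dubbinglangs="fr"
       subtitlelangs="de,it"
       signedinterpretationlangs="sgn-en,sgn-fr,sgn-de,sgn-it">
</video>
```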


Granted it's a sledgehammer, but it does provide the fine-grained 
linguistic information we need. It would also seemingly remove the need 
for putting a caption media query on . While this markup looks 
complicated, most videos currently on the web could be marked up like:




as all they provide is a single-language spoken track.
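The stripped single-language example might have been as simple as the following (the mainaudiolangs attribute name is a guess; only the concept survives in the message):

```html
<!-- Sketch: a typical current web video, a single English spoken track. -->
<video src="clip.mov" mainaudiolangs="en"></video>
```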



I should add a little note about "sgn-en-sgnw". The IANA language tag 
registry includes the following entry:



Type: script
Subtag: Sgnw
Description: SignWriting
Added: 2006-10-17


http://www.iana.org/assignments/language-subtag-registry

One might want to omit the sgnw subtag on the basis that other sign 
language transliterations are academic not everyday (just as one omits 
the latn subtag for en, fr, and so on). However, those who work on such 
things have yet to come up with an entirely settled formulation. See 
this thread on the IETF languages mailing list:


http://www.alvestrand.no/pipermail/ietf-languages/2006-October/005126.html

Meanwhile, people are already creating SignWriting captions:

http://www.webcitation.org/5PUMLS0mp




b) Would full descriptive transcriptions (e.g. for the deafblind)
fit into this media feature-based scheme or not?

transcription: want | dont-want | either (default: either)


how are these presented to a deafblind user?


Depends. I think the ideal would be to have transcriptions inside a 
container format, so that /everyone/ could access them and so that 
deafblind people who still have some sight can see some of the video. The 
transcriptions could be dispatched to a braille display. And, yeah, with 
my sledgehammer 

Re: [whatwg] accessibility management for timed media elements, proposal

2007-06-09 Thread Brian Campbell

On Jun 9, 2007, at 5:26 PM, Dave Singer wrote:

I have to confess I saw the BBC story about sign-language soon  
after sending this round internally.  But I need to do some study  
on the naming of sign languages and whether they have ISO codes.   
Is it true that if I say that the human language is ISO 639-2 code  
XXX, and that it's signed, there is only one choice for what the  
sign language is (I don't think so -- isn't american sign language  
different from british)?  Alternatively, are there ISO or IETF  
codes for sign languages themselves?


Almost no sign languages are related to the spoken language in the  
same region any more than any two spoken languages are related to  
each other. Sign languages are full-fledged languages in their own  
right, not signed transliterations of spoken language (though they do  
frequently have an alphabet system for signing words and names from  
spoken languages). So, American Sign Language is not actually related  
to English any more than other languages spoken in America are (like  
Cherokee or Spanish).


The situation with the ISO 639-2 codes is unfortunate, because there  
is only a single code for all sign languages, sgn. It appears that  
the solution is to add extensions specifying the actual language,  
such as sgn-US or sgn-UK. There's more information available here:  



Re: [whatwg] accessibility management for timed media elements, proposal

2007-06-09 Thread Dave Singer

At 16:35  +0100 9/06/07, Benjamin Hawkes-Lewis wrote:

Dave Singer wrote:
we promised to get back to the whatwg with a proposal for a way to 
handle accessibility for timed media, and here it is.  sorry it 
took a while...


Three cheers for Apple for trying to tackle some of the 
accessibility issues around video content! :)


Many thanks for all your helpful comments!

Without trying to assess whether CSS media queries are the best 
approach generally, here are three particular issues I wanted to raise:


1. Property values

I honestly don't think the property values are well-named. "either" 
is confusing and vague; "dont-want" is a misspelled colloquialism.




We struggled with this also;  suggestions are welcome.


How about one of the following possibilities:

captions: wanted
captions: unwanted
captions: no-preference

(This seems more natural to me than the original proposal.)

/or/

captions: prefer
captions: prefer-not
captions: no-preference

(Has the consistency of using the same word as the basis for each 
value. OTOH "prefer-not" and "no-preference" may be confusing if 
your English isn't that good.)


/or/

captions: desired
captions: undesired
captions: no-preference

("desire" has the minor advantages of being in Ogden's basic English 
word list and being common to Romance languages thanks to a Latin 
root. OTOH it's slightly longer.)


nice (in my personal opinion).



2. Conflict resolution

The proposal does not describe how conflicts such as the following 
would be resolved:


User specifies:

captions: want
high-contrast-video: want

Author codes:


  
  




There is no suitable source here;  it's best to have something (late) 
in the list which is less restrictive.
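The pattern Dave describes can be sketched as follows; the file names and exact queries here are hypothetical, following the `media` attribute syntax from the Apple proposal:

```html
<video>
  <source src="captioned-high-contrast.mov"
          media="all and (captions: want) and (high-contrast-video: want)">
  <source src="captioned.mov" media="all and (captions: want)">
  <!-- Catch-all with no media attribute: always a candidate, so users
       whose preferences match no adapted source still get something. -->
  <source src="plain.mov">
</video>
```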


Because style rules cascade, this sort of conflict doesn't matter 
when media queries are applied to styles. But you can only view one 
video source.


3. (Even more) special requirements

The suggested list of media features is (self-confessedly) not 
exhaustive. Here's some things that seem to be missing:


a) I should think sign-language interpretation needs to be in there.

sign-interpretation: want | dont-want | either (default: want)

Unless we want to treat sign interpretation as a special form of 
subtitling. How is subtitling in various languages to be handled?


I think we assume that a language attribute can also be specified, as today.

I have to confess I saw the BBC story about sign-language soon after 
sending this round internally.  But I need to do some study on the 
naming of sign languages and whether they have ISO codes.  Is it true 
that if I say that the human language is ISO 639-2 code XXX, and that 
it's signed, there is only one choice for what the sign language is 
(I don't think so -- isn't american sign language different from 
british)?  Alternatively, are there ISO or IETF codes for sign 
languages themselves?




b) Would full descriptive transcriptions (e.g. for the deafblind) 
fit into this media feature-based scheme or not?


transcription: want | dont-want | either (default: either)


how are these presented to a deafblind user?



c) How about screening out visual content dangerous to those with 
photosensitive epilepsy, a problem that has just made headlines in 
the UK:


http://news.bbc.co.uk/2/hi/uk_news/england/london/6724245.stm

Perhaps:

max-flashes-per-second: <integer> | any (default: 3)

Where the UA must not show visual content if the user is selecting 
for a lower number of flashes per second. By default UAs should be 
configured not to display content which breaches safety levels; the 
default value should be 3 /not/ any.


I think we'd prefer not to get into quantitative measures here, but a 
boolean "this program is unsuitable for those prone to epilepsy 
induced by flashing lights" might make sense.  epilepsy: dont-want -:)





Compare:

http://www.w3.org/TR/2007/WD-WCAG20-TECHS-20070517/Overview.html#G19

d) Facilitating people with cognitive disabilities within a media 
query framework is trickier. Some might prefer content which has 
been stripped down to simple essentials. Some might prefer content 
which has extra explanations. Some might benefit from a media query 
based on reading level. Compare the discussion of assessing 
readability levels at:


http://juicystudio.com/services/readability.php

reading-level: <integer> | basic | average | complex | any (default: any)

Where the integer would be how many years of schooling it would take 
an average person to understand the content: basic could be (say) 9, 
average could be 12, and complex could be 17 (post-graduate).


This wouldn't be easily testable, but it might be useful nevertheless.


Yes, this isn't testable, and is quantitative.



Postscript: This isn't an accessibility issue but /if/ media queries 
are adopted as a mechanism for serving up the best content for a 
person's abilities, I wonder if they could also be used to enhance 
parental control systems using queries based on PICS:


http://www.w3.org/PICS/

So for example, one <source> might have a music video featuring 
uncensored swearing, and another <source> might have the same video 
with the swearing beeped out.

Re: [whatwg] accessibility management for timed media elements, proposal

2007-06-09 Thread Benjamin Hawkes-Lewis

Dave Singer wrote:
we promised to get back to the whatwg with a proposal for a way to 
handle accessibility for timed media, and here it is.  sorry it took a 
while...


Three cheers for Apple for trying to tackle some of the accessibility 
issues around video content! :) Without trying to assess whether CSS 
media queries are the best approach generally, here are three particular 
issues I wanted to raise:


1. Property values

I honestly don't think the property values are well-named. "either" is 
confusing and vague; "dont-want" is a misspelled colloquialism. How 
about one of the following possibilities:


captions: wanted
captions: unwanted
captions: no-preference

(This seems more natural to me than the original proposal.)

/or/

captions: prefer
captions: prefer-not
captions: no-preference

(Has the consistency of using the same word as the basis for each value. 
OTOH "prefer-not" and "no-preference" may be confusing if your English 
isn't that good.)


/or/

captions: desired
captions: undesired
captions: no-preference

("desire" has the minor advantages of being in Ogden's basic English 
word list and being common to Romance languages thanks to a Latin root. 
OTOH it's slightly longer.)


2. Conflict resolution

The proposal does not describe how conflicts such as the following would 
be resolved:


User specifies:

captions: want
high-contrast-video: want

Author codes:


  
  



Because style rules cascade, this sort of conflict doesn't matter when 
media queries are applied to styles. But you can only view one video source.
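The stripped "Author codes" markup above was presumably along these lines (a sketch; file names and exact queries are assumed): one source satisfies the captions preference, the other the high-contrast preference, and no single source satisfies both.

```html
<!-- Hypothetical conflicting author markup: the user wants both captions
     and high-contrast video, but each source provides only one. -->
<video>
  <source src="captioned.mov"
          media="all and (captions: want) and (high-contrast-video: dont-want)">
  <source src="high-contrast.mov"
          media="all and (captions: dont-want) and (high-contrast-video: want)">
</video>
```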


3. (Even more) special requirements

The suggested list of media features is (self-confessedly) not 
exhaustive. Here's some things that seem to be missing:


a) I should think sign-language interpretation needs to be in there.

sign-interpretation: want | dont-want | either (default: want)

Unless we want to treat sign interpretation as a special form of 
subtitling. How is subtitling in various languages to be handled?


b) Would full descriptive transcriptions (e.g. for the deafblind) fit 
into this media feature-based scheme or not?


transcription: want | dont-want | either (default: either)

c) How about screening out visual content dangerous to those with 
photosensitive epilepsy, a problem that has just made headlines in the UK:


http://news.bbc.co.uk/2/hi/uk_news/england/london/6724245.stm

Perhaps:

max-flashes-per-second: <integer> | any (default: 3)

Where the UA must not show visual content if the user is selecting for a 
lower number of flashes per second. By default UAs should be configured 
not to display content which breaches safety levels; the default value 
should be 3 /not/ any.
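Under this scheme a safety-conscious author might write something like the following sketch. The proposal doesn't define query syntax for quantitative features, so the exact form here, and the file names, are assumptions:

```html
<video>
  <!-- Declares this cut stays within 3 flashes per second, so it would be
       shown under the proposed safe default. -->
  <source src="safe-cut.mov" media="all and (max-flashes-per-second: 3)">
  <!-- Unconstrained original: only for users who opt out of the limit. -->
  <source src="original.mov" media="all and (max-flashes-per-second: any)">
</video>
```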


Compare:

http://www.w3.org/TR/2007/WD-WCAG20-TECHS-20070517/Overview.html#G19

d) Facilitating people with cognitive disabilities within a media query 
framework is trickier. Some might prefer content which has been stripped 
down to simple essentials. Some might prefer content which has extra 
explanations. Some might benefit from a media query based on reading 
level. Compare the discussion of assessing readability levels at:


http://juicystudio.com/services/readability.php

reading-level: <integer> | basic | average | complex | any (default: any)

Where the integer would be how many years of schooling it would take an 
average person to understand the content: basic could be (say) 9, 
average could be 12, and complex could be 17 (post-graduate).


This wouldn't be easily testable, but it might be useful nevertheless.

Postscript: This isn't an accessibility issue but /if/ media queries are 
adopted as a mechanism for serving up the best content for a person's 
abilities, I wonder if they could also be used to enhance parental 
control systems using queries based on PICS:


http://www.w3.org/PICS/

So for example, one <source> might have a music video featuring 
uncensored swearing, and another <source> might have the same video with 
the swearing beeped out.
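That idea might look something like this sketch; the pics-language media feature is invented here purely for illustration, since no PICS-based feature was proposed:

```html
<video>
  <!-- Uncensored cut for users whose (hypothetical) content-rating
       preference permits strong language. -->
  <source src="uncensored.mov" media="all and (pics-language: explicit)">
  <!-- Beeped-out cut as the fallback for everyone else. -->
  <source src="censored.mov">
</video>
```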


--
Benjamin Hawkes-Lewis


[whatwg] accessibility management for timed media elements, proposal

2007-06-08 Thread Dave Singer

Hi

we promised to get back to the whatwg with a proposal for a way to 
handle accessibility for timed media, and here it is.  sorry it took 
a while...


* * * * *


To allow the UA to select among alternative sources for media 
elements based on users' accessibility preferences, we propose to:


1) Expose accessibility preferences to users
2) Allow the UA to evaluate the suitability of content for specific 
accessibility needs via CSS media queries



Details:

1) Expose accessibility preferences to users

Proposal: user settings that correspond to accessibility needs. For 
each need, the user can choose among the following three dispositions:


  * favor (want): I prefer media that is adapted for this kind of 
accessibility.
  * disfavor (don't want): I prefer media that is not adapted for 
this kind of accessibility.
  * disinterest (don't care): I have no preference regarding this 
kind of accessibility.


The initial set of user preferences for consideration in the 
selection of alternative media resources correspond to the following 
accessibility options:


  captions (corresponds to SMIL systemCaptions)
  descriptive audio (corresponds to SMIL systemAudioDesc)
  high contrast video
  high contrast audio (audio with minimal background noise, music 
etc., so speech is maximally intelligible)


This list is not intended to be exhaustive; additional accessibility 
options and corresponding preferences may be considered for inclusion 
in the future.


Herein we describe only those user preferences that are useful in the 
process of evaluating multiple alternative media resources for 
suitability. Note that these proposed preferences are not intended to 
exclude or supplant user preferences that may be offered by the UA to 
provide accessibility options according to the W3C accessibility 
guidelines, such as a global volume control.



2) Allow the UA to evaluate the suitability of content for specific 
accessibility needs via CSS media queries


Note that the current specification of <video> and <audio> includes a 
mechanism for selection among multiple alternate resources. The 
scope of our proposal here is to extend that mechanism to cover 
accessibility options.


Proposal: the media attribute of the <source> element as described in 
the current working draft of Web Applications 1.0 takes a CSS media 
query as its value, which the UA will evaluate in the process of 
selecting an appropriate media resource for presentation. To extend 
the set of media features that can be queried to include accessibility 
preferences, we define a new media feature for each supported 
accessibility preference:


  captions
  descriptive-audio
  high-contrast-video
  high-contrast-audio

For each of these media features the following values are defined:

  * The user prefers media adapted for this kind of accessibility (": want").
  * The user prefers media that is not adapted for this kind of 
accessibility (": dont-want").
  * The user has expressed no preference regarding this kind of 
accessibility (": either").


For each media feature that corresponds to an accessibility preference, 
an expression evaluates to FALSE if and only if the user has an 
explicit preference (want or don't want) and the media feature has a 
value of want or dont-want that doesn't correspond.  For all other 
combinations (user disinterest, or a value of "either"), the 
expression evaluates to TRUE.


Example. If the user has asked for
  captions:  want
  high contrast video:  don't want

and the video element has

  
  


The second source will be selected for presentation; the second would 
also be selected if the media attribute were completely omitted.
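The stripped markup in this example was presumably along these lines; the file names and exact queries are assumptions, with only the selection outcome given in the text:

```html
<video>
  <!-- Evaluates FALSE for this user: they asked for
       high-contrast-video: don't want. -->
  <source src="high-contrast.mov"
          media="all and (high-contrast-video: want)">
  <!-- Evaluates TRUE: matches the captions preference, so it is selected.
       It would also be a candidate with no media attribute at all. -->
  <source src="captioned.mov" media="all and (captions: want)">
</video>
```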


Once a candidate source has been selected, the UA must attempt to 
apply the user's accessibility preferences to its presentation, so 
that adaptable content is presented appropriately.



--
David Singer
Apple/QuickTime