Re: [whatwg] eventsource/RemoteEventSource weirdness
On 17.2.09 22:53, Jonas Sicking wrote:
> This could also replace the IMHO awkward <eventsource> element. I don't
> understand the value of having this element at all. It seems to me that
> if the only way you can use an API is through script, then making the
> API into an element is adding extra complexity to the HTML language for
> little to no gain.

I seem to recall reading once that it's not the case that the only way you can use the API is through script -- sort of. At one time I believe the intent was that an onmessage attribute on <body> would allow you to have handlers without needing to run script to set them. You would of course still need script to execute for the handler to run.

That said, I don't think that reason is at all compelling. As far as any list of features to cut (spin into other specifications) goes, I would rate this one fairly high on it, particularly if the element API were scrapped in favor of a pure-script API. There are interactions with the current task-queueing mechanism in HTML5, to be sure, but <eventsource> is mostly a consumer of those mechanisms, not a contributory component of them. I don't think removing <eventsource> would affect the continuing evolution of the queueing mechanism in any meaningful way.

Jeff
Re: [whatwg] eventsource/RemoteEventSource weirdness
On Thu, Feb 19, 2009 at 3:10 AM, Jeff Walden <jwalden+wha...@mit.edu> wrote:
> On 17.2.09 22:53, Jonas Sicking wrote:
>> This could also replace the IMHO awkward <eventsource> element. I don't
>> understand the value of having this element at all. It seems to me that
>> if the only way you can use an API is through script, then making the
>> API into an element is adding extra complexity to the HTML language for
>> little to no gain.
>
> I seem to recall reading once that it's not the case that the only way
> you can use the API is through script -- sort of. At one time I believe
> the intent was that an onmessage attribute on <body> would allow you to
> have handlers without needing to run script to set them. You would of
> course still need script to execute for the handler to run.

Exactly; it's the fact that you ultimately always have to forward to a script to handle the event that I was referring to. This isn't 100% true, though. As I understand it, the idea is to allow support for firing other event types than 'message', at which point I *think* you could do things like trigger SMIL animations and run XForms actions without resorting to javascript. However, neither of these things is practically supported by the spec now, so I don't think that's an argument for keeping the current design. And it still doesn't explain why you'd want addEventSource on XMLHttpRequest, WebSocket or Window.

/ Jonas
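[Editorial note: the pure-script design Jonas argues for is roughly the direction that later shipped as the EventSource API, whose wire format is a simple line-based text stream. As an illustration of how little machinery the script-only approach needs, here is a minimal sketch of a parser for that stream. The function name is hypothetical; a real implementation must also handle "id:" and "retry:" fields, comment lines, and incremental network delivery.]

```javascript
// Sketch: parse a complete server-sent event stream into a list of
// { type, data } events. A blank line dispatches the buffered event.
function parseEventStream(text) {
  const events = [];
  let type = "message"; // default event type per the stream format
  let data = [];
  for (const line of text.split("\n")) {
    if (line === "") {
      // Blank line: dispatch the event if any data was buffered.
      if (data.length) events.push({ type, data: data.join("\n") });
      type = "message";
      data = [];
    } else if (line.startsWith("event:")) {
      type = line.slice(6).trimStart();
    } else if (line.startsWith("data:")) {
      data.push(line.slice(5).trimStart());
    }
    // Other fields and comments are ignored in this sketch.
  }
  return events;
}
```
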
Re: [whatwg] Captions, Subtitles and the Video Element
Instead of:

* getCaptionList(): returns an array of caption elements.

Have:

* getCaptions(): returns an array of caption elements.
Re: [whatwg] Captions, Subtitles and the Video Element
Greg,

I think that it's important that there be something that everyone can depend on being present across all UAs, so I commend your dedication here. I think adding SubRip as a baseline is a great idea, so that everyone knows there's something that works everywhere, and SubRip is dead simple.

However, there are a lot of uses for subtitles/captions that cannot be met with SubRip: no styling (beyond the bare basics), no karaoke commands, no alpha, no nice handling for collisions, margins, shadow colors, specifying encoding, etc. Without meeting these needs, a number of people will simply ignore <video> because they don't have something that will meet their needs in all UAs.

As long as we're specifying some base set of standards that need to be supported, you might as well pick one of the more full-featured formats as well. Personally I would suggest SSA v4+ (Advanced SubStation Alpha). I don't want to get into religious wars over which is best, but the reality is that it's in wide use and there are a number of tools for working with it. You only get one chance to set a baseline standard; you might as well make sure that it covers all the use cases.

On Thu, Feb 19, 2009 at 2:37 PM, Greg Millam <mil...@google.com> wrote:
> Hi guys - I'm one of the main engineers responsible for captioning
> support on YouTube, and I've joined the Chrome team at Google to attempt
> to help drive video captions and subtitling forward: both to implement
> support in Chrome for it, and to push for HTML5 support for captions.
>
> In my following statements, I am working off of a search through the
> mailing list and a reading of the HTML5 spec, particularly where the
> video tag is concerned. If there are any factual errors, or I'm way off,
> just point my way. All this is as far as I can discover.
> The current state of accessibility and captions in HTML5 has been
> relegated to http://wiki.whatwg.org/wiki/Video_accessibility - a wiki
> page with use cases, requirements, existing solutions, and an empty
> "Proposed Solutions" category. I aim to fix that. My main goal here is
> to prevent captioning from missing out on HTML5 and being dropped
> because we never got around to it (a la HDMI).
>
> Here is my proposal:
>
> Use cases:
> * Accessibility.
> * Ability to reach audiences in other languages.
>
> Goals:
> * Allow movie formats to include captioning support.
> * Make it simple for an author to create and publish transcripts,
>   without requiring them to embed them into the movie.
> * Make it simple for caption or subtitle tracks to be accessible.
> * Allow full javascript control: list, add, delete, and create caption
>   tracks.
> * Provide a required format to act as a baseline across all browsers.
>
> The current state of the video element includes support for defining a
> source video file, local or remote. There is no method to define a
> caption source or track.
>
> Proposed Solution:
>
> HTML5 / Page Author:
> * Each video will have a list of zero or more "Timed Text" tracks.
> * A track has three variables for selection: Type, Language, Name.
>   These can be null, except for Name.
> * Type is a string, and may be (but is not limited to): "Caption",
>   "Transcript", "Translation", "Subtitles", etc. Others can be defined
>   by the user (e.g. "Commentary", "User Comments").
> * Language is a language code (en, es, pt_BR, etc.)
> * Name is a freeform text identifier. By default, "default" or
>   "caption". If a video file has multiple tracks, they are added as
>   "caption1", "caption2", etc.
> * <video> . . . </video> is not necessarily a standalone tag. If the
>   author desires, they can add more elements to define tracks. Whether
>   this should be <caption type="format" src="..." media="caption"> or
>   <source type="timedtext/format" src="..."> can vary. (I prefer
>   <caption> as it's more explicit.)
> * <caption src="foo.srt" type="caption" language="en" name="default" />
>   adds a new caption. <caption> is standalone.
> * All timed text tracks encoded in the video file are added to the
>   list, as an implicit <caption> element.
> * Caption tags, when displayed, count as <span class="caption">...</span>
>   unless they have style associated with them (uncommon). So they can
>   be tweaked via CSS, whether by the author or overridden by the user
>   agent.
>
> User Agent:
> * Implements support for the <caption> tag.
>
>   interface MediaCaptionElement : HTMLElement {
>     attribute DOMString src;
>     attribute DOMString format;  // default: "auto".
>     attribute DOMString type;
>     attribute DOMString language;
>     attribute DOMString name;
>     attribute boolean enabled;
>   };
>
> * Media elements now have a list of Captions associated with them.
> * Support for (at minimum) the SubRip format. SubRip I choose here for
>   the same reason we picked it for YouTube: it's readable,
>   understandable, and simple. You can create one with your favorite
>   editor. SubRip has no style associated with individual captions, so
>   it can be subject to CSS caption rules for span.caption.
> * Support for other formats (608, 708, .ass, dfxp, etc.) up to the
>   user agent. (But preferred!)
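[Editorial note: to make concrete why Greg calls SubRip "readable, understandable, and simple", here is a toy parser for markupless .srt cues. The function name is hypothetical and the code is only a sketch; real-world files additionally need handling for CRLF line endings, byte-order marks, and stray blank lines.]

```javascript
// Sketch: parse markupless SubRip text into { start, end, text } cues,
// with times in milliseconds. Each cue is an index line, a timing line
// ("HH:MM:SS,mmm --> HH:MM:SS,mmm"), and one or more text lines,
// separated from the next cue by a blank line.
function parseSrt(text) {
  const cues = [];
  for (const block of text.trim().split(/\n\s*\n/)) {
    const lines = block.split("\n");
    const m = lines[1] && lines[1].match(
      /(\d{2}):(\d{2}):(\d{2}),(\d{3}) --> (\d{2}):(\d{2}):(\d{2}),(\d{3})/
    );
    if (!m) continue; // skip malformed blocks in this sketch
    const toMs = (h, min, s, ms) =>
      ((+h * 60 + +min) * 60 + +s) * 1000 + +ms;
    cues.push({
      start: toMs(m[1], m[2], m[3], m[4]),
      end: toMs(m[5], m[6], m[7], m[8]),
      text: lines.slice(2).join("\n"),
    });
  }
  return cues;
}
```
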
Re: [whatwg] Captions, Subtitles and the Video Element
Greg -

Interesting ideas! A few questions that occur to me on first read:

On Feb 19, 2009, at 2:37 PM, Greg Millam wrote:
> HTML5 / Page Author:
> * Each video will have a list of zero or more "Timed Text" tracks.
> * A track has three variables for selection: Type, Language, Name.
>   These can be null, except for Name.

I am confused by your terminology. Does "Timed Text track" refer to the <caption> elements, or to the caption tracks in the media file, or both? The term "Timed Text track" has a very specific meaning in a media file, so unless that is what you mean, I think another term would be preferable.

> * All timed text tracks encoded in the video file are added to the
>   list, as an implicit <caption> element.

When should the UA create the implicit <caption> element(s) from the tracks in the media file? What should it do about caption samples that are spread throughout the media file?

> * Caption tags, when displayed, count as <span class="caption">...</span>
>   unless they have style associated with them (uncommon). So they can
>   be tweaked via CSS, whether by the author or overridden by the user
>   agent.

So by default, all of the captions (along with numbers and time stamps) for the entire file are displayed at the same time?

eric
Re: [whatwg] Can AUDIO/VIDEO element have a balance attribute
On Sat, 15 Nov 2008, Biju g...@il wrote:
> We need a balance property for AUDIO/VIDEO which can be accessed
> through javascript.
>
> Example: In http://www.fishtank.me/ if we had the balance property we
> could give an audible effect of a bubble appearing at different sides
> of the screen. Without the balance property, to give a similar effect
> one has to create sound files with different balance on the server and
> bring them to the client, which is a waste of bandwidth.

On Sun, 16 Nov 2008, Biju g...@il wrote:
> It could help educational or game sites for kids, like
> http://www.starfall.com/ and http://pbskids.org/. Somewhere on these
> sites I saw a flash tool to build music, so having control of balance
> could enhance a music-composing training page.
>
> One thing a blind person can't do is use a mouse on a computer, because
> a mouse needs feedback through the eye to use it (unlike older
> technologies like a digitizer stylus, light pen, etc.). I have a
> slightly ambitious idea: we could help blind people use a mouse if we
> had the ability to control balance and volume. That is:
> * 1 audio source plus balance can give left/right position on screen.
> * 2 audio sources, one to indicate top and the other bottom, along with
>   balance and volume control, can give 2D positioning on screen.
> * 3 audio sources, the third one for depth, can give 3D positioning,
>   which solves the zooming effect on Google Maps.

On Sun, 16 Nov 2008, Philip Jägenstedt wrote:
> While I don't strongly object to the suggestion as such, there are two
> things I want to point out.
>
> 1. It is not clear (to me at least) what balance means for anything but
>    mono/stereo audio. What is the expected behaviour for 5.1 audio?
>
> 2. Balance is just one kind of audio filter effect. The more filter
>    features are added, the tougher the requirements on the user agent
>    and the media framework backend they use will be. Is there a use
>    case for having the same LARGE media file played back with different
>    balance settings?
> All the obvious use cases would be short sound effects, where creating
> duplicate versions of the files might be a better trade-off than
> introducing more complexity in the API.

I think that this is the kind of thing we will want to expose in the future, especially, e.g., in conjunction with a 3D canvas API. I've noted this as a feature we will want to add in a future version of the audio API, but I don't think we should add it yet, since we're still waiting for the browser vendors to ship a solid implementation of the API as it stands today.

-- 
Ian Hickson
http://ln.hixie.ch/
'Things that are impossible just take longer.'
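[Editorial note: for what a scalar balance attribute might mean for stereo output, one plausible model is an equal-power pan law. The sketch below is purely illustrative and not anything the spec defines: it maps a hypothetical balance value in [-1, 1] to per-channel gain factors whose squares sum to 1, so perceived loudness stays roughly constant as the sound moves. Philip's question about what balance means for 5.1 audio remains open.]

```javascript
// Sketch: equal-power pan law for stereo balance.
// balance = -1 is hard left, 0 is center, +1 is hard right.
// Returns gains satisfying left^2 + right^2 === 1 (constant power).
function panGains(balance) {
  const theta = (balance + 1) * Math.PI / 4; // maps [-1, 1] to [0, pi/2]
  return { left: Math.cos(theta), right: Math.sin(theta) };
}
```
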
Re: [whatwg] Captions, Subtitles and the Video Element
On Feb 20, 2009, at 00:37, Greg Millam wrote:
> The current state of accessibility and captions in HTML5 has been
> relegated to http://wiki.whatwg.org/wiki/Video_accessibility - a wiki
> page with use cases, requirements, existing solutions, and an empty
> "Proposed Solutions" category.

Since then, the active work has moved to the Mozilla wiki and to Xiph:

https://wiki.mozilla.org/Special:Search?search=captions
http://lists.xiph.org/pipermail/accessibility/
http://wiki.xiph.org/index.php/Timed_Divs_HTML

Silvia Pfeiffer has been working on this as a Mozilla Foundation grantee.

> * <video> . . . </video> is not necessarily a standalone tag. If the
>   author desires, they can add more elements to define tracks. Whether
>   this should be <caption type="format" src="..." media="caption"> or
>   <source type="timedtext/format" src="..."> can vary. (I prefer
>   <caption> as it's more explicit.)

FWIW, you can't use the element name "caption" for legacy reasons. You can't use the element name "text" either, since that would introduce new name collisions with SVG 1.1.

> * Support for (at minimum) the SubRip format. SubRip I choose here for
>   the same reason we picked it for YouTube: it's readable,
>   understandable, and simple. You can create one with your favorite
>   editor. SubRip has no style associated with individual captions, so
>   it can be subject to CSS caption rules for span.caption.

I agree it makes sense to start with something simple. The markupless flavor of SRT would be such a format. However, supporting the formatting tags in later flavors of SRT is a can of worms: you'd quickly end up introducing a third HTML/XML-like parser into the browser. Further, the formatted flavors of SRT have become victims of the same problem that the RSS <title> became a victim of. Let's not go there.

For formatted captions, I think it makes sense to overlay a browsing context onto the video and make HTML/CSS-based captions render into that browsing context on the main thread (tolerating some timing jitter relative to the video track).
http://wiki.xiph.org/index.php/Timed_Divs_HTML is a proposal in this direction, but it lacks a concrete processing-model proposal at present.

> * Support for other formats (608, 708, .ass, dfxp, etc.) up to the
>   user agent. (But preferred!)

DFXP reinvents a lot of stuff that browsers already implement in their CSS formatter. From a browser code-reuse point of view, it makes more sense to use HTML+CSS.

-- 
Henri Sivonen
hsivo...@iki.fi
http://hsivonen.iki.fi/