Re: [whatwg] Proposal for a MediaSource API that allows sending media data to a HTMLMediaElement

2011-07-12 Thread Robert O'Callahan
On Wed, Jul 13, 2011 at 12:00 PM, Aaron Colwell  wrote:

> On Tue, Jul 12, 2011 at 4:44 PM, Robert O'Callahan 
> wrote:
>
>> I had imagined that this API would let the author feed in the same data as
>> you would load from some URI. But that can't be what's happening, since in
>> some element implementations (e.g., Gecko's) loaded data is buffered
>> internally and seeking might not require any new data to be loaded.
>>
>>
>  No. The idea is to allow JavaScript to manage fetching the media data so
> various fetching strategies could be implemented without needing to change
> the browser. My initial motivation is for supporting adaptive streaming with
> this mechanism, but I think various media mashup and delivery scenarios
> could be explored with this.
>

I don't think you can do that with this API without making huge assumptions
about what the browser's demuxer, internal caching, etc. are doing.

Rob
-- 
"If we claim to be without sin, we deceive ourselves and the truth is not in
us. If we confess our sins, he is faithful and just and will forgive us our
sins and purify us from all unrighteousness. If we claim we have not sinned,
we make him out to be a liar and his word is not in us." [1 John 1:8-10]


Re: [whatwg] Proposal for a MediaSource API that allows sending media data to a HTMLMediaElement

2011-07-12 Thread Aaron Colwell
On Tue, Jul 12, 2011 at 4:44 PM, Robert O'Callahan wrote:

> On Wed, Jul 13, 2011 at 11:30 AM, Aaron Colwell wrote:
>
>> I'm doing WebM demuxing and media fetching in JavaScript. When a seek
>> occurs, I look at currentTime to see where we are seeking to. I then look at
>> the CUES index data I've fetched to find the file offset for the closest
>> seek point to the desired time. The appropriate data is fetched and pushed
>> into the element via append(). The seeked event firing and readyState
>> transitioning to HAVE_FUTURE_DATA or HAVE_ENOUGH_DATA tells me when I've
>> sent the element enough data. During playback I just monitor the buffered
>> attribute to keep a specific duration ahead of the current playback time.
>
>
> Now I'm rather confused about what you're doing and how you're using this
> feature. What format is the data that you're feeding into the element?
>

Sorry I wasn't clear about my intent. Currently I'm feeding it WebM. I could
see this expanding to Ogg and perhaps MP4. Theoretically any format that
looks like a packet stream could work.


>
> I had imagined that this API would let the author feed in the same data as
> you would load from some URI. But that can't be what's happening, since in
> some element implementations (e.g., Gecko's) loaded data is buffered
> internally and seeking might not require any new data to be loaded.
>
>
 No. The idea is to allow JavaScript to manage fetching the media data so
various fetching strategies could be implemented without needing to change
the browser. My initial motivation is for supporting adaptive streaming with
this mechanism, but I think various media mashup and delivery scenarios
could be explored with this.
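
For example, an adaptive switch can be driven entirely from script. A minimal
sketch follows (the variant list, byte-range bookkeeping, and helper names are
hypothetical; append() simply stands in for the proposed call that hands media
bytes to the element):

  // Measure throughput, pick a variant, append the next chunk from it.
  var variants = [
    { bandwidth:  500000, url: 'video_500k.webm' },
    { bandwidth: 2000000, url: 'video_2m.webm' }
  ];

  function pickVariant(measuredBps) {
    var best = variants[0];
    for (var i = 0; i < variants.length; i++) {
      if (variants[i].bandwidth < measuredBps * 0.8) best = variants[i];
    }
    return best;
  }

  function appendNextChunk(video, range, measuredBps) {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', pickVariant(measuredBps).url);
    // "range" is the byte range of the next cluster in the chosen variant
    // (hypothetical bookkeeping kept by the script).
    xhr.setRequestHeader('Range', 'bytes=' + range.start + '-' + range.end);
    xhr.responseType = 'arraybuffer';
    xhr.onload = function() {
      video.append(new Uint8Array(xhr.response));  // proposed append()
    };
    xhr.send();
  }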

Aaron


Re: [whatwg] Proposal for a MediaSource API that allows sending media data to a HTMLMediaElement

2011-07-12 Thread Robert O'Callahan
On Wed, Jul 13, 2011 at 11:30 AM, Aaron Colwell  wrote:

> I'm doing WebM demuxing and media fetching in JavaScript. When a seek
> occurs, I look at currentTime to see where we are seeking to. I then look at
> the CUES index data I've fetched to find the file offset for the closest
> seek point to the desired time. The appropriate data is fetched and pushed
> into the element via append(). The seeked event firing and readyState
> transitioning to HAVE_FUTURE_DATA or HAVE_ENOUGH_DATA tells me when I've
> sent the element enough data. During playback I just monitor the buffered
> attribute to keep a specific duration ahead of the current playback time.


Now I'm rather confused about what you're doing and how you're using this
feature. What format is the data that you're feeding into the element?

I had imagined that this API would let the author feed in the same data as
you would load from some URI. But that can't be what's happening, since in
some element implementations (e.g., Gecko's) loaded data is buffered
internally and seeking might not require any new data to be loaded.

Rob
-- 
"If we claim to be without sin, we deceive ourselves and the truth is not in
us. If we confess our sins, he is faithful and just and will forgive us our
sins and purify us from all unrighteousness. If we claim we have not sinned,
we make him out to be a liar and his word is not in us." [1 John 1:8-10]


Re: [whatwg] Proposal for a MediaSource API that allows sending media data to a HTMLMediaElement

2011-07-12 Thread Aaron Colwell
On Tue, Jul 12, 2011 at 4:17 PM, Robert O'Callahan wrote:

> On Wed, Jul 13, 2011 at 11:14 AM, Aaron Colwell wrote:
>
>>
>> I'm open to that. In fact that is how my current prototype is implemented
>> because it was the least painful way to test these ideas in WebKit. My
>> prototype only implements append() and uses existing media element events as
>> proxies for the events I've proposed. I only separated this out into a
>> separate object because I thought people might prefer an object to represent
>> the source of the media and leave the media element object an endpoint for
>> controlling media playback.
>>
>
> We're kinda stuck with media elements handling both playback endpoints and
> resource loading.
>

OK. This makes implementation in WebKit easier for me, so I won't push too
hard to keep it separate from the media element. :)


>
>
>>
>>> Do you need to support seeking with this API? That's hard. It would be
>>> simpler if we didn't have to support seeking. Instead of seeking you could
>>> just open a new stream and pour data in for the new offset.
>>>
>>
>>  I'd like to be able to support seeking so you can use this mechanism for
>> on-demand playback. In my prototype seeking wasn't too difficult to
>> implement. I just triggered it off the seeking event. Any append() that
>> happens after the seeking event fires is associated with the new seek
>> location. currentTime is updated with the timestamp in the first cluster
>> passed to append() after the seeking event fires. Once the media engine has
>> this timestamp and enough preroll data, then it will fire the seeked event
>> like normal. I haven't tested this with rapid fire seeking yet, but I think
>> this mechanism should work.
>>
>
> How do you communicate the data offset that the element wants to read at
> over to the script that provides the data? In general you can't know the
> strategy the decoder/demuxer uses for seeking, so you don't know what data
> it will request.
>

I'm doing WebM demuxing and media fetching in JavaScript. When a seek
occurs, I look at currentTime to see where we are seeking to. I then look at
the CUES index data I've fetched to find the file offset for the closest
seek point to the desired time. The appropriate data is fetched and pushed
into the element via append(). The seeked event firing and readyState
transitioning to HAVE_FUTURE_DATA or HAVE_ENOUGH_DATA tells me when I've
sent the element enough data. During playback I just monitor the buffered
attribute to keep a specific duration ahead of the current playback time.
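
Roughly, that script-side seek path looks like this (a sketch only: the cues
array, fetchRange(), CHUNK_SIZE, TARGET_AHEAD and fetchNextChunk() are
placeholders for my prototype's bookkeeping, and append() stands in for the
proposed data-passing call):

  // Seek: map currentTime to the closest preceding CUES entry, fetch
  // from that file offset, and push the data into the element.
  video.addEventListener('seeking', function() {
    var target = video.currentTime;
    var entry = cues[0];
    for (var i = 0; i < cues.length; i++) {
      if (cues[i].time <= target) entry = cues[i];
    }
    fetchRange(entry.offset, entry.offset + CHUNK_SIZE, function(bytes) {
      video.append(bytes);
    });
  });

  // Playback: keep a specific duration buffered ahead of currentTime.
  video.addEventListener('timeupdate', function() {
    var b = video.buffered;
    var ahead = b.length ? b.end(b.length - 1) - video.currentTime : 0;
    if (ahead < TARGET_AHEAD) fetchNextChunk();   // placeholder helper
  });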

Aaron


Re: [whatwg] Proposal for a MediaSource API that allows sending media data to a HTMLMediaElement

2011-07-12 Thread Robert O'Callahan
On Wed, Jul 13, 2011 at 11:14 AM, Aaron Colwell  wrote:

>
> I'm open to that. In fact that is how my current prototype is implemented
> because it was the least painful way to test these ideas in WebKit. My
> prototype only implements append() and uses existing media element events as
> proxies for the events I've proposed. I only separated this out into a
> separate object because I thought people might prefer an object to represent
> the source of the media and leave the media element object an endpoint for
> controlling media playback.
>

We're kinda stuck with media elements handling both playback endpoints and
resource loading.


>
>> Do you need to support seeking with this API? That's hard. It would be
>> simpler if we didn't have to support seeking. Instead of seeking you could
>> just open a new stream and pour data in for the new offset.
>>
>
>  I'd like to be able to support seeking so you can use this mechanism for
> on-demand playback. In my prototype seeking wasn't too difficult to
> implement. I just triggered it off the seeking event. Any append() that
> happens after the seeking event fires is associated with the new seek
> location. currentTime is updated with the timestamp in the first cluster
> passed to append() after the seeking event fires. Once the media engine has
> this timestamp and enough preroll data, then it will fire the seeked event
> like normal. I haven't tested this with rapid fire seeking yet, but I think
> this mechanism should work.
>

How do you communicate the data offset that the element wants to read at
over to the script that provides the data? In general you can't know the
strategy the decoder/demuxer uses for seeking, so you don't know what data
it will request.

Rob
-- 
"If we claim to be without sin, we deceive ourselves and the truth is not in
us. If we confess our sins, he is faithful and just and will forgive us our
sins and purify us from all unrighteousness. If we claim we have not sinned,
we make him out to be a liar and his word is not in us." [1 John 1:8-10]


Re: [whatwg] Proposal for a MediaSource API that allows sending media data to a HTMLMediaElement

2011-07-12 Thread Aaron Colwell
On Tue, Jul 12, 2011 at 3:28 PM, Robert O'Callahan wrote:

> On Wed, Jul 13, 2011 at 8:45 AM, Aaron Colwell wrote:
>
>> I thought about adding an attribute to HTMLMediaElement that provided a
>> URL for signalling MediaSource usage. That mechanism would allow you to
>> create a URL that only works with that element. When this URL is specified,
>> a MediaSource attribute would be updated on the media element during loading
>> and JavaScript could use that to pass data to the tag. I couldn't find a
>> similar pattern in other APIs so I didn't take that path. If people think
>> that is a better route then I'm all for it.
>>
>
> I was thinking more of putting the MediaSource functionality
> (open/append/close) on the media element itself.
>

I'm open to that. In fact that is how my current prototype is implemented
because it was the least painful way to test these ideas in WebKit. My
prototype only implements append() and uses existing media element events as
proxies for the events I've proposed. I only separated this out into a
separate object because I thought people might prefer an object to represent
the source of the media and leave the media element object an endpoint for
controlling media playback.
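
To make the two shapes concrete, they look roughly like this (names are
purely illustrative, not a settled API):

  // (a) A separate object representing the source, bound via a URL:
  var source = new MediaSource();      // proposed source object
  video.src = source.url;              // element is just the playback endpoint
  source.append(mediaBytes);

  // (b) Everything on the media element itself, as in my prototype:
  video.append(mediaBytes);            // open/append/close live on the element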


>
> Do you need to support seeking with this API? That's hard. It would be
> simpler if we didn't have to support seeking. Instead of seeking you could
> just open a new stream and pour data in for the new offset.
>

 I'd like to be able to support seeking so you can use this mechanism for
on-demand playback. In my prototype seeking wasn't too difficult to
implement. I just triggered it off the seeking event. Any append() that
happens after the seeking event fires is associated with the new seek
location. currentTime is updated with the timestamp in the first cluster
passed to append() after the seeking event fires. Once the media engine has
this timestamp and enough preroll data, then it will fire the seeked event
like normal. I haven't tested this with rapid fire seeking yet, but I think
this mechanism should work.
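
In other words, the event flow I rely on is roughly (a sketch; the two helpers
are stand-ins for whatever fetches and append()s the relevant clusters):

  video.addEventListener('seeking', function() {
    // Everything append()ed from here on belongs to the new seek location;
    // currentTime is taken from the first cluster's timestamp.
    appendClustersFor(video.currentTime);     // hypothetical helper
  });

  video.addEventListener('seeked', function() {
    // The engine has the new timestamp plus enough preroll data,
    // so switch back to steady-state buffering.
    resumeNormalBuffering();                  // hypothetical helper
  });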

Aaron


Re: [whatwg] AppCache-related e-mails

2011-07-12 Thread Karl Dubost

On 29 June 2011 at 05:27, Felix Halim wrote:
> Suppose the content of the main page changes very often (like a news site).
> In this case, you don't want to cache the main page since the users
> want to see the latest main page, not the cached ones when they open
> the main page later.

Did you also check ESI?
http://www.w3.org/TR/esi-lang

For example in 
http://symfony.com/doc/2.0/book/http_cache.html#edge-side-includes

-- 
Karl Dubost - http://dev.opera.com/
Developer Relations & Tools, Opera Software



Re: [whatwg] Proposal for a MediaSource API that allows sending media data to a HTMLMediaElement

2011-07-12 Thread Robert O'Callahan
On Wed, Jul 13, 2011 at 8:45 AM, Aaron Colwell  wrote:

> I thought about adding an attribute to HTMLMediaElement that provided a URL
> for signalling MediaSource usage. That mechanism would allow you to create a
> URL that only works with that element. When this URL is specified, a
> MediaSource attribute would be updated on the media element during loading
> and JavaScript could use that to pass data to the tag. I couldn't find a
> similar pattern in other APIs so I didn't take that path. If people think
> that is a better route then I'm all for it.
>

I was thinking more of putting the MediaSource functionality
(open/append/close) on the media element itself.

Do you need to support seeking with this API? That's hard. It would be
simpler if we didn't have to support seeking. Instead of seeking you could
just open a new stream and pour data in for the new offset.

Rob
-- 
"If we claim to be without sin, we deceive ourselves and the truth is not in
us. If we confess our sins, he is faithful and just and will forgive us our
sins and purify us from all unrighteousness. If we claim we have not sinned,
we make him out to be a liar and his word is not in us." [1 John 1:8-10]


Re: [whatwg] Proposal for a MediaSource API that allows sending media data to a HTMLMediaElement

2011-07-12 Thread Aaron Colwell
Hi Harald,

Please point me to specific threads that talk about this. I looked through
the public-web...@w3.org archive and didn't see anything about interactive
media handling. I did look through the Mozilla/Cisco proposal thread and
didn't see anything in my proposal that is incompatible with what is being
proposed there.

Aaron

On Tue, Jul 12, 2011 at 12:31 AM, Harald Alvestrand wrote:

> Not a comment directly on the spec, but you might want to check what people
> are suggesting for interactive media handling in the WEBRTC working group.
>
> Streaming is different from interactive media, but it would be a shame to
> have incompatibilities that can be avoided.
>
>
>


Re: [whatwg] Proposal for a MediaSource API that allows sending media data to a HTMLMediaElement

2011-07-12 Thread Aaron Colwell
On Mon, Jul 11, 2011 at 5:54 PM, Robert O'Callahan wrote:

> It seems to me that the spec is written assuming only one media element is
> consuming the MediaSource. But nothing stops multiple elements consuming the
> same URL simultaneously. Maybe instead of going through a URL you should add
> API directly to media elements.
>

You are right that I don't have anything preventing the MediaSource URL from
being passed to multiple media elements. Only one media element will accept
the URL though because whichever one opens the URL first will transition the
source to the OPEN state. Media elements can only open sources in the CLOSED
state. I'm using a URL for initialization to be consistent with how the
media element is initialized in all other cases. I didn't want to create a
new initialization path.
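
Concretely, the intended single-element flow is something like this (a sketch
of the proposed shape; the exact attribute and event names may differ from the
draft):

  var source = new MediaSource();      // starts out CLOSED
  video.src = source.url;              // this element opens the source
  source.addEventListener('open', function() {
    // Source is now OPEN; only this element got to open it.  A second
    // element handed the same URL would find the source already OPEN
    // and fail its resource selection.
    source.append(initialClusterBytes);
  });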

I thought about adding an attribute to HTMLMediaElement that provided a URL
for signalling MediaSource usage. That mechanism would allow you to create a
URL that only works with that element. When this URL is specified, a
MediaSource attribute would be updated on the media element during loading
and JavaScript could use that to pass data to the tag. I couldn't find a
similar pattern in other APIs so I didn't take that path. If people think
that is a better route then I'm all for it.



> bytesAvailable is for flow control? Instead of doing it this way, I would
> follow WebSockets and use a bufferedAmount attribute to indicate how much
> data is currently buffered up. That makes it easy for authors who don't want
> to care about flow control to just append stuff without encountering errors,
> while still allowing authors who care about flow control to do it.
>
>
Yes. The intent was to provide a way for the browser to control how much
data was being pushed into it. It looks like WebSocket will just close the
connection if it doesn't have enough buffer space and the API doesn't appear
to provide a mechanism to predict how much buffered data will trigger a
close. Do we want similar semantics for media? It seems like the browser
should provide some hints to indicate that it is not ok to push hours/days
of data into this interface.
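
For comparison, a bufferedAmount-style attribute would let the script apply
its own back-pressure, along these lines (a sketch; bufferedAmount on the
source object and the 5 MB ceiling are hypothetical here, mirroring
WebSocket's bufferedAmount rather than anything in the current draft):

  // "mediaSource" is the source object from the proposal.
  var HIGH_WATER_MARK = 5 * 1024 * 1024;   // arbitrary ceiling
  var pendingChunks = [];

  function queueAppend(chunk) {
    if (mediaSource.bufferedAmount > HIGH_WATER_MARK) {
      pendingChunks.push(chunk);           // back off instead of erroring
    } else {
      mediaSource.append(chunk);
    }
  }

  setInterval(function() {
    while (pendingChunks.length &&
           mediaSource.bufferedAmount <= HIGH_WATER_MARK) {
      mediaSource.append(pendingChunks.shift());
    }
  }, 250);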

Thanks for your comments.

Aaron


Re: [whatwg] Microdata feedback

2011-07-12 Thread Ian Hickson
On Tue, 12 Jul 2011, Henri Sivonen wrote:
> On Thu, 2011-07-07 at 22:33 +, Ian Hickson wrote:
> > The JSON algorithm now ends the crawl when it hits a loop, and 
> > replaces the offending duplicate item with the string "ERROR".
> > 
> > The RDF algorithm preserves the loops, since doing so is possible with 
> > RDF. Turns out the algorithm almost did this already, looks like it 
> > was an oversight.
> 
> It seems to me that this approach creates an incentive for people who 
> want to do RDFesque things to publish deliberately non-conforming 
> microdata content that works the way they want for RDF-based consumers 
> but breaks for non-RDF consumers. If such content abounds and non-RDF 
> consumers are forced to support loopiness by extending the JSON 
> conversion algorithm in ad hoc ways, part of the benefit of microdata 
> over RDFa (treeness) is destroyed and the benefit of being well-defined 
> would be destroyed, too, for non-RDF consumption cases.

The "problem" here is that RDF and microdata have different data models, 
and RDF cannot represent microdata's data model with fidelity.

For example, consider how this converts to RDF and compare it to the 
microdata equivalent:

   <div itemscope itemtype="http://example.com/" itemid="http://example.com/1">
    <p itemprop="x">x</p>
   </div>
   <div itemscope itemtype="http://example.com/" itemid="http://example.com/1">
    <p itemprop="x">x</p>
   </div>

There are other things RDF can't represent easily, e.g. it cannot easily 
represent the order of the values in this item:

   <div itemscope itemtype="http://example.com/">
    <p itemprop="x">1</p>
    <p itemprop="x">2</p>
   </div>

As such, I suggest we not worry about the itemref="" loop case, or that we 
try to fix all these cases together (not sure how we'd fix them).

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'


Re: [whatwg] Should events be paused on detached iframes?

2011-07-12 Thread Boris Zbarsky

On 6/13/11 8:09 PM, Ian Hickson wrote:

>> It's possible to switch these relevant checks to walk the ownerDocument
>> chain instead, say.  Then we need to audit all the callsites to make
>> sure this makes sense at them and figure out what to do for the ones
>> where it doesn't.  (For example, should window.alert on the window of an
>> iframe not in the DOM put up a dialog in a tab based on the
>> ownerDocument of the iframe?  Or not put one up at all?)
>
> It should put it up in the context of the top-level browsing context of
> the script that led to that point (the "first script"). This is the same
> as if someone in one tab calls another tab's script and that script calls
> alert().

Is that last what browsers actually do?  I'm pretty sure that's not what
Gecko does...

> Note that only direct script invocations would work here. setTimeout,
> events, XHR callbacks, etc, don't run while the document is not active. (I
> had previously said that dispatchEvent() would work, but this is incorrect
> per the spec at the moment. My apologies.)

OK.

>> There are quite a few APIs that need to be thus audited if this
>> invariant is changed.
>
> Are there any I should look for off-hand?

You listed some above.

There are also issues in terms of network loads that are live when an
iframe is removed from its document, whether network loads can _start_
in such a removed iframe, what the styling behavior, if any, is (e.g.
how should media query matching work?), layout behavior, if any (what's
the initial containing block size?).  What should happen if click() is
called on anchors?  Or is that covered by the events thing above?

Basically, pretty much every single aspect of the platform's behavior
needs to be sanity-checked in this context...

>> There are, yes.  There are also lots of edge cases that are otherwise
>> impossible that are introduced by allowing it; I'm a little curious as
>> to how compatible with each other the IE8 and Chrome implementations
>> are.
>
> I agree that this is an area that might well be minimally interoperable at
> the moment. That, of course, is the main reason to specify it. :-)

That's fine, but it seems like a v2 kind of feature to me.

I'm also a little saddened that there has been absolutely no feedback so
far from the people who've been implementing this, even in cases when
the current spec doesn't really cover behavior.  We're not going to
get to interop that way.


-Boris



Re: [whatwg] Microdata feedback

2011-07-12 Thread Philip Jägenstedt

On Tue, 12 Jul 2011 09:41:18 +0200, Henri Sivonen  wrote:


> On Thu, 2011-07-07 at 22:33 +, Ian Hickson wrote:
>> The JSON algorithm now ends the crawl when it hits a loop, and replaces
>> the offending duplicate item with the string "ERROR".
>>
>> The RDF algorithm preserves the loops, since doing so is possible with
>> RDF. Turns out the algorithm almost did this already, looks like it was an
>> oversight.
>
> It seems to me that this approach creates an incentive for people who
> want to do RDFesque things to publish deliberately non-conforming
> microdata content that works the way they want for RDF-based consumers
> but breaks for non-RDF consumers. If such content abounds and non-RDF
> consumers are forced to support loopiness by extending the JSON
> conversion algorithm in ad hoc ways, part of the benefit of microdata
> over RDFa (treeness) is destroyed and the benefit of being well-defined
> would be destroyed, too, for non-RDF consumption cases.


I don't have a strong opinion, but note that even before this change the  
algorithm produced a non-tree for the "Avenue Q" example [1] where the  
"adr" property is shared between two items using itemref. (In JSON, it is  
flattened.) If we want to ensure that RDF consumers don't depend on  
non-treeness, then this should change as well.


[1]  
http://www.whatwg.org/specs/web-apps/current-work/multipage/microdata.html#examples-4


--
Philip Jägenstedt
Core Developer
Opera Software


Re: [whatwg] The blockquote element spec vs common quoting practices

2011-07-12 Thread Bjartur Thorlacius

On Tue, 12 Jul 2011 at 09:15, Oli Studholme wrote:

> Firstly thank you (and you Jeremy!) for your input. This thread will
> help decide how the blockquote spec changes to accommodate the use
> cases I outlined, so the more input the better.


Thank you for your commentary, it is most appreciated.


> On Tue, Jul 12, 2011 at 2:52 AM, Bjartur Thorlacius wrote:
>> I'm not arguing against rendering attribution. On
>> the contrary, IMO user agents should render at least the title of the cited
>> resource.


> This is a can of worms as authors will want control over both content
> and style. Attributes turned into content are harder to style than
> content. Also attributes tend to be for either humans (@alt) or
> machines (@datetime), so displaying attributes (for humans) that
> contain data (for machines) generally gives bad results.


Datetimes will usually be presented in a localized format to humans.


> In the print use cases I found, sometimes attribution is inline after
> the last sentence and sometimes on a following line. This is in
> addition to having attribution in the prose surrounding the block
> quote, as currently recommended by the spec. How would the user agent
> know which way the author wants to present attribution?

By fetching and reading a linked stylesheet. I think it's easier to
style attributes than text nodes polluted with delimiters such as "from"
and "by" that make reordering hard.
More importantly, how is the author to know how the user wants
attribution presented?



> Again I have no idea how a user agent would follow these rules.
> Arbitrarily showing one thing in one viewport size and something else
> at a different size would be a bug (arbitrarily meaning without
> author/user intervention, such as via CSS).

A feature to one, a bug to another. The existence of the CSS height and
width media features suggests that catering style to varying viewport
sizes is desired by others than just me. I don't see why a user agent
should seek an author's permission to style a document for an unusually
sized viewport, nor require users to write their own stylesheets instead
of shipping customizable stylesheets.



> Love your phrase
> “superfluous screen space” btw ;)

:P


>> It's simply a question of
>>
>> Lorem ipsum
>>
>> Bjartur
>> on the second April, 1997
>>
>> vs
>>
>> Lorem ipsum
>
> You've got two additional problems in your example:
> * currently only the <ins>, <del> and <time> elements accept the
> datetime attribute, and this isn't even a valid datetime value (you
> wanted 1997-04-02)

Oops, you're correct; this should've been 1997-04-02. I'm proposing
adding a datetime attribute to <blockquote>.



> * the cite attribute must be a valid URL, and is for providing a link
> to more information about the quote (generally its source) – you can't
> use it for non-URL data

For a lack of a valid URI identifying myself, I used an unregistered
uri-scheme ("kennitala") and my national ID as the scheme-specific part.
The exact URI in question is unimportant to the example, but I see no
reason to restrict values of cite to locators only, as opposed to
identifiers in general. Quoting books identified by ISBN numbers seems
like a good enough use case to me.



> This proves Jeremy's earlier point about attributes being a bad place
> to store data. Unless you look at the source you’d never notice these
> mistakes.

Sure I would, had I actually tried to, say, render them or validate them
before posting them on the Internet. I refrained from doing so as I knew
this to be invalid markup, anyway. Were datetime to be a valid
attribute of blockquote



> I also note that your <footer> example contains a lot more content,
> the visible part being “Bjartur on the second April, 1997”. A
> potential rendering of the attributes in your second example would
> probably be something like “Bjartur Thorlacius 1997-04-02”, which I
> think isn’t as good. This refers to my first point about authors
> wanting to control the content.


No, that would be quite an odd rendering. More likely renderings:

Þann annan apríl 1997 skrifaði Bjartur Thorlacius:
> Lorem ipsum


On the second April, 1997 Bjartur Thorlacius wrote:
> Lorem ipsum

“ Lorem ipsum
— Bjartur Thorlacius

It all depends on the user's localized stylesheet.

Note that a datetime in a <time> element would have to be parsed just as a
date in a datetime attribute of a <blockquote>. They're both machine
readable (and that's the best way to internationalize dates).

> Finally two other strikes against attributes are they're harder for
> people learning HTML (which is one reason we have <section> over
> role="section" etc), and we already have three (I’d argue) perfectly
> good elements for the data you are suggesting adding via attributes:
> * <footer> for following-line attribution and notes
> * <time> for datetime information
> * <cite> and <a> for citation information

I've never heard this argument before. I thought we had <section> rather
than <div role="section"> because the latter has a high
noise-to-information ratio, leading to overly verbose constructs such as:



[content]
 


There’s also the possibility of adding another inline element, s

Re: [whatwg] The blockquote element spec vs common quoting practices

2011-07-12 Thread Oli Studholme
Hi Bjartur,

Firstly thank you (and you Jeremy!) for your input. This thread will
help decide how the blockquote spec changes to accommodate the use
cases I outlined, so the more input the better.

On Tue, Jul 12, 2011 at 2:52 AM, Bjartur Thorlacius
 wrote:
> I'm not arguing against rendering attribution. On
> the contrary, IMO user agents should render at least the title of the cited
> resource.

This is a can of worms as authors will want control over both content
and style. Attributes turned into content are harder to style than
content. Also attributes tend to be for either humans (@alt) or
machines (@datetime), so displaying attributes (for humans) that
contain data (for machines) generally gives bad results.

> Interactive user agents should additionally make the cited
> resource available in manner similar to how they present other hyperlinked
> resources.

In the print use cases I found, sometimes attribution is inline after
the last sentence and sometimes on a following line. This is in
addition to having attribution in the prose surrounding the block
quote, as currently recommended by the spec. How would the user agent
know which way the author wants to present attribution?

> Additionally user agents with superfluous screen space may render
> the datetime. Handheld renderings should of course not display the datetime
> without user interaction, but reserve the screen estate for more critical
> information, such as the quotation itself.

Again I have no idea how a user agent would follow these rules.
Arbitrarily showing one thing in one viewport size and something else
at a different size would be a bug (arbitrarily meaning without
author/user intervention, such as via CSS). Love your phrase
“superfluous screen space” btw ;)

> It's simply a question of
> 
>        Lorem ipsum
> 
> Bjartur
> on the second April, 1997
> 
> 
> vs
>  cite="kennitala:2112952019">
>        Lorem ipsum
> 

You've got two additional problems in your example:
* currently only the <ins>, <del> and <time> elements accept the
datetime attribute, and this isn't even a valid datetime value (you
wanted 1997-04-02)
* the cite attribute must be a valid URL, and is for providing a link
to more information about the quote (generally its source) – you can't
use it for non-URL data
This proves Jeremy's earlier point about attributes being a bad place
to store data. Unless you look at the source you’d never notice these
mistakes.

I also note that your <footer> example contains a lot more content,
the visible part being “Bjartur on the second April, 1997”. A
potential rendering of the attributes in your second example would
probably be something like “Bjartur Thorlacius 1997-04-02”, which I
think isn’t as good. This refers to my first point about authors
wanting to control the content.

Finally two other strikes against attributes are they're harder for
people learning HTML (which is one reason we have <section> over
role="section" etc), and we already have three (I’d argue) perfectly
good elements for the data you are suggesting adding via attributes:
* <footer> for following-line attribution and notes
* <time> for datetime information
* <cite> and <a> for citation information

There’s also the possibility of adding another inline element, which
could let someone credit an author of a quote, or e.g. to credit a
photographer of an image together with <figure> and <figcaption>.

For the reasons Jeremy mentioned, I actually hope the cite attribute
gets dropped in favour of a visible, explicit form of attribution.
While something like <cite> and <a> in a <footer> could work for
citation, I still don’t have a good idea about citing explicitly when
the citation is inline (on the last line of the block quote), or for
.

HTH

peace - oli studholme


Re: [whatwg] Microdata feedback

2011-07-12 Thread Henri Sivonen
On Thu, 2011-07-07 at 22:33 +, Ian Hickson wrote:
> The JSON algorithm now ends the crawl when it hits a loop, and replaces 
> the offending duplicate item with the string "ERROR".
> 
> The RDF algorithm preserves the loops, since doing so is possible with 
> RDF. Turns out the algorithm almost did this already, looks like it was an 
> oversight.

It seems to me that this approach creates an incentive for people who
want to do RDFesque things to publish deliberately non-conforming
microdata content that works the way they want for RDF-based consumers
but breaks for non-RDF consumers. If such content abounds and non-RDF
consumers are forced to support loopiness by extending the JSON
conversion algorithm in ad hoc ways, part of the benefit of microdata
over RDFa (treeness) is destroyed and the benefit of being well-defined
would be destroyed, too, for non-RDF consumption cases.

-- 
Henri Sivonen
hsivo...@iki.fi
http://hsivonen.iki.fi/



Re: [whatwg] The blockquote element spec vs common quoting practices

2011-07-12 Thread Jeremy Keith
Bjartur wrote:
> I'd like to reemphasize that:
>> *unsupported by user agents*
> So you're saying that because attributes aren't rendered by default, user 
> agents will ignore them and thus we should not use them?

It's not a matter of "should not." Because user agents ignore them, we *do not* 
use them. And the main reason why we don't use them is that there's little to 
be gained: the information isn't presented to the end user.

Wishful thinking isn't going to make the @cite attribute any more useful or 
more widely adopted (either by authors or user agents).

> Putting attribution inside <footer>s seems like a hack around lax support 
> for attributes.

No, putting attribution inside <footer>s solves the real-world use-cases 
that Oli has gathered together.

>> I'm not sure I understand the question. Do you mean "presentational" as in
>> "not conveying semantics" or "presentational" as in "visible"?
>> 
> Not conveying semantics.

How can you say that the <footer> element would not convey semantics, when it 
is defined as follows:

"The footer element represents a footer for its nearest ancestor sectioning 
content or sectioning root element. A footer typically contains information 
about its section such as who wrote it, links to related documents, copyright 
data, and the like."
—http://www.whatwg.org/specs/web-apps/current-work/multipage/sections.html#the-footer-element

...and the <blockquote> element is a sectioning root. The semantics of those 
two elements match up perfectly.

> Interactive user agents should additionally make the cited resource available 
> in manner similar to how they present other hyperlinked resources

Can you please give an example of user agents presenting *invisible* 
hyperlinked resources? @longdesc, perhaps?

Jeremy

-- 
Jeremy Keith

a d a c t i o

http://adactio.com/




Re: [whatwg] Proposal for a MediaSource API that allows sending media data to a HTMLMediaElement

2011-07-12 Thread Harald Alvestrand
Not a comment directly on the spec, but you might want to check what 
people are suggesting for interactive media handling in the WEBRTC 
working group.


Streaming is different from interactive media, but it would be a shame 
to have incompatibilities that can be avoided.


On 07/11/11 20:42, Aaron Colwell wrote:

Hi,

Based on comments in the File API Streaming Blobs thread and my
Extending HTML 5 video for adaptive streaming thread, I decided to take
a stab at writing a MediaSource API spec for streaming data to a media
tag.

Please take a look at the spec and provide some feedback.

I've tried to start with the simplest thing that would work and hope to
expand from there if need be. For now, I'm intentionally not trying to solve
the generic streaming file case because I believe there might be media
specific requirements around handling seeking especially if we intend to
support non-packetized media streams like WAV.

If the feedback is generally positive on this approach, I'll start working
on patches for WebKit & Chrome so people can experiment with an actual
implementation.

Thanks,
Aaron