Re: [whatwg] <a onlyreplace>

2009-10-16 Thread Jonas Sicking
On Fri, Oct 16, 2009 at 11:06 AM, Tab Atkins Jr. wrote:
> Promoting this reply to top-level because I think it's crazy good.
>
> On Fri, Oct 16, 2009 at 11:09 AM, Aryeh Gregor wrote:
>> On Fri, Oct 16, 2009 at 10:16 AM, Tab Atkins Jr. wrote:
>>> As well, this still doesn't answer the question of what to do with
>>> script links between the static content and the original page, like
>>> event listeners placed on content within the <static>.  Do they get
>>> preserved?  How would that work?  If they don't, then some of the
>>> benefit of 'static' content is lost, since it will be inoperable for a
>>> moment after each pageload while the JS reinitializes.
>>
>> Script links should be preserved somehow, ideally.  I would like to
>> see this be along the lines of "AJAX reload of some page content,
>> without JavaScript and with automatically working URLs".
> [snip]
>> I'm drawn back to my original proposal.  The idea would be as follows:
>> instead of loading the new page in place of the old one, just parse
>> it, extract the bit you want, plug that into the existing DOM, and
>> throw away the rest.  More specifically, suppose we mark the dynamic
>> content instead of the static.
>>
>> Let's say we add a new attribute to <a>, like <a onlyreplace="foo">,
>> where "foo" is the id of an element on the page.  Or better, a
>> space-separated list of elements.  When the user clicks such a link,
>> the browser should do something like this: change the URL in the
>> navigation bar to the indicated URL, and retrieve the indicated
>> resource and begin to parse it.  Every time an element is encountered
>> that has an id in the onlyreplace list, if there is an element on the
>> current page with that id, remove the existing element and then add
>> the element from the new page.  I guess this should be done in the
>> usual fashion, first appending the element itself and then its
>> children recursively, leaf-first.
>
> This. Is. BRILLIANT.

[snip]

> Thoughts?

We actually have a similar technology in XUL called "overlays" [1],
though we use that for a wholly different purpose.

Anyhow, this is certainly an interesting suggestion. You can actually
mostly implement it using the primitives in HTML5 already. By using
pushState and XMLHttpRequest you can download the page and change the
current page's URI, and then use the DOM to replace the needed parts.
The only thing that you can't do is "stream" in the new content since
mutations aren't dispatched during parsing.

For some reason I'm still a bit uneasy about this feature; it feels a
bit fragile. One thing I can think of is what happens if the load
stalls or fails halfway through. Then you could end up with a page that
contains half of the old page and half of the new. Also, what should
happen if the user presses the 'back' button? I don't know how big a
problem these issues are, and they are quite possibly fixable. I'm
definitely curious to hear what developers who would actually use this
think of the idea.

/ Jonas

[1] https://developer.mozilla.org/en/XUL_Overlays


Re: [whatwg] Canvas Proposal: aliasClipping property

2009-10-16 Thread Oliver Hunt


On Oct 16, 2009, at 8:10 PM, Robert O'Callahan wrote:

On Sat, Oct 17, 2009 at 4:01 AM, Philip Taylor wrote:

Yes, mostly. 
http://philip.html5.org/tests/canvas/suite/tests/index.2d.composite.uncovered.html
has relevant tests, matching what I believed the spec said - on
Windows, Opera 10 passes them all, Firefox 3.5 passes all except
'copy' (https://bugzilla.mozilla.org/show_bug.cgi?id=366283), Safari 4
and Chrome 3 fail them all.

(Looking at the spec quickly now, I don't see anything that actually
states this explicitly - the only reference to infinite transparent
black bitmaps is when drawing shadows. But
http://www.whatwg.org/specs/web-apps/current-work/multipage/the-canvas-element.html#drawing-model
is phrased in terms of rendering shapes onto an image, then
compositing the image within the clipping region, so I believe it is
meant to work as I said (and definitely not by compositing only within
the extent of the shape drawn onto the image).)

Yes, I think that's pretty clear as written.

I think there is a reasonable argument that the spec should be  
changed so that compositing happens only within the shape. (In cairo  
terminology, all operators should be bounded.) Perhaps that's what  
Safari and Chrome developers want.


This is the behaviour of the original canvas implementation (and it
makes a degree of sense -- it is possible to fake composition implying
an infinite 0-alpha surrounding when the default composite operator
does not do this, but vice versa is not possible).  That said, I
suspect we are unable to do anything about this anymore :-/




--Oliver



Re: [whatwg] Canvas Proposal: aliasClipping property

2009-10-16 Thread Robert O'Callahan
On Sat, Oct 17, 2009 at 4:01 AM, Philip Taylor wrote:

> Yes, mostly.
> http://philip.html5.org/tests/canvas/suite/tests/index.2d.composite.uncovered.html
> has relevant tests, matching what I believed the spec said - on
> Windows, Opera 10 passes them all, Firefox 3.5 passes all except
> 'copy' (https://bugzilla.mozilla.org/show_bug.cgi?id=366283), Safari 4
> and Chrome 3 fail them all.
>
> (Looking at the spec quickly now, I don't see anything that actually
> states this explicitly - the only reference to infinite transparent
> black bitmaps is when drawing shadows. But
>
> http://www.whatwg.org/specs/web-apps/current-work/multipage/the-canvas-element.html#drawing-model
> is phrased in terms of rendering shapes onto an image, then
> compositing the image within the clipping region, so I believe it is
> meant to work as I said (and definitely not by compositing only within
> the extent of the shape drawn onto the image).)
>

Yes, I think that's pretty clear as written.

I think there is a reasonable argument that the spec should be changed so
that compositing happens only within the shape. (In cairo terminology, all
operators should be bounded.) Perhaps that's what Safari and Chrome
developers want.

Rob
-- 
"He was pierced for our transgressions, he was crushed for our iniquities;
the punishment that brought us peace was upon him, and by his wounds we are
healed. We all, like sheep, have gone astray, each of us has turned to his
own way; and the LORD has laid on him the iniquity of us all." [Isaiah
53:5-6]


[whatwg] Dangling reference: Progress Events

2009-10-16 Thread Mark Pilgrim
r4133 removed progress events, but the reference section still lists
the "Progress Events" spec as a normative reference.

-Mark


Re: [whatwg] <object> behavior

2009-10-16 Thread Boris Zbarsky

On 10/16/09 8:21 PM, Ben Laurie wrote:

The point is that if I think I'm sourcing something safe but it can be
overridden by the MIME type, then I have a problem.


Perhaps we need an attribute on <object> that says to only render the
data if the server-provided type and @type match?  That way you can
address your use case by setting that attribute, and we don't enable
attacks on random servers by allowing @type to override the
server-provided type.


-Boris


Re: [whatwg] <object> behavior

2009-10-16 Thread Ben Laurie
On Fri, Oct 16, 2009 at 6:04 PM, Mike Shaver wrote:
> On Fri, Oct 16, 2009 at 5:56 PM, Ben Laurie wrote:
>> On Fri, Oct 16, 2009 at 5:48 PM, Boris Zbarsky wrote:
>>> This is, imo, a much bigger problem than that of people embedding content
>>> from an untrusted site and getting content X instead of content Y,
>>> especially because content X can't actually access the page that contains
>>> it, right?
>>
>> Flash can, for example.
>
> If Flash can do bad things, then sourcing Flash from an untrusted site
> and getting malicious Flash with the expected MIME type doesn't seem
> like it's any better than getting malicious Quicktime or Java or
> whatever via a switched MIME type.  Is there something I'm missing?

The point is that if I think I'm sourcing something safe but it can be
overridden by the MIME type, then I have a problem.

>
> Mike
>


Re: [whatwg] <a onlyreplace>

2009-10-16 Thread Tab Atkins Jr.
On Fri, Oct 16, 2009 at 5:08 PM, Markus Ernst wrote:
>> (Also, in your examples you probably want @onlyreplace="content
>> navigation", since your nav is changing from page to page as well.
>
> Indeed. Or, maybe I'd do it slightly differently, somehow like:
>
> <ul id="navigation">
>   <li><a href="page1.html" onlyreplace="content" class="active"
>     onClick="resetNavigation(this)">Broccoli</a></li>
>   <li><a href="page2.html" onlyreplace="content"
>     onClick="resetNavigation(this)">Leek</a></li>
> </ul>
>
> The resetNavigation() function then takes the class attribute from the old
> link and adds it to the clicked one. So the navigation can be static, and
> its appearance remains consistent whether page2.html is completely
> loaded, or only the parts defined in @onlyreplace.

Yup, that'd work too.

~TJ


Re: [whatwg] <a onlyreplace>

2009-10-16 Thread Markus Ernst

Tab Atkins Jr. wrote:
[...]


The body of page1.html could look like:

<h1>Recipes for vegetarians</h1>
<div id="content">
  <h2>Lovely broccoli</h2>
  <p>Take the broccoli and do the following:</p>
  <p>...</p>
</div>
<ul id="navigation">
  <li><a href="page1.html" onlyreplace="content">Broccoli</a></li>
  <li><a href="page2.html" onlyreplace="content">Leek</a></li>
</ul>

The body of page2.html:

<h1>Recipes for meat eaters</h1>
<div id="content">
  <h2>Lovely leek</h2>
  <p>Take the leek and do the following:</p>
  <p>...</p>
</div>
<ul id="navigation">
  <li><a href="page1.html" onlyreplace="content">Broccoli</a></li>
  <li><a href="page2.html" onlyreplace="content">Leek</a></li>
</ul>


[...]


(Also, in your examples you probably want @onlyreplace="content
navigation", since your nav is changing from page to page as well.


Indeed. Or, maybe I'd do it slightly differently, somehow like:


<ul id="navigation">
  <li><a href="page1.html" onlyreplace="content" class="active"
    onClick="resetNavigation(this)">Broccoli</a></li>
  <li><a href="page2.html" onlyreplace="content"
    onClick="resetNavigation(this)">Leek</a></li>
</ul>

The resetNavigation() function then takes the class attribute from the
old link and adds it to the clicked one. So the navigation can be
static, and its appearance remains consistent whether page2.html is
completely loaded, or only the parts defined in @onlyreplace.
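
A minimal sketch of such a resetNavigation(), assuming the markup above
and an "active" class as the marker (both are illustrative, not part of
any proposal):

  function resetNavigation(clicked) {
    // Move the marker class from the previously active link to the
    // clicked one, so the static navigation stays consistent.
    var links = clicked.parentNode.parentNode.getElementsByTagName('a');
    for (var i = 0; i < links.length; i++) {
      if (links[i].className === 'active' && links[i] !== clicked) {
        links[i].className = '';
        clicked.className = 'active';
        break;
      }
    }
  }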


Re: [whatwg] <object> behavior

2009-10-16 Thread Mike Shaver
On Fri, Oct 16, 2009 at 5:56 PM, Ben Laurie wrote:
> On Fri, Oct 16, 2009 at 5:48 PM, Boris Zbarsky wrote:
>> This is, imo, a much bigger problem than that of people embedding content
>> from an untrusted site and getting content X instead of content Y,
>> especially because content X can't actually access the page that contains
>> it, right?
>
> Flash can, for example.

If Flash can do bad things, then sourcing Flash from an untrusted site
and getting malicious Flash with the expected MIME type doesn't seem
like it's any better than getting malicious Quicktime or Java or
whatever via a switched MIME type.  Is there something I'm missing?

Mike


Re: [whatwg] <object> behavior

2009-10-16 Thread Ben Laurie
On Fri, Oct 16, 2009 at 5:48 PM, Boris Zbarsky wrote:
> On 10/16/09 4:12 PM, Ben Laurie wrote:
>>
>> I realise this is only one of dozens of ways that HTML is unfriendly
>> to security, but, well, this seems like a bad idea - if the page
>> thinks it is embedding, say, some flash, it seems like a pretty bad
>> idea to allow the (possibly untrusted) site providing the "flash" to
>> run whatever it wants in its place.
>
> This cuts both ways.  If a site allows me to upload images and I upload an
> HTML file with some script in it and tell it it's a GIF (e.g. via the name)
> and then put an <object type="text/html"
> data="http://this.other.site/my.gif"> on my site...  then I just injected
> script into a different domain if we let @type override the server-provided
> header.
>
> This is, imo, a much bigger problem than that of people embedding content
> from an untrusted site and getting content X instead of content Y,
> especially because content X can't actually access the page that contains
> it, right?

Flash can, for example.

>
> -Boris
>


Re: [whatwg] <object> behavior

2009-10-16 Thread Boris Zbarsky

On 10/16/09 4:12 PM, Ben Laurie wrote:

I realise this is only one of dozens of ways that HTML is unfriendly
to security, but, well, this seems like a bad idea - if the page
thinks it is embedding, say, some flash, it seems like a pretty bad
idea to allow the (possibly untrusted) site providing the "flash" to
run whatever it wants in its place.


This cuts both ways.  If a site allows me to upload images and I upload
an HTML file with some script in it and tell it it's a GIF (e.g. via the
name) and then put an <object type="text/html"
data="http://this.other.site/my.gif"> on my site...  then I just
injected script into a different domain if we let @type override the
server-provided header.


This is, imo, a much bigger problem than that of people embedding 
content from an untrusted site and getting content X instead of content 
Y, especially because content X can't actually access the page that 
contains it, right?


-Boris


Re: [whatwg] HTMLness bit on script-created documents

2009-10-16 Thread Ian Hickson
On Thu, 8 Oct 2009, Henri Sivonen wrote:
>
> Gecko currently looks at the doctype passed to createDocument() in order to
> decide what interfaces to offer on the returned document and in order to
> determine if the HTMLness bit gets set.

All interfaces should be supported, per HTML5.

The bit should not be set, per HTML5.


> DOM Level 3 Core mentions that DOM Level 2 HTML specifies a method 
> called createHTMLDocument(). I see such a method in DOM Level 2 HTML CR 
> http://www.w3.org/TR/2000/CR-DOM-Level-2-2510/html.html but I don't 
> see it in the REC http://www.w3.org/TR/DOM-Level-2-HTML/html.html. Gecko 
> doesn't implement this method but Opera and WebKit do.
>
> Is there a reason why HTML5 doesn't mention createHTMLDocument()?

As you say, it wasn't in the DOM2 HTML REC. I'd rather not have it at all, 
if we don't need it.


On Thu, 8 Oct 2009, Olli Pettay wrote:
>
> The HTMLness bit wasn't implemented because of ACID3. It was implemented
> because we wanted .createDocument() to be able to return documents
> which might get created in other ways too (like loading a page). So it
> is possible to create svg/html/xhtml/etc documents.

That's the interface, but why the HTMLness bit? (Affects things like 
document.write().)


Unless there's a good use case, I would suggest we don't add a way to 
create such documents just for the sake of it.

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'


Re: [whatwg] Charset sniffing from XML prolog

2009-10-16 Thread Ian Hickson
On Wed, 7 Oct 2009, Kartikaya Gupta wrote:
>
> If a document is served as text/html, but contains an XML prolog with an 
> encoding attribute, it seems that Firefox, Opera, and Chrome all
> pick up the encoding from the prolog and use it when parsing the rest of 
> the document. (IE6 does not). The HTML5 spec doesn't seem to include 
> XML-prolog checking in its encoding sniffing algorithm, should it?
> <?xml version="1.0" encoding="utf-8"?>
> insert utf-8 content here, or alert(document.inputEncoding) for 
> browsers that support it

On Thu, 8 Oct 2009, Kartikaya Gupta wrote:
> 
> So then is this behavior getting axed or specced? The site in question 
> that relies on this behavior is http://bell.mobi/primary - it's not as 
> noticeable in the english-locale version but if you switch to a french 
> locale you get a bunch of french encoded as utf-8. Browsers with the 
> prolog sniffing will render it fine but others will show garbage.
> 
> I'd be happier with not having to change my code to deal with this 
> website, since it will occasionally show garbage even in utf-8.

UTF-8 is detectable, so if there's no other encoding declarations, and if 
this is the only site we know of, I'd rather encourage you to add fallback 
UTF-8 detection (as allowed by the spec) rather than add this.
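
A simplified sketch of what such fallback detection can look like,
given the document's bytes as an array of integer values (this checks
only the multi-byte sequence patterns; a real detector would also
reject overlong forms):

  function looksLikeUtf8(bytes) {
    var i = 0;
    while (i < bytes.length) {
      var b = bytes[i];
      // Determine how many continuation bytes this lead byte needs.
      var n = b < 0x80 ? 0 :
              (b & 0xE0) === 0xC0 ? 1 :
              (b & 0xF0) === 0xE0 ? 2 :
              (b & 0xF8) === 0xF0 ? 3 : -1;
      if (n < 0) return false;  // stray continuation or invalid lead
      for (var j = 1; j <= n; j++) {
        if (((bytes[i + j] | 0) & 0xC0) !== 0x80) return false;
      }
      i += n + 1;
    }
    return true;
  }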

Since IE apparently doesn't do this, I'd also rather not add yet more
features like this.

So in the absence of more compelling reasons to add this, I'd rather get 
Opera and WebKit to remove the support for this, than add more. (As I 
understand it, Mozilla's new HTML5 parser already removes support for this 
particular "feature".)

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'


Re: [whatwg] <a onlyreplace>

2009-10-16 Thread Tab Atkins Jr.
On Fri, Oct 16, 2009 at 4:10 PM, Markus Ernst wrote:
> Yes it looks like an AJAX killer.

Well, for a particular common, useful pattern.  AJAX will still be
alive and well for solving more general classes of problems.

> Actually the problem I mentioned for Aryeh's first proposal remains - still,
> a web designer could go wrong, for example when making a static website by
> changing another one he/she has made earlier:
>
> The body of page1.html could look like:
>
> <h1>Recipes for vegetarians</h1>
> <div id="content">
>   <h2>Lovely broccoli</h2>
>   <p>Take the broccoli and do the following:</p>
>   <p>...</p>
> </div>
> <ul id="navigation">
>   <li><a href="page1.html" onlyreplace="content">Broccoli</a></li>
>   <li><a href="page2.html" onlyreplace="content">Leek</a></li>
> </ul>
>
> The body of page2.html:
>
> <h1>Recipes for meat eaters</h1>
> <div id="content">
>   <h2>Lovely leek</h2>
>   <p>Take the leek and do the following:</p>
>   <p>...</p>
> </div>
> <ul id="navigation">
>   <li><a href="page1.html" onlyreplace="content">Broccoli</a></li>
>   <li><a href="page2.html" onlyreplace="content">Leek</a></li>
> </ul>
>
> Note that the author forgot to change the page header of the meat eaters
> site he/she had used as raw material. The author will test the site and
> always see it correctly, while someone who comes from a deep link will see
> the meat eaters header.
>
> Anyway I think that this error is much less likely to be made with the <a onlyreplace> solution. In many cases, such as template-based CMS sites, the
> static elements are made in one place only, anyway. I think this is a
> problem we could live with, in view of the benefits that this solution
> brings.

Ah, right, that is a potential issue.  A non-obvious change that can be
missed even by an author doing proper checks.

Still, as you say, most sites these days are produced by CMSes that
centralize the static template, so a change in one place would be
properly reflected everywhere.

(Also, in your examples you probably want @onlyreplace="content
navigation", since your nav is changing from page to page as well.
That's a bug that would be found out immediately, though.)

~TJ


Re: [whatwg] <a onlyreplace>

2009-10-16 Thread Markus Ernst

Tab Atkins Jr. wrote:

Promoting this reply to top-level because I think it's crazy good.

[...]

Let's say we add a new attribute to <a>, like <a onlyreplace="foo">,
where "foo" is the id of an element on the page.  Or better, a
space-separated list of elements.  When the user clicks such a link,
the browser should do something like this: change the URL in the
navigation bar to the indicated URL, and retrieve the indicated
resource and begin to parse it.  Every time an element is encountered
that has an id in the onlyreplace list, if there is an element on the
current page with that id, remove the existing element and then add
the element from the new page.  I guess this should be done in the
usual fashion, first appending the element itself and then its
children recursively, leaf-first.


This. Is. BRILLIANT.


Yes it looks like an AJAX killer.


The only problem I can see with this is that it's possible for authors
to believe that they only need to actually write a single full page,
and can just link to fragments containing only the chunk of content to
be replaced.  This would mostly break bookmarking and deeplinking, as
visitors would just receive a chunk of unstyled content separated from
the overall page template.  However, because it breaks so *visibly*
and reliably (unlike, say, framesets, which just break bookmarking by
sending you to the 'main page'), I think there would be sufficient
pressure for authors to get this right, especially since it's so
*easy* to get it right.


Actually the problem I mentioned for Aryeh's first proposal remains -
still, a web designer could go wrong, for example when making a static
website by changing another one he/she has made earlier:


The body of page1.html could look like:

<h1>Recipes for vegetarians</h1>
<div id="content">
  <h2>Lovely broccoli</h2>
  <p>Take the broccoli and do the following:</p>
  <p>...</p>
</div>
<ul id="navigation">
  <li><a href="page1.html" onlyreplace="content">Broccoli</a></li>
  <li><a href="page2.html" onlyreplace="content">Leek</a></li>
</ul>

The body of page2.html:

<h1>Recipes for meat eaters</h1>
<div id="content">
  <h2>Lovely leek</h2>
  <p>Take the leek and do the following:</p>
  <p>...</p>
</div>
<ul id="navigation">
  <li><a href="page1.html" onlyreplace="content">Broccoli</a></li>
  <li><a href="page2.html" onlyreplace="content">Leek</a></li>
</ul>

Note that the author forgot to change the page header of the meat eaters 
site he/she had used as raw material. The author will test the site and 
always see it correctly, while someone who comes from a deep link will 
see the meat eaters header.


Anyway I think that this error is much less likely to be made with the
<a onlyreplace> solution. In many cases, such as template-based CMS
sites, the static elements are made in one place only, anyway. I think
this is a problem we could live with, in view of the benefits that this
solution brings.


Re: [whatwg] Canvas Proposal: aliasClipping property

2009-10-16 Thread Robert O'Callahan
On Sat, Oct 17, 2009 at 5:47 AM, Charles Pritchard wrote:

> In regard to this: 'There is currently no definition of what the "extent"
> of a shape is'
>
> While I want a common standard, and I think we are in agreement here that
> we'll be defining Image A as an infinite bitmap, I believe that this
> statement should be addressed.
>

If nothing in the spec depends on a definition of the "extent" of a shape,
then the spec should not define it.

Rob
-- 
"He was pierced for our transgressions, he was crushed for our iniquities;
the punishment that brought us peace was upon him, and by his wounds we are
healed. We all, like sheep, have gone astray, each of us has turned to his
own way; and the LORD has laid on him the iniquity of us all." [Isaiah
53:5-6]


Re: [whatwg] <a onlyreplace>

2009-10-16 Thread Tab Atkins Jr.
A few public responses to issues/questions brought up in IRC: (thanks,
Aryeh and Philip!)

How is this better than <iframe> and <a target>?
=
It's significantly better in multiple ways, actually.

1. <iframe>s, like frames before them, break bookmarking.  If a user
bookmarks the page and returns to it later, or gets deeplinked via a
search engine or a link from a friend, the <iframe> won't show the
correct content.  The only way around this is some fairly non-trivial
url-hacking with javascript, altering the displayed url as the user
navigates the iframe, and parsing a deeplink url into an appropriate
url for the iframe on initial pageload.  @onlyreplace, on the other
hand, automatically works perfectly with bookmarking.  The UA still
changes urls and inserts history appropriately as you navigate, and on
a fresh pageload it just requests the ordinary static page showing the
appropriate content.

2. <a target> can only navigate one iframe at a time.  Many/most
sites, though, have multiple dynamic sections scattered throughout the
page.  The main site for my company, frex, has 3 (content,
breadcrumbs, and section nav) which *cannot* be combined to display as
a single <iframe>, at least not without including a whole bunch of
static content as well.  You'd have to use javascript to hook the links
and manually navigate the additional iframes.  @onlyreplace, on the
other hand, handles this seamlessly - just include multiple ids in the
attribute value.

3. <iframe>s require you to architect your site around them.  Rather
than a series of independent pages, you must create a single master
page and then a number of content-chunk mini-pages.  This breaks
normal authoring practices (though in some ways it's easier), and
requires you to work hard to maintain accessibility and such in the
face of these atrophied mini-pages.  @onlyreplace works on full,
ordinary pages.  It's *possible* to link to a content-chunk mini-page
instead, but this will spectacularly break if you ever deeplink
straight to one of the pages, so it should become automatic for
authors to do this correctly.

4. <iframe>s have dubious accessibility and search effects.  I don't
know if bots can navigate <a target> links appropriately.  I also
believe that this causes problems with screen-readers.  While either
of these sets of UAs can be rewritten to handle <iframe>s better (and
handle @onlyreplace replacement as well), with @onlyreplace they
*also* have the option of just completely ignoring the attribute and
navigating the site as an ordinary multi-page app.  Legacy UAs will
automatically do so, providing perfect backwards compatibility.


Isn't it inefficient to request the whole page and then throw most of
it out?  With proper AJAX you can just request the bits you want.
==
This is a valid complaint, but one which I don't think is much of a
problem for several reasons.

1. One of the big beneficiaries of @onlyreplace will be fairly
ordinary sites that are currently using an ordinary multi-page
architecture.  All they have to do is add a single tag to the <head>
of their pages, and they automatically get the no-flicker refresh of a
single-page app.  These sites are *already* grabbing the whole page on
each request, so @onlyreplace won't make them take any *additional*
bandwidth.  It will merely make the user experience smoother by
reducing flicker and keeping js-heavy elements of the page template
alive.

2. Even though site templates are usually weightier than the dynamic
portions of a site, it's still not a very significant wastage.  For
comparison, my company's main site is roughly 16kb of template, and
somewhere around 2-3k of dynamic page content.  (Aryeh - I gave you
slightly different numbers in chat because I was counting wrong.)  So
that's a good 85% of each request being thrown away as irrelevant.
However, it's also *only 16kb*, and that's UNCOMPRESSED - after
standard gzip compression the template is worth maybe 5kb.  So I waste
5kb of bandwidth per request.  Big deal.  (According to Philip`, my
company's site's weight is just on the low side of average.)

3. Because this is a declarative mechanism (specifying WHAT you want,
not HOW to get it), it has great potential for transparent
optimizations behind the scenes.  For example, the browser could tell
the server which bits it's interested in replacing, and the server
could automatically strip full pages down to only those chunks.  This
would eliminate virtually all bandwidth waste, while still being
completely transparent to the author - they just create ordinary full
static pages.  Heck, you could even handle this yourself with JS and a
bit of server-side coding, intercepting clicks and rewriting the urls
to pass the @onlyreplace data in a query parameter, and have a
server-side script determine what to return based on that.  Less
automatic, but fairly simple, and still easier than using JS to do
this in the normal AJAX manner.  (And UAs that don't run javascript
but do supp
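
A rough sketch of that JS fallback (the "onlyreplace" query parameter
is a made-up convention the server would have to understand, and
fetchAndSwap is a hypothetical helper doing the XHR-plus-replaceChild
work, finishing with pushState to the clean URL):

  document.addEventListener('click', function (e) {
    var a = e.target;
    while (a && a.nodeName !== 'A') a = a.parentNode;
    if (!a || !a.getAttribute('onlyreplace')) return;
    e.preventDefault();
    var ids = a.getAttribute('onlyreplace');
    var sep = a.href.indexOf('?') === -1 ? '?' : '&';
    // Ask the server for just the named chunks, then swap them in.
    fetchAndSwap(a.href + sep + 'onlyreplace=' + encodeURIComponent(ids),
                 ids.split(/\s+/), a.href);
  }, false);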

Re: [whatwg] Canvas Proposal: aliasClipping property

2009-10-16 Thread Robert O'Callahan
On Sat, Oct 17, 2009 at 5:47 AM, Charles Pritchard wrote:

> Then, should we explicitly state it, so that the next versions of Chrome
> and Safari are pressured to follow?
>

That shouldn't be necessary. If the composition operation was limited to the
extents of the source shape, the spec would have to say this explicitly and
define what those extents are. I don't see how you can argue from silence
that the composition operation should be bounded to some unspecified region.

Rob
-- 
"He was pierced for our transgressions, he was crushed for our iniquities;
the punishment that brought us peace was upon him, and by his wounds we are
healed. We all, like sheep, have gone astray, each of us has turned to his
own way; and the LORD has laid on him the iniquity of us all." [Isaiah
53:5-6]


Re: [whatwg] <object> behavior

2009-10-16 Thread Ben Laurie
On Thu, Aug 13, 2009 at 10:05 PM, Ian Hickson wrote:
> On Thu, 6 Aug 2009, Andrew Oakley wrote:
>>
>> The rules in the HTML5 spec for which plugin to load for an <object> do
>> not seem to be followed by any browser, and in some cases are different
>> to behavior that is common to Opera, Webkit and Gecko (I haven't tested
>> with IE due to its lack of nsplugin support).
>>
>> Most notably HTML5 says that the Content-Type header is used in
>> preference to the type attribute, whereas the browsers seem to honour
>> the attribute in preference to the header.  (If the spec is changed to
>> match the browsers behaviour then the conditions on when to load a new
>> plugin also need to be changed.)  HTML5 also seems to prefer the type
>> attribute on 

Re: [whatwg] framesets

2009-10-16 Thread Peter Brawley

Rimantas


Eh? He didn't say that; you're quoting me.
  
I did, in fact, at least I meant that.


I wrote "browsers  own bookmarks, database users own database table 
rows, so usually you shouldn't bookmark database table rows, and much 
follows from that, therefore saying server issues don't bear on this 
issue is IMO astonishingly & quite wrongly blinkered." You agree with it?



Framesets do not make it easy. They make it harder to bookmark such a URL,
but in no way do they make it easier for your app to block it.
You still must do all the logic on the server side.


There we disagree.

PB

-

Rimantas Liubertas wrote:

Eh? He didn't say that; you're quoting me.



I did, in fact, at least I meant that.

  

Browsers own bookmarks, database
users own database table rows, so it must be possible in database
maintenance webapps to prevent bookmarking of elements which represent
database table rows. And again, I agree that framesets do not by themselves
block such bookmarking; they just make it easy to do so.



Framesets do not make it easy. They make it harder to bookmark such a URL,
but in no way do they make it easier for your app to block it.
You still must do all the logic on the server side.


Regards,
Rimantas
--
http://rimantas.com/





Re: [whatwg] window.setInterval if visible.

2009-10-16 Thread Gregg Tavares
On Thu, Oct 15, 2009 at 1:53 PM, Markus Ernst wrote:

> Gregg Tavares wrote:
>
>> I was wondering if there has been a proposal for either an optional
>> argument to setInterval that makes it only callback if the window is visible
>> OR maybe a window.setRenderInterval.
>>
>> Here's the issue that seems like it needs to be solved.
>>
>> Currently, AFAIK, the only way to do animation in HTML5 + JavaScript is
>> using setInterval. That's great but it has the problem that even when the
>> window is minimized or the page is not the front tab, JavaScript has no way
>> to know to stop animating.  So, for a CPU heavy animation using canvas 2d or
>> canvas 3d, even a hidden tab uses lots of CPU. Of course the browser does
>> not copy the bits from the canvas to the window but JavaScript is still
>> drawing hundreds of thousands of pixels to the canvas's internal image
>> buffer through canvas commands.
>>
>
> [...]
>
>>
>> There are probably other possible solutions to this problem but it seems
>> like the easiest would be either
>>
>> *) adding an option to window.setInterval to only call back if the window
>> is visible
>>
>> *) adding window.setIntervalIfVisible (same as the previous option really)
>>
>> A possibly better solution would be
>>
>> *) element.setIntervalIfVisible
>>
>> Which would only call the callback if that particular element is visible.
>>
>
> From a performance point of view it might even be worth thinking about the
> contrary: Allow UAs to stop the execution of scripts on non-visible windows
> or elements by default, and provide a method to explicitly specify if the
> execution of a script must not be stopped.
>
> If you provide methods to check the visibility of a window or element, you
> leave it up to the author to use them or not. I think performance issues
> should rather be up to the UA.
>

I agree that would be ideal. Unfortunately, current webpages already expect
setInterval to function even when they are not visible. Web-based chat and
mail clients come to mind as examples. So, unfortunately, it doesn't seem
like a problem a UA can solve on its own.

On the other hand, if the solution is as simple as adding a flag to
setInterval, then it's at least a very simple change for those apps that
want to not hog the CPU when not visible.
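
For illustration, a page-level approximation of the proposed flag,
sketched with document.hidden (from the later Page Visibility API)
standing in for a real setInterval option; drawFrame is a hypothetical
render callback:

  function setIntervalIfVisible(callback, ms) {
    return setInterval(function () {
      // Skip the animation work while the page is not visible.
      if (!document.hidden) callback();
    }, ms);
  }

  var timer = setIntervalIfVisible(drawFrame, 16);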


[whatwg] <a onlyreplace>

2009-10-16 Thread Tab Atkins Jr.
Promoting this reply to top-level because I think it's crazy good.

On Fri, Oct 16, 2009 at 11:09 AM, Aryeh Gregor wrote:
> On Fri, Oct 16, 2009 at 10:16 AM, Tab Atkins Jr. wrote:
>> As well, this still doesn't answer the question of what to do with
>> script links between the static content and the original page, like
>> event listeners placed on content within the <static>.  Do they get
>> preserved?  How would that work?  If they don't, then some of the
>> benefit of 'static' content is lost, since it will be inoperable for a
>> moment after each pageload while the JS reinitializes.
>
> Script links should be preserved somehow, ideally.  I would like to
> see this be along the lines of "AJAX reload of some page content,
> without JavaScript and with automatically working URLs".
[snip]
> I'm drawn back to my original proposal.  The idea would be as follows:
> instead of loading the new page in place of the new one, just parse
> it, extract the bit you want, plug that into the existing DOM, and
> throw away the rest.  More specifically, suppose we mark the dynamic
> content instead of the static.
>
> Let's say we add a new attribute to <a>, like <a onlyreplace="foo">,
> where "foo" is the id of an element on the page.  Or better, a
> space-separated list of elements.  When the user clicks such a link,
> the browser should do something like this: change the URL in the
> navigation bar to the indicated URL, and retrieve the indicated
> resource and begin to parse it.  Every time an element is encountered
> that has an id in the onlyreplace list, if there is an element on the
> current page with that id, remove the existing element and then add
> the element from the new page.  I guess this should be done in the
> usual fashion, first appending the element itself and then its
> children recursively, leaf-first.

This. Is. BRILLIANT.

Single-page apps are already becoming common for js-heavy sites.  The
obvious example is something like Gmail, but it's becoming more common
everywhere.  The main benefit of doing this is that you never dump the
script context, so you only have to parse/execute/apply scripting
*once* across the page, making really heavy libraries actually usable.
 In fact, writing a single-page app was explicitly given as a
suggestion in the "Global Script" thread.  Even in contexts with
lighter scripts, there can still be substantial run-time rewriting of
the page which a single-page app can avoid doing multiple times (frex,
transforming a nested list into a tree control).

The problem, though, is that single-page apps are currently a bit
clunky to write.  They require javascript to function, and the
necessary code is relatively large and clunky, even in libraries like
jQuery which make the process much simpler.  It requires you to
architect your site around the design, either producing a bunch of
single-widget files that you query for and slap into place, or some
relatively complex client-side logic to parse data structures into
HTML.  It's also very hard to get accessibility and graceful
degradation right, requiring you to basically completely duplicate
everything in a static form.  Finally, preserving
bookmarkability/general deeplinking (such as from a search engine)
requires significant effort with history management and url hacking.

Aryeh's suggestion, though, solves *all* of these problems with a
single trivial attribute.  You first design a static multi-page site
like normal, with the only change being this attribute on your
navigation links specifying the dynamic/replaceable portions of the
page.  In a legacy client, then, you have a perfectly serviceable
multipage site, with the only problems being the reloading of js and
such on each pageload.

In a supporting client, though, clicking a link causes the browser to
perform an ordinary request for the target page (requiring *no*
special treatment from the author), parse/treebuild the new page, and
then yank out the relevant fragments and replace bits in the current
page with them.  The url/history automatically updates properly;
bookmarking the page and visiting it later will take you to
the appropriate static page that already exists.  Script context is
maintained, listeners stay around, overall page state remains stable
across 'pageloads'.

It's a declarative, accessible, automatic, and EASY way of creating
the commonest form of single-page apps.

This brings benefits to more than just the traditional js-heavy apps.
My company's web site utilizes jQuery for a lot of small upgrades in
the page template (like a hover-expand accordion for the main nav),
and for certain things on specific pages.  I know that loading the
library, and applying the template-affecting code, slows down my page
loads, but it's not significant enough to be worth the enormous effort
to create an accessible, search-engine friendly single-page app.  This
would solve my problem trivially, though, providing a better overall
UI to my visitors (snappier page loads) without any real effort on my
part, a

Re: [whatwg] framesets

2009-10-16 Thread Rimantas Liubertas
> Eh? He didn't say that; you're quoting me.

I did, in fact, at least I meant that.

> Browsers own bookmarks, database
> users own database table rows, so it must be possible in database
> maintenance webapps to prevent bookmarking of elements which represent
> database table rows. And again, I agree that framesets do not by themselves
> block such bookmarking; they just make it easy to do so.

Framesets do not make it easy. They make it harder to bookmark such a URL,
but in no way do they make it easier for your app to block it.
You still must do all the logic on the server side.


Regards,
Rimantas
--
http://rimantas.com/


Re: [whatwg] framesets

2009-10-16 Thread Peter Brawley

Mike,

>I think the point Rimantas is making is that you aren't bookmarking
>that node.
>The fact that one node in the treeview represents one table row leaves
>out the reality that the node contains a URL and that clicking on the
>node simply submits a URL to your application and awaits an HTML
>response.

Not sure what your point is. Obviously "bookmarking a node" is shorthand 
for bookmarking a UI element which webapp logic ties to a particular 
database element. What makes you think I forgot that?


>Users of the application are bookmarking the URL that the
>application uses to retrieve that row and format the response as HTML.

That's exactly what we prevent the user from doing. She may use the UI 
representation of a node to instruct the webapp to open up its tree 
branch or fetch its detail, but may not persist that link as a bookmark.


>So, as Rimantas mentioned, since the browser owns the bookmark ...

Eh? He didn't say that; you're quoting me. Browsers own bookmarks,
database users own database table rows, so it must be possible in 
database maintenance webapps to prevent bookmarking of elements which 
represent database table rows. And again, I agree that framesets do not 
by themselves block such bookmarking; they just make it easy to do so.


PB

-

Mike Ressler wrote:

PB,

I think the point Rimantas is making is that you aren't bookmarking 
that node.  The fact that one node in the treeview represents one 
table row leaves out the reality that the node contains a URL and that 
clicking on the node simply submits a URL to your application and 
awaits an HTML response.


Users of the application are bookmarking the URL that the application 
uses to retrieve that row and format the response as HTML.  So, as 
Rimantas mentioned, since the browser owns the bookmark (and therefore 
the URL) and the application itself owns the semantic knowledge of 
what that URL means, the application is the appropriate agent to 
control what is done when that URL is submitted to it.


I thought you had agreed a while ago that there are a lot of inventive 
ways of disallowing bookmarking of the particular row in a treeview?


Mike

On Fri, Oct 16, 2009 at 12:19 PM, Peter Brawley wrote:


Rimantas,

>How on Earth can you bookmark database table rows? Your database knows
>nothing about where its rows go; the browser does not know where the HTML
>originates: it may be a DB, may be XML transformed via XSLT, may be static
>files on the server.



?! In a data-driven treeview, one node represents one table row.

PB

-

Rimantas Liubertas wrote:

OK and for clarity's sake I'll again repeat framesets don't solve the
navigation problem, they just make it easier to solve than any other
available proved solution, and this wee problem is that browsers own
bookmarks, database users own database table rows, so usually you shouldn't
bookmark database table rows, and much follows from that, therefore saying
server issues don't bear on this issue is IMO astonishingly & quite wrongly
blinkered.


How on Earth can you bookmark database table rows? Your database knows
nothing about where its rows go; the browser does not know where the HTML
originates: it may be a DB, may be XML transformed via XSLT, may be static
files on the server.

All you can bookmark is some URL. On the server there must be an
application which maps that particular URL to this particular database
row, retrieves it, transforms it into HTML and sends it to the browser.
This application then is the right place to solve that "bookmarking"
problem.
It starts to look like you are trying to solve server-side problems
(restricting access, or whatever denying bookmarking is supposed to solve)
via the client side. Not going to work.

Regards,
Rimantas
--
http://rimantas.com/









Re: [whatwg] framesets

2009-10-16 Thread Mike Ressler
PB,

I think the point Rimantas is making is that you aren't bookmarking that
node.  The fact that one node in the treeview represents one table row
leaves out the reality that the node contains a URL and that clicking on the
node simply submits a URL to your application and awaits an HTML response.

Users of the application are bookmarking the URL that the application uses
to retrieve that row and format the response as HTML.  So, as Rimantas
mentioned, since the browser owns the bookmark (and therefore the URL) and
the application itself owns the semantic knowledge of what that URL means,
the application is the appropriate agent to control what is done when that
URL is submitted to it.

I thought you had agreed a while ago that there are a lot of inventive ways
of disallowing bookmarking of the particular row in a treeview?

Mike

On Fri, Oct 16, 2009 at 12:19 PM, Peter Brawley wrote:

>  Rimantas,
>
> >How on Earth can you bookmark database table rows? Your database knows
> >nothing about where its rows go; the browser does not know where the HTML
> >originates: it may be a DB, may be XML transformed via XSLT, may be static
> >files on the server.
>
> ?! In a data-driven treeview, one node represents one table row.
>
> PB
>
> -
>
> Rimantas Liubertas wrote:
>
>  OK and for clarity's sake I'll again repeat framesets don't solve the
> navigation problem, they just make it easier to solve than any other
> available proved solution, and this wee problem is that browsers own
> bookmarks, database users own database table rows, so usually you shouldn't
> bookmark database table rows, and much follows from that, therefore saying
> server issues don't bear on this issue is IMO astonishingly & quite wrongly
> blinkered.
>
>
>  How on Earth can you bookmark database table rows? Your database knows
> nothing about where its rows go; the browser does not know where the HTML
> originates: it may be a DB, may be XML transformed via XSLT, may be static
> files on the server.
>
> All you can bookmark is some URL. On the server there must be an
> application which maps that particular URL to this particular database
> row, retrieves it, transforms it into HTML and sends it to the browser.
> This application then is the right place to solve that "bookmarking"
> problem.
> It starts to look like you are trying to solve server-side problems
> (restricting access, or whatever denying bookmarking is supposed to solve)
> via the client side. Not going to work.
>
> Regards,
> Rimantas
> --http://rimantas.com/
>


Re: [whatwg] Canvas Proposal: aliasClipping property

2009-10-16 Thread Charles Pritchard

On 10/16/09 8:01 AM, Philip Taylor wrote:

Windows, Opera 10 passes them all, Firefox 3.5 passes all except
'copy' (https://bugzilla.mozilla.org/show_bug.cgi?id=366283), Safari 4
and Chrome 3 fail them all.

I've read that this was intentional on the part of WebKit.

(Looking at the spec quickly now, I don't see anything that actually
states this explicitly - the only reference to infinite transparent
black bitmaps is when drawing shadows. But
http://www.whatwg.org/specs/web-apps/current-work/multipage/the-canvas-element.html#drawing-model

Then, should we explicitly state it, so that the next versions of Chrome
and Safari are pressured to follow?

I agree that the spec has an infinite bitmap for filters: shadows are a
unique step in the rendering pipeline.

...

In regard to this: 'There is currently no definition of what the 
"extent" of a shape is'


While I want a common standard, and I think we are in agreement here
that we'll be defining Image A as an infinite bitmap, I believe that
this statement should be addressed.


The extent of the shape is geometric, it's a rectangle, and it's not 
related to the fill [re: transparent pixels].
It can be calculated for an ellipse and for an arbitrary path and 
extended to include a shadow, should one exist.

With multiple sub-paths, the extent encompasses all of the subpaths.

The only difficulty in implementation that I see is with text:
TextMetrics does not currently supply a height value, for reasons 
unknown to me.
It's quite possible to calculate the extent of a text box, and is 
present in many APIs.
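
A rough sketch of such a calculation, using measureText for the width
and the font size as a stand-in for the missing height (the 1.2 line
factor is an assumption, not anything the spec defines):

  // ctx.font must already be set to the font being measured.
  function textExtent(ctx, text, x, y, fontSizePx) {
    var width = ctx.measureText(text).width;
    return { left: x, top: y - fontSizePx,
             width: width, height: fontSizePx * 1.2 };
  }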


Extents are usually calculated within the rendering engine, and so it's
likely that optimizations can be made there for the compositing step,
so that it's unnecessary to compare pixels outside of the shape extent
when compositing, regardless of the spec. But I am certain that the
WebKit devs decided it would be more efficient, just as they made a
similar decision in their aliasing method on clipped paths.


If my statements are factually inaccurate, I'm sure someone on this list 
will take notice, and correct me.



-Charles


Re: [whatwg] framesets

2009-10-16 Thread Peter Brawley

Rimantas,


How on Earth can you bookmark database table rows? Your database knows
nothing about where its rows go; the browser does not know where the HTML
originates: it may be a DB, may be XML transformed via XSLT, may be static
files on the server.


?! In a data-driven treeview, one node represents one table row.

PB

-

Rimantas Liubertas wrote:

OK and for clarity's sake I'll again repeat framesets don't solve the
navigation problem, they just make it easier to solve than any other
available proved solution, and this wee problem is that browsers own
bookmarks, database users own database table rows, so usually you shouldn't
bookmark database table rows, and much follows from that, therefore saying
server issues don't bear on this issue is IMO astonishingly & quite wrongly
blinkered.



How on Earth can you bookmark database table rows? Your database knows
nothing about where its rows go; the browser does not know where the HTML
originates: it may be a DB, may be XML transformed via XSLT, may be static
files on the server.

All you can bookmark is some URL. On the server there must be an
application which maps that particular URL to this particular database
row, retrieves it, transforms it into HTML and sends it to the browser.
This application then is the right place to solve that "bookmarking"
problem.
It starts to look like you are trying to solve server-side problems
(restricting access, or whatever denying bookmarking is supposed to solve)
via the client side. Not going to work.

Regards,
Rimantas
--
http://rimantas.com/





Re: [whatwg] No interface flicker across page loads, without JavaScript (was: framesets)

2009-10-16 Thread Aryeh Gregor
On Fri, Oct 16, 2009 at 10:16 AM, Tab Atkins Jr. wrote:
> Indeed, script changes should persist.  The problem he was
> highlighting, though, was the fact that a 'site bug' like that would
> be very easy to have happen accidentally.  It could even go unnoticed
> by the site developers, if they always come in through the front page
> and the content is correct there - only users following search engine
> links or bookmarks deep into the site would see the obsolete content,
> and it would *never go away* during that browsing session.
>
> This error seems like it would be very easy to make.

Hmm.  Maybe.

> As well, this still doesn't answer the question of what to do with
> script links between the static content and the original page, like
> event listeners placed on content within the <static>.  Do they get
> preserved?  How would that work?  If they don't, then some of the
> benefit of 'static' content is lost, since it will be inoperable for a
> moment after each pageload while the JS reinitializes.

Script links should be preserved somehow, ideally.  I would like to
see this be along the lines of "AJAX reload of some page content,
without JavaScript and with automatically working URLs".

> I would hope that authors never did that!  That means that if a user
> deeplinks straight into the site, they'll get the empty element.  The
> hash won't help them, since it's their first pageview.  *Hopefully*
> they'll swing by a page that has the actual contents and the hashfail
> would trigger an update, but that's not a guarantee, and in the
> meantime they have an empty element there.

I meant in conjunction with an HTTP header the browser would send,
like "Static-Hashes", that contains the hashes of all known 
elements.  This is like the Static-IDs that I described in my first
post.  The idea would be that a script could chop out the unneeded
parts on a per-request basis.  However, I think SDCH is a better
solution here.

> I think being updated is more important than persisting changes to
> (now out-of-date) content.

It depends on how important the changes are.  If for some reason you
have a <textarea> in a <static>, and the user has entered tons of text,
saving it is fairly important.  Although you should be able to hit
"back" to retrieve it, actually, so maybe not *that* important.

> One of the big reasons Gmail is so AJAXy is because of the heavy
> script lifting it has to do on each page load.  AJAX lets them persist
> the script while updating the content.  <static> wouldn't help with
> that.

That's why script needs to persist.  My initial proposal doesn't
handle that well at all.

> Only for the first pageload.

The first page load is by far the most important.

> And separate pages for each interface widget isn't bad.  Heck, it's
> easier to maintain with everything self-contained.

Handling everything in one request is *much* simpler from the POV of
server-side scripting.  If it's separate requests, you can typically
only communicate between them if you use a database of some kind.  That's
a real pain.  You're running several instances of the script which all
need to produce consistent output, and that's a lot harder than if
it's just one instance.  What if different cookies end up being sent
to different frames, for instance?  That's very possible if the user
gets logged out at some point, say.  The new page load needs to be
able to invalidate the other parts of the page somehow.

> True.  Minting a new element might be a better deal here, but having
> it inherit much of the semantics of <iframe>.  Then you can
> have it contain fallback content for browsers that don't implement
> <static>, and use @src for browsers that do.  That would also allow us
> to bypass any of the <iframe> complications that might unnecessarily
> complicate use or implementation.

I still don't like the requirement for multiple pages.  It might not
be a big deal if you're dealing mainly with static content, but for
complex server-side scripts I think it would be a real pain.

So, here's a preliminary description of a use-case.  I'm not sure it's sane yet.

Use Case: A page should be able to instruct that when a user follows a
link, only part of the page is reloaded, while the rest stays fixed.

Requirements:
1) Little to no JavaScript should be required.  Large JavaScript
frameworks should not be necessary to get basic persistence of
interface state.

2) Static parts of the page should not have their state discarded,
either script-related state (e.g., registered event handlers) or other
state (e.g., user-entered text).

3) It should be possible for user agents to implement the feature so
that the static parts of the page don't flicker or jump around unless
they've actually changed.  (This might or might not be an actual
conformance requirement, but it should be possible for them to do it
if they want.)

4) It should be possible to easily attach this to an existing set of
static pages, or JavaScript-light pages produced by a web application.
 Ideally, it should be possible to do by addi

Re: [whatwg] Canvas Proposal: aliasClipping property

2009-10-16 Thread Philip Taylor
On Fri, Oct 16, 2009 at 2:25 PM, Robert O'Callahan wrote:
> On Sat, Oct 17, 2009 at 1:06 AM, Philip Taylor wrote:
>>
>> I think the spec is clear on this (at least when I last looked; not
>> sure if it's changed since then). Image A is infinite and filled with
>> transparent black, then you draw the shape onto it (with no
>> compositing yet), and then you composite the whole of image A (using
>> globalCompositeOperation) on top of the current canvas bitmap. With
>> some composite operations that's a different result than if you only
>> composited pixels within the extent of the shapes you drew onto image
>> A.
>
>
> Ah, so you mean Firefox is right in this case?

Yes, mostly. 
http://philip.html5.org/tests/canvas/suite/tests/index.2d.composite.uncovered.html
has relevant tests, matching what I believed the spec said - on
Windows, Opera 10 passes them all, Firefox 3.5 passes all except
'copy' (https://bugzilla.mozilla.org/show_bug.cgi?id=366283), Safari 4
and Chrome 3 fail them all.

(Looking at the spec quickly now, I don't see anything that actually
states this explicitly - the only reference to infinite transparent
black bitmaps is when drawing shadows. But
http://www.whatwg.org/specs/web-apps/current-work/multipage/the-canvas-element.html#drawing-model
is phrased in terms of rendering shapes onto an image, then
compositing the image within the clipping region, so I believe it is
meant to work as I said (and definitely not by compositing only within
the extent of the shape drawn onto the image).)
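
A small example where the uncovered region makes the difference
observable (per the drawing model as read above):

  var ctx = document.createElement('canvas').getContext('2d');
  ctx.fillStyle = 'blue';
  ctx.fillRect(0, 0, 100, 100);
  ctx.globalCompositeOperation = 'copy';
  ctx.fillStyle = 'red';
  ctx.fillRect(0, 0, 50, 50);
  // Spec model: image A (a red square on infinite transparent black)
  // replaces the whole canvas within the clip, so the blue outside the
  // square is cleared.  A source-bounded 'copy' would leave it blue.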

-- 
Philip Taylor
exc...@gmail.com


Re: [whatwg] No interface flicker across page loads, without JavaScript (was: framesets)

2009-10-16 Thread Tab Atkins Jr.
On Fri, Oct 16, 2009 at 8:50 AM, Aryeh Gregor wrote:
> On Fri, Oct 16, 2009 at 7:16 AM, Markus Ernst wrote:
>> Interesting idea! Anyway it introduces some consistency problems to solve,
>> e.g.:
>>
>> Page1.html contains:
>>
>> <static id="foo">I eat meat</static>
>>
>> and links to page2.html, which contains:
>>
>> <static id="foo">I am a vegetarian</static>
>>
>> So page2.html looks different whether it is called from the link in
>> page1.html, or directly via a bookmark, external link, or manual URI input.
>
> Well, certainly impose a same-origin restriction on preservation of
> .  Then it would just be a problem of one site being
> inconsistent with itself.  But I don't think this is a bug, it's a
> feature.  One of the major advantages of frames is you can manipulate
> each piece independently, and not have your changes lost on
> navigation.  If a script changes the contents of the  after it
> was created, those changes *should* be required to persist on page
> load.

Indeed, script changes should persist.  The problem he was
highlighting, though, was the fact that a 'site bug' like that would
be very easy to have happen accidentally.  It could even go unnoticed
by the site developers, if they always come in through the front page
and the content is correct there - only users following search engine
links or bookmarks deep into the site would see the obsolete content,
and it would *never go away* during that browsing session.

This error seems like it would be very easy to make.

As well, this still doesn't answer the question of what to do with
script links between the static content and the original page, like
event listeners placed on content within the <static>.  Do they get
preserved?  How would that work?  If they don't, then some of the
benefit of 'static' content is lost, since it will be inoperable for a
moment after each pageload while the JS reinitializes.

> An alternative idea would be to dispense with id's, and key off a hash
> of the literal string contents of the <static> instead, in the
> serialized document passed over the wire.  Bandwidth savings could
> then be obtained using <static hash=...> or some similar
> syntax, with the UA passing the hashes instead of id's in a header.
> This way, the element would auto-update if the contents changed on the
> server side, but not on the client side.

I would hope that authors never did that!  That means that if a user
deeplinks straight into the site, they'll get the empty element.  The
hash won't help them, since it's their first pageview.  *Hopefully*
they'll swing by a page that has the actual contents and the hashfail
would trigger an update, but that's not a guarantee, and in the
meantime they have an empty element there.

> On the other hand, if they did change it would lose all the user's
> changes, if any.  But you can't rely on the changes being present
> after page reload anyway, if the element has been changed, so maybe
> this is noncritical.  It depends what exactly this would be used for.

I think being updated is more important than persisting changes to
(now out-of-date) content.

> A slightly different use-case would be a dynamic application like
> Gmail, rewritten without AJAX.  The bar on the left contains things
> like "Inbox (2)", which are updated by script.  In this case, if new
> contents were loaded from the server, the server or script would
> promptly fill in the appropriate numbers and so on.  So again, this
> use-case doesn't seem to care much if changes are thrown out.

One of the big reasons Gmail is so AJAXy is because of the heavy
script lifting it has to do on each page load.  AJAX lets them persist
the script while updating the content.  <static> wouldn't help with
that.

> Another case to consider is where you have a tree or something that
> gets uncollapsed depending on what page you're on.  This seems like a
> case where you'd actually want something slightly different: the new
> version should load, just without flickering.  Perhaps a cruder
> solution would be useful, which doesn't affect display of the new page
> but only how new elements get loaded -- specifically, allowing a mix
> of content from the old and new page to exist until the new page is
> fully painted.  I'm not sure how that would work.  The sort of
compression I suggested in <static hash=...> could probably be better handled
> by SDCH or something.

The new page can just js-manipulate the static element.  If you're not
happy with that, then you really *do* need the bits to reload with the
page, and shouldn't be using <static>.

>> This could be solved if "static" elements have no content on their own, but
>> retrieve it from an external source. The identifier is then not the id
>> attribute, but the source. This could be done with a src attribute on the
>> <static> element. But I assume an easier implementation would be adding a
>> "static" attribute for the <iframe> element, indicating that the iframe
>> contents should not be reloaded.
>
> I don't like this solution, because it complicates things for authors.
>  You have to make separate pages for each interface widget, and it
> entails more HTTP requests.

Re: [whatwg] No interface flicker across page loads, without JavaScript (was: framesets)

2009-10-16 Thread Aryeh Gregor
On Fri, Oct 16, 2009 at 7:16 AM, Markus Ernst  wrote:
> Interesting idea! Anyway, it introduces some consistency problems to solve,
> e.g.:
>
> Page1.html contains:
>
> <static id="foo">I eat meat</static>
>
> and links to page2.html, which contains:
>
> <static id="foo">I am a vegetarian</static>
>
> So page2.html looks different whether it is called from the link in
> page1.html, or directly via a bookmark, external link, or manual URI input.

Well, certainly impose a same-origin restriction on preservation of
<static>.  Then it would just be a problem of one site being
inconsistent with itself.  But I don't think this is a bug, it's a
feature.  One of the major advantages of frames is you can manipulate
each piece independently, and not have your changes lost on
navigation.  If a script changes the contents of the <static> after it
was created, those changes *should* be required to persist on page
load.

An alternative idea would be to dispense with id's, and key off a hash
of the literal string contents of the <static> instead, in the
serialized document passed over the wire.  Bandwidth savings could
then be obtained using <static hash=...> or some similar
syntax, with the UA passing the hashes instead of id's in a header.
This way, the element would auto-update if the contents changed on the
server side, but not on the client side.
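
Concretely, the exchange might look something like this (hash value,
attribute name, and header name all made up for illustration):

  First response:
    <static hash="a9f3c2b1">...sidebar markup...</static>

  Later request from the UA:
    GET /page2.html HTTP/1.1
    Static-Hashes: a9f3c2b1

  Later response, body omitted because the UA already has it:
    <static hash="a9f3c2b1"></static>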

On the other hand, if they did change it would lose all the user's
changes, if any.  But you can't rely on the changes being present
after page reload anyway, if the element has been changed, so maybe
this is noncritical.  It depends what exactly this would be used for.

The obvious use case here would just be to keep navigation elements
fixed.  For instance, on http://en.wikipedia.org/wiki/, most of the
page outside the article content could be <static>.  (With a few
exceptions.)  Navigation tends not to be very interact-able, so
reloading it and throwing out client-side changes would be fine if it
changes on the server side.

A slightly different use-case would be a dynamic application like
Gmail, rewritten without AJAX.  The bar on the left contains things
like "Inbox (2)", which are updated by script.  In this case, if new
contents were loaded from the server, the server or script would
promptly fill in the appropriate numbers and so on.  So again, this
use-case doesn't seem to care much if changes are thrown out.

Another case to consider is where you have a tree or something that
gets uncollapsed depending on what page you're on.  This seems like a
case where you'd actually want something slightly different: the new
version should load, just without flickering.  Perhaps a cruder
solution would be useful, which doesn't affect display of the new page
but only how new elements get loaded -- specifically, allowing a mix
of content from the old and new page to exist until the new page is
fully painted.  I'm not sure how that would work.  The sort of
compression I suggested in <static hash=...> could probably be better handled
by SDCH or something.

> This could be solved if "static" elements have no content on their own, but
> retrieve it from an external source. The identifier is then not the id
> attribute, but the source. This could be done with a src attribute on the
> <static> element. But I assume an easier implementation would be adding a
> "static" attribute for the <iframe> element, indicating that the iframe
> contents should not be reloaded.

I don't like this solution, because it complicates things for authors.
 You have to make separate pages for each interface widget, and it
entails more HTTP requests.  It's also not backwards-compatible --
you'll often get a big degradation in behavior if you use this in a
browser that doesn't support it.  <static> as I
envisioned it can be dropped into existing pages without requiring
them to be broken into separate files, or risking compatibility
problems.


Re: [whatwg] Canvas Proposal: aliasClipping property

2009-10-16 Thread Robert O'Callahan
On Sat, Oct 17, 2009 at 1:06 AM, Philip Taylor wrote:

> I think the spec is clear on this (at least when I last looked; not
> sure if it's changed since then). Image A is infinite and filled with
> transparent black, then you draw the shape onto it (with no
> compositing yet), and then you composite the whole of image A (using
> globalCompositeOperation) on top of the current canvas bitmap. With
> some composite operations that's a different result than if you only
> composited pixels within the extent of the shapes you drew onto image
> A.
>

Ah, so you mean Firefox is right in this case?

Rob
-- 
"He was pierced for our transgressions, he was crushed for our iniquities;
the punishment that brought us peace was upon him, and by his wounds we are
healed. We all, like sheep, have gone astray, each of us has turned to his
own way; and the LORD has laid on him the iniquity of us all." [Isaiah
53:5-6]


Re: [whatwg] No interface flicker across page loads, without JavaScript (was: framesets)

2009-10-16 Thread Tab Atkins Jr.
On Fri, Oct 16, 2009 at 6:16 AM, Markus Ernst  wrote:
> Aryeh Gregor schrieb:
>>
>> On Thu, Oct 15, 2009 at 3:49 AM, Nelson Menezes
>>  wrote:
>>>
>>> As an aside, there is a reason why AJAX has become so popular over the
>>> past few years: it solves the specific UI-reset issue that is inherent
>>> in full-page refreshes.
>>
>> I'm trying to think what a solution to this would look like.  Maybe
>> something like:
>>
>> <static id="foo">Some stuff that doesn't change on page load...</static>
>> Changeable page content
>> <static id="bar">Some more stuff that doesn't change...</static>
>
> Interesting idea! Anyway, it introduces some consistency problems to solve,
> e.g.:
>
> Page1.html contains:
>
> <static id="foo">I eat meat</static>
>
> and links to page2.html, which contains:
>
> <static id="foo">I am a vegetarian</static>
>
> So page2.html looks different whether it is called from the link in
> page1.html, or directly via a bookmark, external link, or manual URI input.

Nod.  This seems like a big problem.

> This could be solved if "static" elements have no content on their own, but
> retrieve it from an external source. The identifier is then not the id
> attribute, but the source. This could be done with a src attribute on the
> <static> element. But I assume an easier implementation would be adding a
> "static" attribute for the <iframe> element, indicating that the iframe
> contents should not be reloaded.

As well, if <iframe static> is reused, it should probably automatically be
<iframe seamless> so that all navigation applies to the upper page, it
grabs styles from the upper page, etc.  (Or perhaps it should just be
recommended that <iframe static seamless> be used in most
circumstances.)
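
So a persistent nav bar might be written something like this ("static"
being the hypothetical attribute; "seamless" is the existing HTML5 one):

  <iframe static seamless src="nav.html"></iframe>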

The <iframe> solution is also somewhat better wrt scripting the
content inside.  If you're trying not to redraw anything in <static>,
what happens to scripts that have added listeners and such to the
content?  (Frex, to implement an accordion or treeview.)  The original
page is going away, and you don't want to accidentally apply the
listeners multiple times.  Using <iframe>s, you can do the scripting
in the framed page, so nothing goes away between pageloads or tries to
apply itself multiple times.

~TJ


Re: [whatwg] Canvas Proposal: aliasClipping property

2009-10-16 Thread Philip Taylor
On Fri, Oct 16, 2009 at 2:41 AM, Charles Pritchard  wrote:
> Having gone back and forth with Robert a bit: I was able to recall the whys
> of a particular issue
> that could be handled in this version of the spec, regarding compositing.
>
> As far as I can tell; the area (width and height, extent) of source image A
> [4.8.11.13 Compositing]
> when source image A is a shape, is not defined by the spec.
>
> And so in Chrome, when composting with a shape, the extent of image A is
> only that width
> and height the shape covers, whereas in Firefox, the extent of image A is
> equivalent to the
> extent of image B (the current bitmap). This led to an incompatibility
> between the two browsers.

I think the spec is clear on this (at least when I last looked; not
sure if it's changed since then). Image A is infinite and filled with
transparent black, then you draw the shape onto it (with no
compositing yet), and then you composite the whole of image A (using
globalCompositeOperation) on top of the current canvas bitmap. With
some composite operations that's a different result than if you only
composited pixels within the extent of the shapes you drew onto image
A.

(With most composite operations it makes no visible difference,
because compositing transparent black onto a bitmap has no effect, so
this only affects a few unusual modes.)
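
(For instance, something like this -- untested, assuming a 100x100
canvas on the page -- should show the difference, since 'source-in' is
one of those unusual modes:

  var ctx = document.getElementsByTagName('canvas')[0].getContext('2d');
  ctx.fillStyle = 'red';
  ctx.fillRect(0, 0, 100, 100);
  ctx.globalCompositeOperation = 'source-in';
  ctx.fillStyle = 'blue';
  ctx.fillRect(25, 25, 50, 50);

Per the spec, only a blue 50x50 square remains -- the red outside it is
cleared, because the transparent black of image A was composited over
it. Under extent-only compositing, that red would survive.)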

There is currently no definition of what the "extent" of a shape is
(does it include transparent pixels? shadows? what about text with a
bitmap font? etc), and it sounds like a complicated thing to define
and to implement interoperably, and I don't see obvious benefits to
users, so the current specced behaviour (using infinite bitmaps, not
extents) seems to me like the best approach (and we just need everyone
to implement it).

-- 
Philip Taylor
exc...@gmail.com


Re: [whatwg] No interface flicker across page loads, without JavaScript (was: framesets)

2009-10-16 Thread Markus Ernst

Aryeh Gregor wrote:

On Thu, Oct 15, 2009 at 3:49 AM, Nelson Menezes
 wrote:

As an aside, there is a reason why AJAX has become so popular over the
past few years: it solves the specific UI-reset issue that is inherent
in full-page refreshes.


I'm trying to think what a solution to this would look like.  Maybe
something like:

<static id="foo">Some stuff that doesn't change on page load...</static>
Changeable page content
<static id="bar">Some more stuff that doesn't change...</static>


Interesting idea! Anyway, it introduces some consistency problems to 
solve, e.g.:


Page1.html contains:

<static id="foo">I eat meat</static>

and links to page2.html, which contains:

<static id="foo">I am a vegetarian</static>

So page2.html looks different whether it is called from the link in 
page1.html, or directly via a bookmark, external link, or manual URI input.


This could be solved if "static" elements have no content on their own, 
but retrieve it from an external source. The identifier is then not the 
id attribute, but the source. This could be done with a src attribute on 
the <static> element. But I assume an easier implementation would be 
adding a "static" attribute for the <iframe> element, indicating that 
the iframe contents should not be reloaded.
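
In markup the two variants would look something like this (both of them
invented syntax, of course):

  <static id="nav" src="nav.html"></static>

or:

  <iframe static src="nav.html"></iframe>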


--
Markus


Re: [whatwg] <object> behavior

2009-10-16 Thread Michael A. Puls II

On Fri, 16 Oct 2009 06:19:04 -0400, Simon Pieters  wrote:

On Fri, 16 Oct 2009 12:10:35 +0200, Michael A. Puls II  
 wrote:



On Fri, 16 Oct 2009 05:28:46 -0400, Ian Hickson  wrote:


There was also some discussion of what to do about preventing a plugin
instantiating. It seems to me that authors can do that by not creating
the <object> element ahead of time.


And, if it's desired to specify the <object> via parsed markup (as  
opposed to doing it all with JS), one can omit @type and @data so  
things don't load and add them later like so:

<object style="display: none"
data-load-on-demand-type="application/x-java-applet" id="test">
 fallback
</object>

<script>
 window.onload = function() {
 var obj = document.getElementById("test");
 obj.style.display = "inline-block";
 obj.type = obj.dataset["loadOnDemandType"];
 alert("Come alive! Hide your fallback! I command you!");
 };
</script>



"One or both of the data and type attributes must be present." says the  
spec.


Oops. Right. Forgot about that.

<embed> doesn't seem to have the same requirement for src and type.  
(Also compare with img, iframe, video...)
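
(So presumably a markup-only variant of the same trick would be
conforming with <embed>, e.g.:

  <embed id="test" style="display: none">

...with the src added from script later.)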





--
Michael


[whatwg] No interface flicker across page loads, without JavaScript (was: framesets)

2009-10-16 Thread Aryeh Gregor
On Thu, Oct 15, 2009 at 3:49 AM, Nelson Menezes
 wrote:
> As an aside, there is a reason why AJAX has become so popular over the
> past few years: it solves the specific UI-reset issue that is inherent
> in full-page refreshes.

I'm trying to think what a solution to this would look like.  Maybe
something like:

<static id="foo">Some stuff that doesn't change on page load...</static>
Changeable page content
<static id="bar">Some more stuff that doesn't change...</static>

The semantics would be that when the browser loaded the new page, it
would do something like

1) Retrieve the URL.
2) Start parsing the new page.  When the time comes to clear the
screen so it can be redrawn for the new page, leave any <static>
elements untouched, so they don't flicker or vanish.
3) When parsing the page, if a <static> element is reached that has
the same id as a <static> element that was on the old page, ignore the
contents of the new one.  Instead, move the old <static> element to
the position of the new one, copying its DOM.  If possible, this
shouldn't cause the visible <static> element to flicker or be redrawn,
if it's visible.  There should be some reasonable de facto or de jure
conditions where no-flicker is guaranteed, e.g., all applicable styles
are the same and the element is absolutely positioned relative to the
body.
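
Roughly, in script terms -- a crude, untested approximation of the
above, assuming the hypothetical <static> element plus pushState and
an HTML string parser:

  function navigate(url) {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', url);
    xhr.onload = function() {
      var doc = new DOMParser().parseFromString(xhr.responseText, 'text/html');
      // Step 3: keep the old copy of any <static> whose id already exists.
      var statics = doc.querySelectorAll('static[id]');
      for (var i = 0; i < statics.length; i++) {
        var old = document.getElementById(statics[i].id);
        if (old) statics[i].parentNode.replaceChild(old, statics[i]);
      }
      // Step 2: swap in the rest of the new page in one go.
      document.replaceChild(document.adoptNode(doc.documentElement),
                            document.documentElement);
      history.pushState(null, '', url);
    };
    xhr.send();
  }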

As an added optimization, the browser could send an HTTP request
header like "Static-IDs" containing a list of the IDs of all 
elements currently on the page, so that the server can just leave
those empty.  A  tag might be useful too, to indicate
that specific parts of a  element might indeed change -- in
this case the  element might have to be redrawn, but only once
the new  element was fully parsed, not before.
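
A made-up example of that exchange:

  GET /page2.html HTTP/1.1
  Static-IDs: nav, footer

...and in the response the server just leaves those elements empty:

  <static id="nav"></static>
  ...the page content that actually changed...
  <static id="footer"></static>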

I doubt this is suitable for HTML5, given how far along that is, but
it might be interesting to consider anyway.  Does the idea sound
interesting to anyone else?


Re: [whatwg] <object> behavior

2009-10-16 Thread Simon Pieters
On Fri, 16 Oct 2009 12:10:35 +0200, Michael A. Puls II  
 wrote:



On Fri, 16 Oct 2009 05:28:46 -0400, Ian Hickson  wrote:


There was also some discussion of what to do about preventing a plugin
instantiating. It seems to me that authors can do that by not creating
the <object> element ahead of time.


And, if it's desired to specify the <object> via parsed markup (as  
opposed to doing it all with JS), one can omit @type and @data so things  
don't load and add them later like so:

<object style="display: none"
data-load-on-demand-type="application/x-java-applet" id="test">
 fallback
</object>

<script>
 window.onload = function() {
 var obj = document.getElementById("test");
 obj.style.display = "inline-block";
 obj.type = obj.dataset["loadOnDemandType"];
 alert("Come alive! Hide your fallback! I command you!");
 };
</script>



"One or both of the data and type attributes must be present." says the  
spec.


<embed> doesn't seem to have the same requirement for src and type. (Also  
compare with img, iframe, video...)


--
Simon Pieters
Opera Software


Re: [whatwg] <object> behavior

2009-10-16 Thread Michael A. Puls II

On Fri, 16 Oct 2009 05:28:46 -0400, Ian Hickson  wrote:


There was also some discussion of what to do about preventing a plugin
instantiating. It seems to me that authors can do that by not creating
the <object> element ahead of time.


And, if it's desired to specify the <object> via parsed markup (as opposed  
to doing it all with JS), one can omit @type and @data so things don't  
load and add them later like so:

<object style="display: none"
data-load-on-demand-type="application/x-java-applet" id="test">
fallback
</object>

<script>
window.onload = function() {
var obj = document.getElementById("test");
obj.style.display = "inline-block";
obj.type = obj.dataset["loadOnDemandType"];
alert("Come alive! Hide your fallback! I command you!");
};
</script>


--
Michael


Re: [whatwg] Workers and addEventListener

2009-10-16 Thread Zoltan Herczeg
>> I would not be opposed to changing the spec to include enabling a port's
>> message queue when addEventListener("message") is invoked.
>
> I'm reluctant to make addEventListener() do magic.

we have two choices:
  - extend addEventListener
  - fix the Shared Worker example on the whatwg site to call start()

It seems the latter was preferred by the majority of people. Ian,
could you make this fix?
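
That is, the example would gain one line, something like:

  var worker = new SharedWorker('test.js');
  worker.port.addEventListener('message', function(e) {
    // handle e.data here
  }, false);
  worker.port.start(); // needed with addEventListener; the onmessage
                       // setter enables the message queue implicitly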

Thanks,
Zoltan




Re: [whatwg] <hgroup> functionality absorbed into <header>?

2009-10-16 Thread Kornel Lesiński

On 9 Oct 2009, at 09:18, Ian Hickson wrote:


For example, the W3C copy of HTML5 says:

  HTML5
  A vocabulary and associated APIs for HTML and XHTML
  Editor's Draft 9 October 2009
  ...
  Abstract



...which is what it would be interpreted as. This is what is meant:

  
   HTML5;
   A vocabulary and associated APIs for HTML and XHTML;
   Editor's Draft 9 October 2009


If that's what you mean, why not write it this way?

   <h1>HTML5</h1>
   <p>
    A vocabulary and associated APIs for HTML and XHTML<br>
    Editor's Draft 9 October 2009
   </p>

Your version with a split <h1> seems to use it only for visual effect.


I still think that a dedicated subtitle element (subheader, tagline) would be just as  
effective, less confusing and less likely to break the outline when used  
improperly...


--
regards, Kornel Lesiński



Re: [whatwg] <object> behavior

2009-10-16 Thread Ian Hickson
On Thu, 3 Sep 2009, Henri Sivonen wrote:
> On Sep 3, 2009, at 00:39, Ian Hickson wrote:
> > > 
> > > 2. Its element must not be set to display of 'none' (and therefore 
> > > must not be part of fallback content that's not triggered yet).
> > 
> > This is definitely a bug; the fallback handling is done in a different 
> > way in HTML5, anyway.
> 
> Why is this a bug in browser behavior as opposed a bug in the spec?

Because as far as I can tell there's no reason a plugin's behaviour should 
depend on whether it is visible or not. And because Boris said so. :-)


On Tue, 15 Sep 2009, Boris Zbarsky wrote:
> Ian Hickson wrote:
> > > Since the whole point of text/plain sniffing is a workaround around 
> > > a known issue where content is reliably mis-marked as text/plain, 
> > > and since in this case there is a source of MIME information that's 
> > > more reliable than that, it's not clear to me why we want to 
> > > continue sniffing.
> > > 
> > > Of course if there is no @type there is no problem; I'm specifically 
> > > concerned about the @type="text/plain" case here.
> > 
> > What exactly are you proposing here?
> > 
> >  - Always honour type="" if it's a UA-supported type, ignoring server-
> > provided content-type?
> >  - Always honour type="" without sniffing if it matches the server-
> > provided content-type, even if normally that type would be sniffed?
> >  - Just honour type="text/plain" regardless of the server type, but for
> >other UA-supported type=""s, use the server type?
> 
> My suggestion is to only perform text/plain "is this text or binary" 
> sniffing where it belongs: on the HTTP level; since it's a workaround 
> for a particular HTTP server bug.  It shouldn't affect other type 
> metadata.

Ah, I see. So your concern is with the case where <object data="x" type="text/plain"> is specified, if the user has a plugin that 
supports text/plain, and "x" contains data that is invalid in text/plain 
content. In this case, you would like the plugin to be invoked, even if 
sniffing the content would find that it was in fact some other format 
(e.g. flash).

That seems reasonable. I've changed the spec to prevent sniffing in this 
case.


> Perform the sniffing such that it detects as either text/plain or 
> application/octet-stream.
> 
> Then if it's application/octet-stream we'll end up using the @type. 
> Though see below on other sniffing issues.
> 
> This does fail to sniff text/plain as the various "non-scriptable" 
> types, but I question how desirable that is anyway, honestly.  If we 
> want to preserve this property without clobbering @type="text/plain" 
> then I need to think a bit more about how to specify the behavior here.

I'm a bit concerned about introducing even more sniffing algorithms (which 
this effectively is) rather than just reusing the existing ones. Why would 
we not want text/plain to be treated more or less the same here as in an 
<iframe>?


> > > My concern about text/plain data being sniffed as text/html by your 
> > > current algorithm (even with the changes you've made) seems to 
> > > remain unaddressed.
> > 
> > I thought I had. Can you walk me through how anything labeled 
> > text/plain could get sniffed as text/html with the new text?
> 
> Hmm.  Assume the type attribute is not set and HTML data is sent as 
> text/plain and contains a "binary byte" in the first 512 bytes (can just 
> stick it in the <title> or something).  Also assume no plug-in claims to 
> support the URI's file extension.
> 
> At step 3, the resource type is set to text/plain.
> 
> At step 4, the resource type is sniffed as application/octet-stream, since
> text/html is marked as scriptable in [MIMESNIFF].
> 
> At step 5, there is no @type, and the resource type is
> application/octet-stream, so the resource type is changed to unknown.
> 
> At step 6, nothing changes since there is no plug-in supporting the URI's file
> extension.
> 
> At step 7, the resource type is "unknown", so it is changed to the "sniffed
> type of the resource".

Ooh, yes. good catch. Hm.

I've forced it to text/plain in this case. (To be precise, I've changed 
the algorithm slightly so that you only do the sniffing once -- either the 
text-v-binary sniff, or the full sniff, and if the text-v-binary sniff 
just says application/octet-stream but the extension doesn't help, then I 
convert it back to text/plain.)
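
To illustrate informally (made-up helper names, not spec text):

  if (serverType == 'text/plain') {
    type = sniffTextOrBinary(bytes);  // yields text/plain or octet-stream
    if (type == 'application/octet-stream' && !pluginForExtension(url))
      type = 'text/plain';            // converted back, as described above
  } else {
    type = fullSniff(bytes);          // the normal sniffing algorithm
  }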

Note that in practice this makes no difference, unless there is a plugin 
that supports text/html or text/plain. If there is no such plugin, then 
the end result is that a browsing context is created, and the resource is 
treated as normal (including sniffing it again properly).


On Sun, 20 Sep 2009, Michael A. Puls II wrote:
> 
> O.K., so put simply, HTML5 should explicitly mention that the css 
> display property for <object>, <embed> (and fallback content in the handling 
> section) has absolutely no effect on plug-in instantiation and 
> destroying and has absolutely no effect on @src and @data resource 
> fetching.
> 
> HTML5 could also be extra clear by example that display: