Re: Call for Editor: URL spec

2012-11-06 Thread Ms2ger

On 11/06/2012 08:02 AM, Adam Barth wrote:

Does the WebApps Working Group plan to do either of these things?

A) Put in technical effort to improve the specification


Unlikely.


B) License the fork in such a way as to let me merge improvements into my copy


Definitely not.

HTH
Ms2ger



Re: Pre-fetch rough draft

2012-11-06 Thread Sergey Nikitin

On 05.11.2012, at 16:28, Julian Reschke wrote:

 
 Yes. Exactly.
 It's not about offline apps, it's about reducing loading time.
 
 There's already the prefetch link relation that you could use.
 

You need at least two pages to start prefetching.
And you can't prefetch anything for the first page.
If you have a single-page application, you can't prefetch at all.

And it's not always possible for a browser to visit a page in advance (cookie/password 
protected).
 

 A prefetch manifest is a way to tell the browser what should be downloaded in 
 advance.
 So when the user opens the site (for the first time), all resources 
 (css/js/images/...) are already cached.
 
 And if the site's resources are later updated, the browser could check the 
 prefetch manifest and re-download all new resources in the background, before 
 the user visits the site again.
 
 But then you don't need a manifest for that (see above).
 
 Best regards, Julian
 
 




Re: Call for Editor: URL spec

2012-11-06 Thread Paul Libbrecht
Ian,

Could you be slightly more formal?
You are speaking of hypocrisy, but this seems like a matter of politeness, 
right?
Or are you actually claiming that there's a license breach?

That there are different mechanisms at WHATWG and W3C is not really new.

Paul

Le 6 nov. 2012 à 02:42, Ian Hickson a écrit :

 In the meantime, W3C is copying Anne's work in several specs, to
 
 It seems like W3C groups copying WHATWG's work has been ongoing for 
 several years (so I think this is old news, especially since, AFAIU, 
 it is permissible, perhaps even encouraged? via the WHATWG copyright) 
 ;-).
 
 In the past (and for some specs still today), it was done by the editor 
 (e.g. me), as dual-publication.
 
 What's new news now is that the W3C does this without the editor's 
 participation, and more importantly, while simultaneously decrying the 
 evils of forking specifications, and with virtually no credit to the 
 person doing the actual work.
 
 It's this hypocrisy that is new and notable.



Re: Call for Editor: URL spec

2012-11-06 Thread Ian Hickson
On Tue, 6 Nov 2012, Paul Libbrecht wrote:
 
 Could you be slightly more formal?
 You are speaking of hypocrisy, but this seems like a matter of politeness, 
 right?

I am just saying that the W3C claims to have certain values, but only 
applies those values to other people, not to itself. Specifically, the W3C 
says forking specifications is bad (and even goes out of its way to 
disallow it for its own), but then turns around and does it to other 
people's specifications.

hypocrisy (noun): The practice of claiming to have moral standards or 
beliefs to which one's own behavior does not conform; pretense.


I'm also claiming that when doing so, the W3C does not generally give 
credit where credit is due. For example, this document is basically 
written by Ms2ger:

   http://dvcs.w3.org/hg/innerhtml/raw-file/tip/index.html

Here's the version maintained by Ms2ger, for comparison (the only 
differences I could find were editorial style issues, not even text -- 
basically just that the doc has been converted from the anolis style to 
the respec style):

   http://domparsing.spec.whatwg.org/

The most Ms2ger gets is a brief mention in the acknowledgements almost at 
the very end of the document. The WebApps Working Group gets a whole 
sentence above the fold: "This document was published by the Web 
Applications Working Group." The W3C has their logo right at the top and 
calls the draft a "W3C Editor's Draft".

plagiarism (noun): The practice of taking someone else's work or ideas and 
passing them off as one's own.

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'



Re: Pre-fetch rough draft

2012-11-06 Thread Julian Reschke

On 2012-11-06 09:28, Sergey Nikitin wrote:


On 05.11.2012, at 16:28, Julian Reschke wrote:



Yes. Exactly.
It's not about offline apps, it's about reducing loading time.


There's already the prefetch link relation that you could use.



You need at least two pages to start prefetching.


Why two?


And you can't prefetch anything for the first page.


Yes, you can. Just use the first page's metadata instead of a separate 
prefetch manifest.



If you have single page application you can't prefetch.


Why?


And it's not always possible for browser to visit a page (cookie/password 
protected).


It's always possible to visit the page; it just needs to return the 
relevant header fields.



...


Best regards, Julian
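
As a concrete illustration of the existing prefetch link relation mentioned
above: instead of a separate manifest, a server could simply emit prefetch
links in the first page it serves. A minimal sketch (the resource URLs are
invented for illustration and not part of either proposal):

```javascript
// Sketch: turn a list of resource URLs into <link rel="prefetch"> markup
// that a server could emit in the first page's <head>, making a separate
// prefetch manifest unnecessary. URLs below are invented examples.
function prefetchLinks(urls) {
  return urls
    .map((url) => `<link rel="prefetch" href="${url}">`)
    .join("\n");
}

const markup = prefetchLinks([
  "/static/app.js",
  "/static/app.css",
  "/static/logo.png",
]);
console.log(markup);
```

The browser parses these links alongside the page and may fetch the listed
resources in the background, warming the cache for later navigations.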




Re: Call for Editor: URL spec

2012-11-06 Thread Melvin Carvalho
On 6 November 2012 09:46, Ian Hickson i...@hixie.ch wrote:

 On Tue, 6 Nov 2012, Paul Libbrecht wrote:
 
  Could you be slightly more formal?
  You are speaking of hypocrisy, but this seems like a matter of
 politeness, right?

 I am just saying that the W3C claims to have certain values, but only
 applies those values to other people, not to itself. Specifically, the W3C
 says forking specifications is bad (and even goes out of its way to
 disallow it for its own), but then turns around and does it to other
 people's specifications.

 hypocrisy (noun): The practice of claiming to have moral standards or
 beliefs to which one's own behavior does not conform; pretense.


 I'm also claiming that when doing so, the W3C does not generally give
 credit where credit is due. For example, this document is basically
 written by Ms2ger:

http://dvcs.w3.org/hg/innerhtml/raw-file/tip/index.html

 Here's the version maintained by Ms2ger, for comparison (the only
 differences I could find were editorial style issues, not even text --
 basically just that the doc has been converted from the anolis style to
 the respec style):

http://domparsing.spec.whatwg.org/

 The most Ms2ger gets is a brief mention in the acknowledgements almost at
 the very end of the document. The WebApps Working Group gets a whole
 sentence above the fold: "This document was published by the Web
 Applications Working Group." The W3C has their logo right at the top and
 calls the draft a "W3C Editor's Draft".

 plagiarism (noun): The practice of taking someone else's work or ideas and
 passing them off as one's own.


^^ (citation needed) :)



 --
 Ian Hickson   U+1047E)\._.,--,'``.fL
 http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
 Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'




Giving Credit Where Credit is Due [Was: Re: Call for Editor: URL spec]

2012-11-06 Thread Arthur Barstow

On 11/6/12 3:46 AM, ext Ian Hickson wrote:
  
the W3C does not generally give credit where credit is due.


This issue has been bothering me for a while, so thanks for raising it. 
I agree proper attribution is a problem that needs to be addressed in 
the WG's versions of these specs (URL, DOM4, etc.).


My sincere apologies to those Editors that are not getting appropriate 
credit for their contributions.


-Thanks, AB




W3C document license [Was: Re: Call for Editor: URL spec]

2012-11-06 Thread Arthur Barstow

On 11/6/12 3:23 AM, ext Ms2ger wrote:

On 11/06/2012 08:02 AM, Adam Barth wrote:

Does the WebApps Working Group plan to do either of these things?

A) Put in technical effort to improve the specification


Unlikely.


My expectation is that public-webapps will continue to be one venue for 
comments about the spec.


If a credible Editor does not commit to moving the spec along the REC 
track, then I believe one option is for the WG to publish the spec. That 
is, don't specifically identify an Editor. Regardless, as I stated 
previously, proper attribution is needed.


B) License the fork in such a way as to let me merge improvements 
into my copy


Definitely not.


I am not aware of any changes, nor impending changes, to the W3C's 
Document License policy that WebApps follows.


-AB







[Bug 19878] New: Revert change in Close-reason-unpaired-surrogates.htm ?

2012-11-06 Thread bugzilla
https://www.w3.org/Bugs/Public/show_bug.cgi?id=19878

        Bug ID: 19878
       Summary: Revert change in Close-reason-unpaired-surrogates.htm?
       Product: WebAppsWG
     Component: Test suite
       Version: unspecified
        Status: NEW
      Severity: normal
      Priority: P2
Classification: Unclassified
            OS: All
      Hardware: PC
      Reporter: sim...@opera.com
      Assignee: dave.n...@w3.org
    QA Contact: public-webapps-bugzi...@w3.org
            CC: kr...@microsoft.com, m...@w3.org, public-webapps@w3.org

http://dvcs.w3.org/hg/webapps/rev/d0ec1879951a

I don't understand why this change was made. The assert seems useful for the
purpose of finding impl bugs. Is it tested somewhere else? Should we revert the
change?

-- 
You are receiving this mail because:
You are on the CC list for the bug.



Re: W3C document license [Was: Re: Call for Editor: URL spec]

2012-11-06 Thread Charles McCathie Nevile
On Tue, 06 Nov 2012 12:57:38 +0100, Arthur Barstow art.bars...@nokia.com  
wrote:



On 11/06/2012 08:02 AM, Adam Barth wrote:

Does the WebApps Working Group plan to do either of these things?


B) License the fork in such a way as to let me merge improvements into  
my copy


I am not aware of any changes, nor impending changes, to the W3C's  
Document License policy that WebApps follows.


Actually there are efforts being made to change the document license. But  
they are not in the scope of this group.


If you're a W3C member then you should already know how the organisation  
changes itself. If not, you're still welcome to join, e.g., the W3C Process  
Community Group, where, among other people, the CEO and the current editor  
of the Process Document actually listen to what people say and engage in  
discussion.


cheers

Chaals

--
Charles McCathie Nevile - Consultant (web standards) CTO Office, Yandex
  cha...@yandex-team.ru Find more at http://yandex.com



Re: Event.key complaints?

2012-11-06 Thread Кошмарчик
On Thu 1 Nov 2012, Hallvord R. M. Steen wrote:
 I would like the story of event.char and event.key to be that
 event.char describes the generated character (if any) in its
 shifted/unshifted/modified/localized glory while event.key describes
 the key (perhaps on a best-effort basis, but in a way that is at least
 as stable and usable as event.keyCode).

I think we're mostly in agreement here (except for being satisfied with
best-effort key codes ^_^).

 Hence, what I think would be most usable in the real world would be
 making event.key a mapping back to un-shifted character values of a
 normal QWERTY (en-US) layout. Authors are asking for stable reference
 values for identifying keys, and that's the most stable and widely
 known reference keyboard layout.

The main problem with using unmodified 'en-US' values is that it doesn't
define values for keys that are not found on US keyboards. So, it's great
for US keys, but completely ignores the rest of the world. And once you
start looking at adding support for these other keys, you find that things
aren't so simple.

This is why we proposed using the USB codes - they define unique values that
cover pretty much every modern keyboard that's in use.

Consider the following problems with using 'en-US':
* What should the 'key' value be for the B00 key (located next to the
left shift - see the ISO/IEC9995-2 spec[1])? This is used in UK, Italian,
Spanish, French, German and many other keyboards.
* What should the 'key' value be for B11 key (next to the right shift)?
This is used on Brazilian and Japanese keyboards.
* And C12 (tucked under the Enter key when Enter takes two rows)? Keyboards
with B00 usually have C12 as well.
* And E13 (between += and Backspace)? Found on Japanese (Yen key) and
Russian keyboards.
None of these keys exist on a US keyboard.

The USB codes solve all of these problems because they've already thought
about all the keyboard variation that exists in the world:
* For B00, the USB code = 0x64, name = Non-US \ and |.
* For B11, the USB code = 0x87, name = International1.
* For C12, the USB code = 0x32, name = Non-US # and ~.
* For E13, the USB code = 0x89, name = International3.
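
A sketch of what such a table-driven lookup might look like in practice.
Only the four codes mentioned above are filled in here; the names follow
the USB HID usage table, and the helper function is purely illustrative:

```javascript
// Sketch: a fragment of a USB HID keyboard usage-code table, covering
// only the four keys discussed above. The full table lives in the
// USB HID Usage Tables specification.
const USB_KEY_NAMES = {
  0x32: "Non-US # and ~", // C12
  0x64: "Non-US \\ and |", // B00
  0x87: "International1",  // B11
  0x89: "International3",  // E13
};

// Illustrative helper: resolve a USB usage code to its key name.
function usbKeyName(code) {
  return USB_KEY_NAMES[code] ?? "Unknown";
}

console.log(usbKeyName(0x64)); // prints: Non-US \ and |
```

Because every physical key gets its own code, none of the non-US keys
above needs to be shoehorned into an 'en-US' character value.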

Simply specifying the 'key' codes as coming from the 'en-US' layout leaves
too many details undefined and (while it may be marginally better than the
legacy keyCode values) will most likely result in a lot of browser
variation for these non-US edge cases.

Because of these problems, specifying this field as containing unmodified
'en-US' key is not an adequate solution.

As for the solution we need to come up with, it doesn't matter to me
if it's:
* encoded in the current 'key' field, or in a new field (although it'd be
nice to have the 'key' field do the right thing).
* a numeric value or a string (although I think a numeric value is
preferable to avoid confusion with the 'char' value).
* the exact USB codes or something similar that we derive from them.

But, we do need it to:
* be able to uniquely identify each key on the keyboard.
* be useful for all keys on all keyboards (and not just those that map
nicely to 'en-US')
* be straightforward to implement on all platforms.

-Gary

[1] See http://en.wikipedia.org/wiki/ISO/IEC_9995


Re: [webcomponents]: Making Shadow DOM Subtrees Traversable

2012-11-06 Thread Dimitri Glazkov
On Thu, Nov 1, 2012 at 8:39 AM, Tab Atkins Jr. jackalm...@gmail.com wrote:
 On Thu, Nov 1, 2012 at 2:43 PM, Maciej Stachowiak m...@apple.com wrote:
 On Nov 1, 2012, at 12:41 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:
 On Thu, Nov 1, 2012 at 9:37 AM, Maciej Stachowiak m...@apple.com wrote:
 On Nov 1, 2012, at 12:02 AM, Dimitri Glazkov dglaz...@google.com wrote:
 Hi folks!

 While you are all having good TPAC fun, I thought I would bring this
 bug to your attention:

 https://www.w3.org/Bugs/Public/show_bug.cgi?id=19562

 There's been several comments from developers about the fact that
 Shadow DOM encapsulation is _too_ well-sealed for various long tail,
 but important use cases

 What are these use cases? I did not see them in the bug.

 http://w3cmemes.tumblr.com/post/34633601085/grumpy-old-maciej-has-a-question-about-your-spec

 For example, being able to re-render the page manually via DOM
 inspection and custom canvas painting code.  Google Feedback does
 this, for example.  If shadows are only exposed when the component
 author thinks about it, and then only by convention, this means that
 most components will be un-renderable by tools like this.

 As Adam Barth often points out, in general it's not safe to paint pieces of 
 a webpage into canvas without security/privacy risk. How does Google 
 Feedback deal with non-same-origin images or videos or iframes, or with 
 visited link coloring, to cite a few examples? Does it just not handle those 
 things?

 For the public/private part at least, this is just a switching of the
 defaults.  There was no good *reason* to be private by default, we
 just took the shortest path to *allowing* privacy and the default fell
 out of that.  As a general rule, we should favor being public over
 being private unless there's a good privacy or security reason to be
 private.  So, I don't think we need strong use-cases here, since we're
 not having to make a compat argument, and the new model adds minimal
 complexity.

 I don't know enough of the context to follow this. Right now there's no good 
 general mechanism for a component exposing its guts, other than by 
 convention, right? It seems like adding a general mechanism to do so is a 
 good idea, but it could work with either default or with no default at all 
 and requiring authors to specify explicitly. I think specifying either way 
 explicitly would be best. JS lets you have public properties in an object, 
 or private properties (effectively) in a closure, so both options are 
 available and neither is default. It's your choice whether to use 
 encapsulation. I am not sure we need to specifically nudge web developers 
 away from encapsulation.

 I'm fine with specifying explicitly too.  The case we're trying to
 avoid is authors getting (unneeded) privacy by default, just because
 they did the easiest thing and didn't think too hard about it.

 The analogy with JS is somewhat telling - it's harder and less
 convenient to make private (closure) variables in JS. Even when we add
 Symbols to the language, it'll still be easier to just use normal
 (public) properties.
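
 That asymmetry can be sketched in a few lines (an illustrative example,
 not from the thread):

```javascript
// Sketch: public properties are the path of least resistance in JS;
// closure-private state takes deliberate extra structure.
function makePublicCounter() {
  return { count: 0, increment() { this.count++; } };
}

function makePrivateCounter() {
  let count = 0; // unreachable from outside the closure
  return { increment() { count++; }, value() { return count; } };
}

const pub = makePublicCounter();
pub.increment();
pub.count = 42; // outside code can reach in and fiddle with the internals

const priv = makePrivateCounter();
priv.increment();
// priv exposes no "count" property at all; only value() reveals the state
```
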


 6) The isolated setting essentially means that there's a new
 document and scripting context for this shadow subtree (specifics
 TBD). Watch https://www.w3.org/Bugs/Public/show_bug.cgi?id=16509 for 
 progress.

 That seems like a whole separate feature - perhaps we should figure out 
 private vs public first. It would be good to know the use cases for 
 this feature over using private or something like seamless iframes.

 Yeah, sure.  It's useful to bring up at the same time, though, because
 there are some decent use-cases that sound at first blush like they
 should be private, but really want even stronger security/isolation
 constraints.

 An existing example, iirc, is the Google +1 button widget.  Every
 single +1 includes an iframe so it can do some secure scripting
 without the page being able to reach in and fiddle with things.

 What are the advantages to using an isolated component for the +1 button 
 instead if an iframe, or a private component containing an iframe?

 I'm not 100% sure (Dimitri can answer better), but I think it's
 because we can do a somewhat more lightweight isolation than what a
 full iframe provides.

 IIRC, several of our use-cases *really* want all of the instances of a
 given component to use the same scripting context, because there's
 going to be a lot of them, and they all need the same simple data;
 they'd gain no benefit from being fully separate and paying the cost
 of a thousand unique scripting contexts.

Yup. The typical example that the Google+ people point out to me is
techcrunch.com. The count of iframes had gotten so high that it
affected performance to the point where the crunchmasters had to fake
the buttons (and reveal them on hover, which is tangential to the
story and may or may not have been the wisest choice).

With isolated shadow trees, the number of scripting contexts would
equal the number ...

Re: [webcomponents]: Making Shadow DOM Subtrees Traversable

2012-11-06 Thread Dimitri Glazkov
On Thu, Nov 1, 2012 at 9:02 AM, Boris Zbarsky bzbar...@mit.edu wrote:
 On 11/1/12 7:41 AM, Tab Atkins Jr. wrote:

 There was no good *reason* to be private by default


 Yes, there was.  It makes it much simpler to author non-buggy components.
 Most component authors don't really contemplate how their code will behave
 if someone violates the invariants they're depending on in their shadow
 DOMs.  We've run into this again and again with XBL.

 So pretty much any component that has a shadow DOM people can mess with but
 doesn't explicitly consider that it can happen is likely to be very broken.
 Depending on what exactly it does, the brokenness can be more or less
 benign, ranging from doesn't render right to leaks private user data to
 the world.


 As a general rule, we should favor being public over
 being private unless there's a good privacy or security reason to be
 private.


 As a general rule we should be making it as easy as possible to write
 non-buggy code, while still allowing flexibility.  In my opinion.

This has been my concern as well.

The story that swayed me is the elementFromPoint story. It goes
like this: we had an engineer come by and ask to add elementFromPoint
to the ShadowRoot API.

... this is a short story with a happy ending
(https://www.w3.org/Bugs/Public/show_bug.cgi?id=18912), since
ShadowRoot hasn't shipped anywhere yet. However, imagine all browsers
ship Shadow DOM (oh glorious time), and there's a new cool DOM thing
that we haven't thought of yet. Without the ability to get into shadow
trees and polyfill, we'll quickly see people throw nasty hacks at the
problem, like they always do (see one that Dominic suggested here:
https://www.w3.org/Bugs/Public/show_bug.cgi?id=15409#c5). And that
seems like a bad smell.

I am both excited and terrified.

Excited, because discovering Angelina Farro's talk
(http://www.youtube.com/watch?v=JNjnv-Gcpnw) makes me realize that
this Web Components thing is starting to catch on.

Terrified, because we gotta get this right. The Web is traditionally
very monkey-patchey and pliable and our strides to make the boundaries
hard will just breed perversion.

Anyhow. Elliott has made several passionate arguments for traversable
shadow trees in person. Maybe he'll have a chance to chime in here.


:DG