Re: [whatwg] Features for responsive Web design
On Oct 9, 2012, at 2:49 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:

On Tue, Oct 9, 2012 at 11:48 AM, Ian Hickson i...@hixie.ch wrote:

On Tue, 9 Oct 2012, Mark Callow wrote:

On 2012/10/06 7:09, Ian Hickson wrote:

I agree, when there's 3x displays, this could get to the point where we need to solve it. :-) With the current displays, it's just not that big a deal, IMHO.

If by 3x you mean displays whose dpi is 3x that of CSS pixels (96dpi), they already exist in retail products. I saw 2 last week.

Can you elaborate? How many device pixels per CSS pixel do browsers on those devices use? Are they just making CSS pixels smaller, or are they actually using 3x?

http://www.zdnet.com/google-nexus-10-tablet-to-have-higher-res-display-than-ipad-705466/ appears to be 299dpi
http://www.iclarified.com/entry/index.php?enid=3 appears to be 440dpi

These devices aren't out yet, but I suspect browsers would be more-or-less as high-dpi as possible.

This page lists several devices with physical DPI higher than 288 (3x the nominal CSS dpi) but none with a CSS pixel ratio greater than 2x. (To be fair, the data is incomplete and may be inaccurate, though to my knowledge the entries for Apple devices are all correct.) So it's not a given that the cited hardware dpi values would lead to higher CSS pixel ratios in the corresponding software.

Regards,
Maciej
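For concreteness, the "3x" shorthand relates a display's physical dpi to the CSS reference pixel's nominal 96dpi. A quick sketch (the helper name is mine, not a platform API; real browsers decide what ratio to expose via window.devicePixelRatio, and as the thread notes they may cap it at 2 regardless of hardware):

```javascript
// Hypothetical helper, not a platform API: the highest CSS pixel ratio
// a display's physical DPI could support, relative to the nominal
// 96dpi CSS reference pixel.
const CSS_REFERENCE_DPI = 96;

function maxCssPixelRatio(physicalDpi) {
  return physicalDpi / CSS_REFERENCE_DPI;
}

// The displays cited in the thread:
console.log(maxCssPixelRatio(299).toFixed(2)); // "3.11" (reported ~299dpi)
console.log(maxCssPixelRatio(440).toFixed(2)); // "4.58" (reported ~440dpi)
```

So both cited panels are above the 3x threshold in hardware, even if their browsers report only 2x.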
Re: [whatwg] Features for responsive Web design
On Oct 10, 2012, at 4:14 AM, Maciej Stachowiak m...@apple.com wrote:

[...]

This page lists several devices with physical DPI higher than 288 (3x the nominal CSS dpi) but none with a CSS pixel ratio greater than 2x. (To be fair, the data is incomplete and may be inaccurate, though to my knowledge the entries for Apple devices are all correct.) So it's not a given that the cited hardware dpi values would lead to higher CSS pixel ratios in the corresponding software.

No, but we do know that things are continuing to trend towards higher-PPI displays, and that at some point that may lead to a higher CSS pixel ratio. As a member of the jQuery Mobile project I've seen this for myself with the test devices we're constantly receiving: every new screen is sharper than the last.

In fairness, no, we can't predict the future one way or the other. That's exactly why it's better to plan for it than not.
[whatwg] Encoding: API
Hey,

I was wondering whether it would make sense to define http://wiki.whatwg.org/wiki/StringEncoding as part of http://encoding.spec.whatwg.org/ Tying them together makes sense to me anyway, and is similar to what we do with URL, HTML, etc.

As for the open issue, I think it would make sense if the encoding's name was returned; the label is just a case-insensitive keyword used to get there.

I also still think it's kinda yucky that this API has this gigantic hack around what the rest of the platform does with respect to the byte order mark. It seems really weird not to expose the same encode/decode that HTML/XML/CSS/etc. use.

--
http://annevankesteren.nl/
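To illustrate the label-versus-name distinction under discussion (this is how the open issue was eventually resolved in shipping implementations, shown as a sketch rather than a citation of the draft):

```javascript
// Several labels map to a single canonical encoding name in the
// Encoding Standard's label table. If the API returns the name,
// all of these report "utf-8", no matter which label was passed in:
for (const label of ['UTF-8', 'utf8', 'unicode-1-1-utf-8']) {
  console.log(new TextDecoder(label).encoding); // "utf-8"
}
```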
Re: [whatwg] Encoding: API
On Wed, Oct 10, 2012 at 6:42 AM, Anne van Kesteren ann...@annevk.nl wrote:

Hey, I was wondering whether it would make sense to define http://wiki.whatwg.org/wiki/StringEncoding as part of http://encoding.spec.whatwg.org/ Tying them together makes sense to me anyway and is similar to what we do with URL, HTML, etc.

No objection from me.

As for the open issue, I think it would make sense if the encoding's name was returned. Label is just some case-insensitive keyword to get there.

I tend to agree, as the label gives you no information you don't already have, and the name can at least serve as a diagnostic.

I also still think it's kinda yucky that this API has this gigantic hack around what the rest of the platform does with respect to the byte order mark. It seems really weird to not expose the same encode/decode that HTML/XML/CSS/etc. use.

IMHO the API needs to support two use cases: (1) code that wants to follow the behavior of the web platform with respect to legacy content (i.e. the desire to self-host), and (2) code that wants to parse files that are not traditionally web data, i.e. fragments of binary files, which don't have legacy behavior and where a BOM taking priority would be surprising to developers. For #2, following the behavior of APIs like ICU with respect to BOMs is more sensible. I believe #2 is the higher priority as long as it does not preclude #1, and #1 can be achieved by code that inspects the stream before handing it off to the decoder. Practically speaking, this would mean refactoring the combined spec so that the current BOM handling is defined for parsing web content outside of the API, rather than requiring the API to hack around it.

...

While we're here, any feedback from implementers? Mozilla is apparently quite far along. Any surprises or additional issues? Any initial feedback from users?
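Use case #1's "inspect the stream before handing it off" approach can be sketched like this (the helper name and return shape are illustrative only, not from any spec):

```javascript
// Check the leading bytes of a buffer for a byte order mark before
// handing it to a decoder that does no BOM handling of its own.
function sniffBOM(bytes) {
  if (bytes.length >= 3 &&
      bytes[0] === 0xEF && bytes[1] === 0xBB && bytes[2] === 0xBF) {
    return { encoding: 'utf-8', bomLength: 3 };
  }
  if (bytes.length >= 2) {
    if (bytes[0] === 0xFE && bytes[1] === 0xFF) {
      return { encoding: 'utf-16be', bomLength: 2 };
    }
    if (bytes[0] === 0xFF && bytes[1] === 0xFE) {
      return { encoding: 'utf-16le', bomLength: 2 };
    }
  }
  return null; // no BOM: the caller falls back to its own default
}

const bytes = new Uint8Array([0xEF, 0xBB, 0xBF, 0x68, 0x69]); // BOM + "hi"
const bom = sniffBOM(bytes);
console.log(bom.encoding); // "utf-8"
console.log(new TextDecoder(bom.encoding)
  .decode(bytes.subarray(bom.bomLength))); // "hi"
```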
I received feedback recently that the API is perhaps too terse right now when dealing with streaming content, and a more explicit decode(), decodeStream(), resetStream() might be more intelligible. Thoughts?
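For reference, the streaming behavior in question looks like this with the option-based shape that later shipped (a decode() call with a stream flag rather than a separate decodeStream(); this is a sketch of the eventual API, not the draft's wording at the time):

```javascript
// A stateful decoder carries partial byte sequences across calls when
// the caller passes { stream: true }. Here "€" (UTF-8 bytes E2 82 AC)
// is split across two chunks.
const decoder = new TextDecoder('utf-8');
const chunk1 = new Uint8Array([0xE2, 0x82]); // first 2 bytes of "€"
const chunk2 = new Uint8Array([0xAC]);       // final byte

let out = '';
out += decoder.decode(chunk1, { stream: true }); // "" — bytes buffered
out += decoder.decode(chunk2);                   // "€" — stream ends
console.log(out); // "€"
```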
Re: [whatwg] Features for responsive Web design
On Wed, 10 Oct 2012, Mathew Marquis wrote:

In fairness, no, we can't predict the future one way or the other. That's exactly why it's better to plan for it than not.

That's actually exactly why it's better _not_ to plan for it. We can't design features for problems we don't understand. It's better to wait until we have real problems before fixing them.

--
Ian Hickson
http://ln.hixie.ch/
Things that are impossible just take longer.
Re: [whatwg] Features for responsive Web design
That's actually exactly why it's better _not_ to plan for it. We can't design features for problems we don't understand. It's better to wait until we have real problems before fixing them.

You may not be able to predict every future problem, but surely you need to keep the future in mind as you create solutions for today, right? For example, if all it takes is one higher resolution or one more feature to come along before a solution becomes unwieldy, doesn't that imply the solution isn't a particularly strong one, and is instead merely a stopgap? We can't be too bold with our predictions, but we do have to build with the future in mind or else condemn ourselves to a perpetual game of catch-up.

Take care,
Tim

-
http://twitter.com/tkadlec
http://timkadlec.com
http://implementingresponsivedesign.com
[whatwg] Control and Undefined Characters
The spec states:

Any occurrences of any characters in the ranges U+0001 to U+0008, U+000E to U+001F, U+007F to U+009F, U+FDD0 to U+FDEF, and characters U+000B, U+FFFE, U+FFFF, U+1FFFE, U+1FFFF, U+2FFFE, U+2FFFF, U+3FFFE, U+3FFFF, U+4FFFE, U+4FFFF, U+5FFFE, U+5FFFF, U+6FFFE, U+6FFFF, U+7FFFE, U+7FFFF, U+8FFFE, U+8FFFF, U+9FFFE, U+9FFFF, U+AFFFE, U+AFFFF, U+BFFFE, U+BFFFF, U+CFFFE, U+CFFFF, U+DFFFE, U+DFFFF, U+EFFFE, U+EFFFF, U+FFFFE, U+FFFFF, U+10FFFE, and U+10FFFF are parse errors.

These are all control characters or permanently undefined Unicode characters (noncharacters). Additionally, character references for these code points also return these characters, so as far as I can tell they are passed to the tree construction stage, and I see no handling of them in tree construction. Elsewhere in the specification it says:

Text nodes and attribute values must consist of Unicode characters, must not contain U+0000 characters, must not contain permanently undefined Unicode characters (noncharacters), and must not contain control characters other than space characters.

And testing in Firefox and Chrome, it appears these characters are ignored. But I see no mention anywhere of ignoring them or of how to handle them. Is this a bug in the specification?
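The quoted character classes can be summarized in code (the function name is mine; this is a sketch of the ranges quoted above, not text from the spec):

```javascript
// True for the code points the quoted preprocessing rule flags as
// parse errors (which the tokenizer nonetheless passes through).
function isParseErrorCodePoint(cp) {
  // Control characters, excluding NUL, tab, LF, FF, and CR, which the
  // spec handles separately; note U+000B (VT) is included:
  if ((cp >= 0x0001 && cp <= 0x0008) || cp === 0x000B ||
      (cp >= 0x000E && cp <= 0x001F) || (cp >= 0x007F && cp <= 0x009F)) {
    return true;
  }
  // Permanently undefined Unicode noncharacters:
  if (cp >= 0xFDD0 && cp <= 0xFDEF) return true;
  // U+xFFFE and U+xFFFF in every plane (low 16 bits are FFFE or FFFF):
  if ((cp & 0xFFFE) === 0xFFFE && cp <= 0x10FFFF) return true;
  return false;
}

console.log(isParseErrorCodePoint(0x0001)); // true  (control character)
console.log(isParseErrorCodePoint(0xFFFF)); // true  (noncharacter)
console.log(isParseErrorCodePoint(0x0041)); // false ("A")
```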
Re: [whatwg] Control and Undefined Characters
On Thu, 11 Oct 2012, Cameron Zemek wrote:

The spec states:

Any occurrences of any characters in the ranges U+0001 to U+0008, U+000E to U+001F, U+007F to U+009F, U+FDD0 to U+FDEF, and characters U+000B, U+FFFE, U+FFFF, U+1FFFE, U+1FFFF, U+2FFFE, U+2FFFF, U+3FFFE, U+3FFFF, U+4FFFE, U+4FFFF, U+5FFFE, U+5FFFF, U+6FFFE, U+6FFFF, U+7FFFE, U+7FFFF, U+8FFFE, U+8FFFF, U+9FFFE, U+9FFFF, U+AFFFE, U+AFFFF, U+BFFFE, U+BFFFF, U+CFFFE, U+CFFFF, U+DFFFE, U+DFFFF, U+EFFFE, U+EFFFF, U+FFFFE, U+FFFFF, U+10FFFE, and U+10FFFF are parse errors.

These are all control characters or permanently undefined Unicode characters (noncharacters). Additionally, character references for these code points also return these characters, so as far as I can tell they are passed to the tree construction stage, and I see no handling of them in tree construction. Elsewhere in the specification it says:

Text nodes and attribute values must consist of Unicode characters, must not contain U+0000 characters, must not contain permanently undefined Unicode characters (noncharacters), and must not contain control characters other than space characters.

All these requirements relate to authoring conformance criteria and validators. User agents are required to treat U+0001 the same as, say, A.

And testing in Firefox and Chrome it appears these characters are ignored. But I see no mention of this anywhere to ignore them or how to handle them.

Do you have a test case demonstrating this? When I tested it, it seemed like the characters were not ignored: http://software.hixie.ch/utilities/js/live-dom-viewer/?saved=1824 (This test checks whether a U+0001 is lost in the JS parser, document.write(), the HTML tokeniser, the HTML parser, the DOM API, or the JS string API, and it seems to get through all of those fine.)

--
Ian Hickson
http://ln.hixie.ch/
Things that are impossible just take longer.
Re: [whatwg] Control and Undefined Characters
On Thu, Oct 11, 2012 at 9:07 AM, Ian Hickson i...@hixie.ch wrote:

User agents are required to treat U+0001 the same as, say, A.

Yeah, that is how I understood the specification.

And testing in Firefox and Chrome it appears these characters are ignored. But I see no mention of this anywhere to ignore them or how to handle them.

Do you have a test case demonstrating this? When I tested it it seemed like the characters were not ignored: http://software.hixie.ch/utilities/js/live-dom-viewer/?saved=1824

Oh, never mind. Saving the page and opening it in a hex editor, I see the character is still there. Sorry, I was inspecting the DOM, and these control characters are invisible (it's early in the morning and my brain is not functioning correctly).

So section '3.2.5.1.5 Phrasing content' is only referring to conforming documents when it mentions that text nodes and attribute values should not include control characters other than space characters?
Re: [whatwg] Control and Undefined Characters
On Thu, 11 Oct 2012, Cameron Zemek wrote:

So section '3.2.5.1.5 Phrasing content' is only referring to conforming documents when it mentions that text nodes and attribute values should not include control characters other than space characters?

It's referring to the conformance rules for documents, yes. This section in the introduction talks about this issue: http://www.whatwg.org/specs/web-apps/current-work/multipage/introduction.html#how-to-read-this-specification

HTH,

--
Ian Hickson
http://ln.hixie.ch/
Things that are impossible just take longer.
Re: [whatwg] Encoding: API
On Wed, Oct 10, 2012 at 6:42 AM, Anne van Kesteren ann...@annevk.nl wrote:

[...]

Also, Firefox 18 as well as recent nightlies implement this draft. It should match the draft pretty closely, though I know there is some weirdness in how invalid input is handled for some encodings when decoding. This is also hooked up to the generic Gecko decoder backend, which means that the decoder doesn't yet support the exact set of encodings defined by the http://encoding.spec.whatwg.org/ spec.

/ Jonas
Re: [whatwg] Encoding: API
On Wed, Oct 10, 2012 at 7:28 PM, Joshua Bell jsb...@chromium.org wrote:

[...]

IMHO the API needs to support two use cases: (1) code that wants to follow the behavior of the web platform with respect to legacy content (i.e. the desire to self-host), and (2) code that wants to parse files that are not traditionally web data, i.e. fragments of binary files, which don't have legacy behavior and where a BOM taking priority would be surprising to developers. For #2, following the behavior of APIs like ICU with respect to BOMs is more sensible. I believe #2 is the higher priority as long as it does not preclude #1, and #1 can be achieved by code that inspects the stream before handing it off to the decoder. Practically speaking, this would mean refactoring the combined spec so that the current BOM handling is defined for parsing web content outside of the API, rather than requiring the API to hack around it.

You would still get the hack, because the API requires special treatment for utf-16. Given that per Unicode utf-16le and utf-16be outlaw the BOM, maybe a good solution would be a flag to disable BOM handling as seen by the decode algorithm? So the decoder gets a disableBOM flag that defaults to false? That would only require a special case for BOM handling on top of what there is today, which seems a fair bit cleaner.

I received feedback recently that the API is perhaps too terse right now when dealing with streaming content, and a more explicit decode(), decodeStream(), resetStream() might be more intelligible. Thoughts?

Either way works for me.

--
http://annevankesteren.nl/
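The proposed flag's semantics might look like this for UTF-8 (a sketch: disableBOM is the name floated in this message, not a shipped option, and the wrapper uses the ignoreBOM option of later implementations only so the underlying decoder doesn't do its own BOM stripping):

```javascript
// With BOM handling enabled (the default), a leading BOM is consumed
// by the decoder; with it disabled, the BOM bytes decode as an
// ordinary U+FEFF character in the output.
function decodeUTF8(bytes, { disableBOM = false } = {}) {
  if (!disableBOM && bytes.length >= 3 &&
      bytes[0] === 0xEF && bytes[1] === 0xBB && bytes[2] === 0xBF) {
    bytes = bytes.subarray(3); // consume the BOM
  }
  // ignoreBOM: true keeps the underlying decoder from stripping it too.
  return new TextDecoder('utf-8', { ignoreBOM: true }).decode(bytes);
}

const bytes = new Uint8Array([0xEF, 0xBB, 0xBF, 0x68, 0x69]); // BOM + "hi"
console.log(decodeUTF8(bytes));                       // "hi"
console.log(decodeUTF8(bytes, { disableBOM: true })); // "\uFEFFhi"
```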