Re: [whatwg] Order of popstate event and scroll restoration - interop issue

2015-08-14 Thread James Graham

On 11/08/15 15:08, Majid Valipour wrote:

According to HTML5 spec persisted user state (scroll, scale, form values,
etc)
should be restored before dispatching popstate event. (See steps 9 and 14 in
history traversal algorithm[1]).

Gecko and IE follow the spec order for scroll position but in Blink and
WebKit
the order is reversed specifically:
1. 'popstate' event dispatched
2.  scroll position restored  (only if user has not scrolled)
3. 'hashchange' event dispatched (only if hash changed)


Do you have a testcase for this? It seems like something that should be 
added to the web-platform-tests repository. See [1] for details of the 
test format and submission process and ask me (or #testing on w3c irc) 
for help if you need it.


[1] http://testthewebforward.org/docs/
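A rough sketch of the sort of testharness.js test I mean (assuming a page
tall enough to scroll; history traversal is asynchronous, so a real
web-platform-test would need more care with timing and scroll restoration
mode):

  async_test(t => {
    scrollTo(0, 1000);                    // scroll position for the first entry
    history.pushState({}, "", "#two");    // create a second entry
    scrollTo(0, 0);
    window.onpopstate = t.step_func_done(() => {
      // Per steps 9 and 14, the persisted scroll position should already
      // have been restored by the time popstate fires.
      assert_equals(scrollY, 1000, "scroll restored before popstate");
    });
    history.back();
  }, "scroll restoration happens before popstate");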



Re: [whatwg] Supporting feature tests of untestable features

2015-04-02 Thread James Graham
On 02/04/15 09:36, Simon Pieters wrote:

 I think we should not design a new API to test for features that should
 already be testable but aren't because of browser bugs. Many in that
 list are due to browser bugs. All points under HTML5 are browser bugs
 AFAICT. Audio/video lists some inconsistencies (bugs) where it makes
 more sense to fix the inconsistency than to spend the time implementing
 an API that allows you to test for the inconsistency.

[...]

 A good way to avoid bugs is with test suites. We have web-platform-tests
 for cross-browser tests.

Yes, this.

The right way to avoid having to detect bugs is for those bugs to not
exist. Reducing implementation differences is a critical part of making
the web platform an attractive target to develop for, just like adding
features and improving performance.

Web developers who care about the future of the platform can make a huge
difference by engaging with this process. In particular libraries like
Modernizr should be encouraged to adopt a process in which they submit
web-platform tests for each interoperability issue they find, and report
the bugs to browser vendors with a link to the test. This means both
that existing buggy implementations are likely to be fixed — because a
bug report with a test and an impact on a real product are the best,
highest priority, kind — and are likely to be avoided in future
implementations of the feature.
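As a concrete (if simplified) illustration, an audio/video inconsistency of
the kind mentioned above turns into a testharness.js test of only a few
lines, which can then be attached to the relevant vendor bug reports
(sketch only):

  test(() => {
    const video = document.createElement("video");
    assert_equals(video.canPlayType("video/x-does-not-exist"), "",
                  "unsupported types must return the empty string");
  }, "canPlayType returns the empty string for unsupported types");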

I really think it's important to change the culture here so that people
understand that they have the ability to directly effect change on the
web-platform, and not just through standards bodies, rather than
regarding it as something out of their control that must be endured.


Re: [whatwg] URL interop status and reference implementation demos

2014-11-19 Thread James Graham
On 18/11/14 23:14, Sam Ruby wrote:
 Note: I appear to have direct update access to urltestdata.txt, but I
 would appreciate a review before I make any updates.

FYI all changes to web-platform-tests* are expected to be via GH pull
request with an associated code review, conducted by someone other than
the author of the change, either in GitHub or at some other public
location (e.g. critic, a bug in bugzilla, etc.) (c.f. [1])

* With a few exceptions that are not relevant to the current case e.g.
bumping the version of submodules.

[1] http://testthewebforward.org/docs/review-process.html



Re: [whatwg] URL interop status and reference implementation demos

2014-11-19 Thread James Graham
On 19/11/14 14:55, Domenic Denicola wrote:

 web-platform-tests is huge.  I only need a small piece.  So for
 now, I'm making do with a wget in my Makefile, and two patch
 files which cover material that hasn't yet made it upstream.
 
 Right, I was suggesting the other way around: hosting the
 evolving-along-with-the-standard testdata.txt inside whatwg/url, and
 letting web-platform-tests pull that in (with e.g. a submodule).
 

That sounds like unnecessary complexity to me. It means that random
third party contributors need to know which repository to submit changes
to if they edit the URL test data file. It also means that we have to
recreate all the infrastructure we've created around web-platform-tests
for the URL repo.

Centralization of the test repository has been a big component of making
contributing to testing easier, and I would be very reluctant to
special-case URL here.


Re: [whatwg] URL interop status and reference implementation demos

2014-11-19 Thread James Graham
On 19/11/14 16:02, Domenic Denicola wrote:
 From: whatwg [mailto:whatwg-boun...@lists.whatwg.org] On Behalf Of
 James Graham
 
 That sounds like unnecessary complexity to me. It means that random
 third party contributors need to know which repository to submit
 changes to if they edit the URL test data file. It also means that
 we have to recreate all the infrastructure we've created around
 web-platform-tests for the URL repo.
 
 Centralization of the test repository has been a big component of
 making contributing to testing easier, and I would be very
 reluctant to special-case URL here.
 
 Hmm. I see your point, but it conflicts with what I consider a best
 practice of having the test code and spec code (and reference
 implementation code) in the same repo so that they co-evolve at the
 exact same pace. Otherwise you have to land multi-sided patches to
 keep them in sync, which inevitably results in the tests falling
 behind. And worse, it discourages the practice of not making any spec
 changes without any accompanying test changes.

In practice very few spec authors actually do that, for various reasons
(limited bandwidth, limited expertise, limited interest in testing,
etc.). Even when they do, designing the system around the needs of spec
authors doesn't work well for the whole lifecycle of the technology;
once the spec is being implemented and shipped it is likely that those
authors will have moved on to spend most of their time on other things,
so won't want to be the ones writing new tests for last year's spec.
However implementation and usage experience will reveal bugs and suggest
areas that require additional testing. These tests will be written
either by people at browser vendors or by random web authors who
experience interop difficulties.

It is one of my goals to make sure that browser vendors — in particular
Mozilla — not only run web-platform-tests but also write tests that end
up upstream. Therefore I am very wary of adding additional complexity to
the contribution process. Making each spec directory a submodule would
certainly do that. Making some spec directories, but not others, into
submodules would be even worse.

 That's why for streams the tests live in the repo, and are run
 against the reference implementation every commit, and every change
 to the spec is accompanied by changes to the reference implementation
 and the tests. I couldn't imagine being able to maintain that
 workflow if the tests lived in another repo.
 

Well, you could do it of course, for example by using wpt as a submodule
of that repository or by periodically syncing the test files to wpt.

As it is those tests appear to be written in a way that makes them
incompatible with web-platform-tests and useless for testing browsers.
If that's true, it doesn't really support the idea that we should
structure our repositories to prioritise the contributions of spec
authors over those of other parties.
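For what it's worth, when the test data is kept declarative, the
browser-facing consumer is tiny; something like the following sketch,
assuming `cases` has already been loaded from the shared data file and each
record looks like {input, base, href} (the real urltestdata format has more
fields and expected-failure cases):

  for (const c of cases) {
    test(() => {
      const url = new URL(c.input, c.base);
      assert_equals(url.href, c.href);
    }, "Parsing <" + c.input + "> against <" + c.base + ">");
  }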


Re: [whatwg] Notifications: making requestPermission() return a promise

2014-10-01 Thread James Graham
On 01/10/14 14:21, Tab Atkins Jr. wrote:
 On Wed, Oct 1, 2014 at 9:18 AM, Anne van Kesteren ann...@annevk.nl wrote:
 On Wed, Oct 1, 2014 at 3:14 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:
 Wait, what?  Anytime you request something, not getting it is
 exceptional.  Not sure how you can make an argument otherwise.

 I would not expect a synchronous version of this method (were it to
 exist) to have to use try/catch for anything other than invoking it
 with an argument such as TEST, which is clearly wrong. That's why I
 don't think it's exceptional (e.g. warrants an exception/rejection).
 
 And I wouldn't expect someone loading a FontFace synchronously to use
 try/catch to deal with loading errors, either, because that's super
 obnoxious.  Failure, though, is a standard rejection reason - it maps
 to the use of onerror events.

Isn't this just a problem that we have three possible outcomes:

* Permission grant

* Permission reject

* Invalid input data

And three possible ways of routing the code:

* Promise fulfilled

* Promise rejected

* Exception

But we are only using two of them? In that case something has to give;
you either need to disambiguate user grant vs user reject in the fulfill
function or user reject vs invalid data in the rejection function.
Neither seems obviously to have better ergonomics than the other.
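To make the trade-off concrete, here is a sketch of the two shapes, using
the promise-returning requestPermission() under discussion:

  // Disambiguate grant vs reject in the fulfil handler; rejection then
  // only means invalid input:
  Notification.requestPermission().then(state => {
    if (state === "granted") { /* show notifications */ }
    else { /* the user said no */ }
  });

  // Or: route user rejection through the rejection handler, alongside
  // invalid input:
  Notification.requestPermission().then(
    () => { /* granted */ },
    err => { /* user refusal or invalid arguments; disambiguate via err */ });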


Re: [whatwg] Adding a property to navigator for getting device model

2014-09-24 Thread James Graham
On 24/09/14 02:54, Jonas Sicking wrote:

 In the meantime, I'd like to add a property to window.navigator to
 enable websites to get the same information from there as is already
 available in the UA string. That would at least help with the parsing
 problem.
 
 And if means that we could more quickly move the device model out of
 the UA string, then it also helps with the UA-string keying thing.

It's not entirely clear this won't just leave us with the device string
in two places, and unable to remove either of them. Do we have any
evidence that the sites using UA detection will all change their code in
relatively short order, or become unimportant enough that we are able to
break them?
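As a sketch of what a transition period looks like in practice, sites would
end up carrying code along these lines, where navigator.deviceModel is a
stand-in for whatever the proposed property would be called and the regular
expression matches the conventional Android UA-string format:

  const model = navigator.deviceModel ||
                (navigator.userAgent.match(/; ([^;)]+) Build\//) || [])[1];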



Re: [whatwg] [Fetch] API changes to make stream depletion clearer/easier

2014-08-23 Thread James Graham
On 22/08/14 19:29, Brian Kardell wrote:
 On Fri, Aug 22, 2014 at 1:52 PM, Anne van Kesteren ann...@annevk.nl wrote:
 On Fri, Aug 22, 2014 at 7:15 PM, Brian Kardell bkard...@gmail.com wrote:
 I still think that calling it bodyStream actually helps understanding
 all you need and it's short/portable...

 response.bodyStream.asJSON()

 seems to at least give the hint that it is a stream that is consumed
 without getting too crazy.

 Well 1),

   response.json()

 is short too, much shorter in fact.

 
 It is, but there was concern that it was too short to be clear/might
 actually be confusing before it was further shortened.  Making it
 shorter doesn't help that objection - it doesn't make it clearer, does
 it?  I'm not saying this is best; I'm offering a proposal that tries
 to strike the balance with this fact - that's all there is to my
 comment.

So my opinion is that there are two possible scenarios:

1) The API is consistent and friendly enough that, after an initial
period of learning how it works, developers will internalize the
semantics. In this case the short names are sufficient to describe the
functionality and should be preferred because they increase the signal /
noise ratio when reading and writing the code.

2) The API has semantics that are so liable to trip up developers that,
without reminder of the behaviour, they will constantly make mistakes.
In this case we should be working out how to design a less unfriendly
API, not bikeshedding which function naming scheme will make the problem
least bad.

I am slightly concerned that the amount of discussion around naming here
belies a belief that the underlying model is going to cause frustration
for developers. Is that the case?
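For reference, the two shapes under discussion look like this in use
(assuming url and handleData are defined elsewhere; bodyStream / asJSON are
the proposed names, not shipped API):

  fetch(url).then(response => response.json()).then(handleData);
  fetch(url).then(response => response.bodyStream.asJSON()).then(handleData);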



Re: [whatwg] [Fetch] API changes to make stream depletion clearer/easier

2014-08-21 Thread James Graham
On 21/08/14 18:52, Jake Archibald wrote:
 take was suggested in IRC as an alternative to consume, which has
 precedent: http://dom.spec.whatwg.org/#dom-mutationobserver-takerecords
 
 I'm still worried we're querySelectorAlling (creating long function names
 for common actions), but I can live with:
 response.takeBodyAsJSON().

I think that adding an extra verb to the names to describe a consistent
feature of the API is a mistake; it seems important when designing the
API because it's a choice that you have to make, but for the user it's
just part of how the API works and not something that needs to be
reemphasized in the name of every piece of API surface. For example
given a language with immutable strings it would be pure noise to call a
method appendAsNewString compared to just append because all
mutation methods would consistently create new strings.
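To make the analogy concrete (JavaScript strings already work this way):

  const s = "foo";
  const t = s.concat("bar");   // returns a new string; s is unchanged
  // Nobody asks for s.concatAsNewString("bar") just because strings are
  // immutable; it is simply how the language works.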



Re: [whatwg] Proposal: navigator.cores

2014-05-13 Thread James Graham

  On Fri, May 9, 2014 at 9:56 AM, David Young dyo...@pobox.com wrote:



The algorithms don't have to run as fast as possible, they only have to
run fast enough that the system is responsive to the user.  If there is
a motion graphic, you need to run the algorithm fast enough that the
motion isn't choppy.



That's not correct.  For image processing and compression, you want to use
as many cores as you can so the operation completes more quickly.  For the
rest, using more cores means that the algorithm can do a better job, giving
a more accurate physics simulation, detecting motion more quickly and
accurately, and so on.



I think the problem that I have with this API is the number of cores 
that exist isn't obviously a good proxy for the number of cores that 
are available. If I have N cores and am already using M cores for e.g. 
decompressing video, N-M is probably a much better estimate of the 
available resources than N. I suppose for some applications e.g. games, 
scientific simulations, people are likely to set up their system with 
M=0 before they start. However that isn't obviously the common case.
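For illustration, the intended usage pattern is roughly the following,
writing navigator.cores for the proposed property (the closest thing that
eventually shipped is navigator.hardwareConcurrency); note that it reports
the total N, not the available N-M:

  const n = navigator.cores || 1;
  const workers = [];
  for (let i = 0; i < n; i++) {
    workers.push(new Worker("filter-worker.js"));   // hypothetical worker script
  }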


Re: [whatwg] Simplified picture element draft

2013-11-25 Thread James Graham

On 25/11/13 10:32, Kornel Lesiński wrote:

On 25 November 2013 08:00:10 Yoav Weiss y...@yoav.ws wrote:


It contains some parts that I'm not sure have a consensus around them
yet:
* It defines picture as controlling img, where earlier on this
list we
discussed mostly the opposite (img querying its parent picture, if
one
exists)


Controlling image is a great idea. It greatly simplifies the spec and
hopefully implementations as well.

I chose not to expose that implementation detail, assuming that one day
(when all UAs, crawlers implement it) we will not need explicit img
fallback any more.


This suffers from some of the same problems that were previously brought 
up with picture; because it defines a new element that should behave 
like img you have to test that the new element works in all the same 
places that img ought to work. The fact that the spec tries to define 
this in terms of the shadow DOM isn't really helpful; you still need to 
ensure that implementations actually proxy the underlying img 
correctly in all situations.


The advantage of the scheme that zcorpan proposed is that there is no 
magic proxy; we just add a capability to img to select its source 
using more than just a src attribute. This has better fallback than your 
design and is easier to implement.


Re: [whatwg] The src-N proposal

2013-11-20 Thread James Graham

On 19/11/13 22:07, Simon Pieters wrote:


The selection algorithm would only consider source elements that are
previous siblings of the img if the parent is a picture element, and
would be called in place of the current 'process the image candidates'
in the spec (called from 'update the image data'). 'Update the image
data' gets run when an img element is created, has its src or
crossorigin (or srcset if we still want that on img) attributes
changed/set/removed, is inserted or removed from its parent, when
source is inserted to a picture as a previous sibling, or a source
that is a previous sibling is removed from picture, or when a source
that is a previous sibling and is in picture has its src or srcset (or
whatever attributes we want to use on source) attributes
changed/set/removed. 'Update the image data' aborts if the
parser-created flag is set. When img is inserted to the document, if the
parser-created flag is set, the flag is first unset and then 'update the
image data' is run but without the await a stable state step.


This seems like a nice proposal. There seems to be a minor problem that 
elements created through innerHTML will have the parser created flag set 
and so will not start loading until they are inserted into the document. 
So you probably want to call the flag the delayed load flag or 
somesuch, and only set it if the parser isn't in the fragment case.
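The case I have in mind is roughly:

  const div = document.createElement("div");
  // The img below is parser-created (fragment parsing), so with the flag
  // as proposed it would not start loading here...
  div.innerHTML = "<picture><source srcset='a.png'><img src='b.png'></picture>";
  // ...but only once it is inserted into the document:
  document.body.appendChild(div);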




Re: [whatwg] The src-N proposal

2013-11-20 Thread James Graham

On 20/11/13 12:07, Simon Pieters wrote:

On Wed, 20 Nov 2013 12:30:18 +0100, James Graham
ja...@hoppipolla.co.uk wrote:


This seems like a nice proposal. There seems to be a minor problem
that elements created through innerHTML will have the parser created
flag set and so will not start loading until they are inserted into
the document. So you probably want to call the flag the delayed load
flag or somesuch, and only set it if the parser isn't in the fragment
case.


Yeah, indeed, thanks.

A separate case I was thinking about is more than one imgs in a
picture, do we want both to work or just the first? The proposal right
now would do both. If we want only the first, that means the selection
algorithm needs to check that there are no previous img siblings. When
an img is inserted to a picture so it becomes the first img, we need to
rerun the selection algorithm on the next img sibling (i.e. the img
element that was previously the first). Similarly when an img element is
removed, the (new) first img child needs to run the selection algorithm.
Although it involves more checks, I think it seems saner to have only
the first img use the sources.



I'm not sure that the extra checks buy you much apart from 
implementation complexity. What are you trying to protect against?


Re: [whatwg] The src-N proposal

2013-11-20 Thread James Graham

On 20/11/13 14:19, Shane Hudson wrote:

On Wed, Nov 20, 2013 at 12:32 PM, Yoav Weiss y...@yoav.ws wrote:


I think it's worth while to enable the `sizes` attribute and url/density
pairs on img as well.
It would enable authors that have just variable-width images with no
art-direction to avoid adding a picture with a single source.



+1 to this, I think it would cover all bases nicely and not require extra
markup for simple images.



(note that +1-type messages without additional substantive information 
are generally discouraged on this list. Not meaning to single you out 
particularly; there have been a few in this thread recently).


Re: [whatwg] The src-N proposal

2013-11-19 Thread James Graham

On 19/11/13 01:55, Kornel Lesiński wrote:

On Tue, 19 Nov 2013 01:12:12 -, Tab Atkins Jr. jackalm...@gmail.com
wrote:


AFAIK it makes it as easy to implement and as safe to use as src-N.

Simon, who initially raised concerns about use of source in picture
found that solution acceptable[2].

I'd love to hear feedback about simplified, atomic source from other
vendors.


The cost there is that source in picture is now treated substantially
differently than source in video, despite sharing a name.


The substantial difference is that it lacks JS API exposing
network/buffering state, but IMHO that's not a big loss, as those
concepts are not as needed for pictures.

IMHO the important thing is that on the surface (syntactical level)
they're the same - multiple source elements where the first one matches.


So the remaining objections I am aware of to atomic-source are:

* Something related to animations. I don't actually understand this, so 
it would be nice if someone who does would explain. Alternatively this 
might not actually be an issue.


* Verbosity. This proposal is clearly verbose, but it is also the one 
that authors seem to prefer, largely because it uses the underlying 
markup syntax in a natural way. It seems that people will likely deal 
with the verbosity by copy and paste, templates or libraries to provide 
a convenient shorthand. If the latter occurs we can look at 
standardising it later.


* More testing is needed. Specifically it seems that tests will be 
needed to use source elements (or picture elements?) where you can 
currently use img elements. This is a real concern of course, but 
seems lower on the priority of constituencies than authoring concerns, 
unless we think that poor interop will poison the feature. With an 
atomic proposal this seems much less likely. Hopefully implementations 
will be able to reuse the existing img code so that the actual amount 
of new *code* to test is less than you might think by looking at the 
extra API surface.




Re: [whatwg] The src-N proposal

2013-11-18 Thread James Graham

On 18/11/13 03:25, Daniel Cheng wrote:

On Mon, Nov 18, 2013 at 12:19 AM, Tab Atkins Jr. jackalm...@gmail.com wrote:


On Sun, Nov 17, 2013 at 5:16 AM, Ryosuke Niwa rn...@apple.com wrote:

Without starting a debate on what semantics or aesthetics mean, syntax

is a big deal.  A bad syntax can totally kill a feature.

Believe me, I agree; I named my last coding project Bikeshed, after all.
^_^

This is why I find it puzzling that a syntax accepted by the RICG and
a lot of authors is being shot down by a few implementors.  This is
why I've been classifying the objections as personal aesthetic
concerns - I don't know how to classify them otherwise.  The proposed
syntax doesn't seem to offend average authors, who grasp it well
enough (it's a pretty simple translation from what they already liked
in picture).  It just offends a few of you from WebKit, some of whom
have been a bit hyperbolic in expressing their dislike.

~TJ



I think it's worth pointing out that there are some Chromium/Blink
developers that don't like the multiple attribute syntax either (for what
it's worth, I am one of them).


Yeah, I think this characterization of the debate as Apple vs the 
World is inaccurate and unhelpful. I think that the src-N proposal is 
very ugly indeed. This ugliness creates real issues e.g. if I have 
src-1, src-2 [...] and I decide I want a rule that is consulted between 
src-1 and src-2, I need to rewrite all my attribute names. Whilst this 
might produce a pleasant rush of nostalgia for children of the 80s 
brought up on 8-bit Basic, for everyone else it seems like an 
error-prone drag.


So I think the question is not "is this proposal unpleasant"; it is. The 
question is "is this less unpleasant than the alternatives". That is 
much less clear cut, and there is room for reasonable people to disagree.


Re: [whatwg] The src-N proposal

2013-11-18 Thread James Graham

On 18/11/13 16:36, matmarquis.com wrote:

I recall that some of the more
specific resistance was due to the complication involved in
implementing and testing existing media elements, but I can’t claim
to understand precisely what manner of browser-internal complications
`source` elements brought to the table.


The fundamental issue is atomicity; setting one or N attributes is an 
atomic operation from the point of view of script; creating N elements 
is not. This creates complexity because the algorithm has to deal with 
the possibility of DOM mutation changing the set of available sources 
before it has selected the correct one. I believe there was a proposal 
that simplified the semantics by ignoring mutations, but I hear it ran 
into problems with animated images, which I haven't understood in detail.
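A small sketch of the difference:

  const img = document.createElement("img");
  img.setAttribute("srcset", "small.png 320w, big.png 1024w");  // one mutation

  const picture = document.createElement("picture");
  picture.appendChild(document.createElement("source"));  // selection could run here...
  picture.appendChild(document.createElement("source"));  // ...before this source exists
  picture.appendChild(img);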


Re: [whatwg] onclose events for MessagePort

2013-10-10 Thread James Graham

On 10/10/13 18:14, David Barrett-Kahn wrote:

On GC being a source of cross-browser difficulty: I think you can fix that
by stating in the messageport spec when we guarantee to implicitly close
the connection (when its host page closes) and when we provide no
guarantees (when it loses all its references).

On people relying on GC timing: Those people are being silly and deserve
what they get, as they do in Java.  Using destructors in that language is
very nearly always a bad idea, but they still put them there and it was
fine.


The problem is that it's not perceived as the fault of the page author, 
but as the fault of the browser in which the page fails to work (which 
may indeed be a browser in which it previously did work and that then 
happened to upgrade its GC implementation).


The difference between the web and Java is that with Java you can 
mandate a particular version of a particular implementation, even if it 
is considered ugly to do so. With the web that isn't possible.
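In code terms, the only portable pattern is to close explicitly rather than
relying on the last reference being dropped (sketch):

  const {port1, port2} = new MessageChannel();
  port2.onmessage = e => console.log(e.data);
  port1.postMessage("hello");
  // ...when finished with the channel:
  port1.close();   // deterministic; garbage collection timing is not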




Re: [whatwg] API to delay the document load event

2013-04-29 Thread James Graham

On 04/29/2013 05:26 AM, Robert O'Callahan wrote:

On Mon, Apr 29, 2013 at 3:11 PM, Glenn Maynard gl...@zewt.org wrote:


On Sun, Apr 28, 2013 at 9:11 PM, Robert O'Callahan rob...@ocallahan.org wrote:


It would be easy for us to add some Firefox-only or FirefoxOS-only API
here, but that seems anti-standards. I'd rather unnecessarily standardize a
feature that doesn't get broadly used, than propagate some Firefox-only
feature that does get broadly used.



If it's a feature that will only actually be used in FirefoxOS, then
expecting other browser vendors to invest time implementing it wouldn't
make sense.



  If it doesn't get used, why would they need to invest time implementing it?

Also, this is a feature where it's trivial for applications to gracefully
degrade on browsers that don't have the feature.


I'm not sure that's true. I mean, it's *possible* but you have to be 
careful to never depend on anything that could happen after the 
natural load event in e.g. your load event handler. I can quite easily 
see people getting that wrong.


In general this seems quite a scary design. The load event is rather 
intimately tied in to the lifecycle of the document, and encouraging 
people to arbitrarily delay it feels like a potential source of bugs and 
confusion.


Is getting screenshots of pages for thumbnails really something that 
needs an author-facing API? In general the concept of fully loaded 
doesn't make any sense for a class of modern web applications, which 
might keep loading content or changing their presentation across their 
lifetime. Therefore it seems like simply taking one screenshot at page 
load and replacing it with one a little later after a timeout might be a 
good-enough solution.


Re: [whatwg] API to delay the document load event

2013-04-29 Thread James Graham

On 04/29/2013 11:42 AM, Robert O'Callahan wrote:

On Mon, Apr 29, 2013 at 8:56 PM, James Graham jgra...@opera.com
mailto:jgra...@opera.com wrote:

On 04/29/2013 05:26 AM, Robert O'Callahan wrote:

Also, this is a feature where it's trivial for applications to
gracefully
degrade on browsers that don't have the feature.


I'm not sure that's true. I mean, it's *possible* but you have to be
careful to never depend on anything that could happen after the
natural load event in e.g. your load event handler. I can quite
easily see people getting that wrong.


I'm not sure what you're getting at here.


I mean, let's say you delay the load event until after some data has 
loaded over a web socket. If you try to use that data from the load 
event handler it can fail in a racy way in UAs that don't support 
delaying the load event. This also seems like the kind of race that you 
are more likely to win on a local network, so it wouldn't necessarily be 
caught during development.
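A sketch of that failure mode, where delayLoadEvent()/the release callback
are just stand-in names for whatever the proposed API would be:

  let data = null;
  const release = document.delayLoadEvent && document.delayLoadEvent();  // hypothetical API
  const ws = new WebSocket("wss://example.com/feed");    // hypothetical endpoint
  ws.onmessage = e => {
    data = e.data;
    if (release) release();         // allow the load event to fire now
  };
  window.onload = () => {
    console.log(data);   // in UAs without the API this may still be null
  };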



In general this seems quite a scary design. The load event is rather
intimately tied in to the lifecycle of the document, and encouraging
people to arbitrarily delay it feels like a potential source of bugs
and confusion.


Adding new things that delay the load event has not been a source of
bugs and confusion in my experience. Authors do it a lot and we've done
it in specs too.


So far we have kept the model where the load event is automatically 
managed by the UA, rather than giving the developer direct control of it.



Is getting screenshots of pages for thumbnails really something that
needs an author-facing API? In general the concept of fully loaded
doesn't make any sense for a class of modern web applications, which
might keep loading content or changing their presentation across
their liefetime. Therefore it seems like simply taking one
screenshot at page load and replacing it with one a little later
after a timeout might be a good-enough solution.


The problem is when you next load the application. You don't want to
replace a good screenshot with a screenshot of the application saying
Loading


Then don't replace the screenshot with one taken at the load-event time 
if you already have one.




Re: [whatwg] API to delay the document load event

2013-04-29 Thread James Graham

On 04/29/2013 03:51 PM, Boris Zbarsky wrote:

On 4/29/13 6:50 AM, James Graham wrote:

So far we have kept the model where the load event is automatically
managed by the UA, rather than giving the developer direct control of it.


Developers already have direct control over the load event to the extent
being proposed, as far as I can tell.  Consider this:

   var blockers = [];
   function blockOnload() {
 var i = document.createElement("iframe");
 document.documentElement.appendChild(i);
 blockers.push(i.contentDocument);
 i.contentDocument.open();
   }

   function unblockOnload() {
 blockers.pop().close();
   }

Of course expecting web developers to come up with this themselves and
have to redo all this boilerplate is not reasonable, not to mention the
pollutes-the-DOM and uses-way-too-much-memory aspect of it all.


Yes, I wasn't clear that I was referring to what is encouraged through 
having a documented API, rather than what is possible when one uses the 
existing APIs in innovative ways.


Re: [whatwg] HTML5 Tokenizer Test Cases and Correct Output

2013-03-28 Thread James Graham

On 03/28/2013 12:06 PM, Mohammad Al Houssami (Alumni) wrote:

Hello everyone.

I was wondering if there is some sort of tests for the Tokenizer along with the 
correct output of tokens as well as a way of representing tokens.
What I have in mind is running the tokenizer on some HTML input and printing 
the tokens in the same way the correct output is written.
I will  then be comparing the result I have with the correct one provided 
character by character. :)


http://code.google.com/p/html5lib/source/browse/#hg%2Ftestdata%2Ftokenizer

http://wiki.whatwg.org/wiki/Parser_tests has some documentation of the 
format.
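For reference, an entry in those test files looks roughly like this (JSON;
tokens are represented as arrays, see the wiki page for the full details of
the token representation):

  {
    "description": "Simple start tag followed by text",
    "input": "<a href='x'>text",
    "output": [["StartTag", "a", {"href": "x"}],
               ["Character", "text"]]
  }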


Re: [whatwg] Need to define same-origin policy for WebIDL operations/getters/setters

2013-01-09 Thread James Graham

On Wed, 9 Jan 2013, Boris Zbarsky wrote:


On 1/9/13 4:12 PM, Adam Barth wrote:

   window.addEventListener.call(otherWindow, "click", function() {});


This example does not appear to throw an exception in Chrome.  It
appears to just return undefined without doing anything (except
logging a security error to the debug console).


Hmm.  I may be able to be convinced that turning security errors like this into 
silent no-ops returning undefined is ok, but throwing an exception seems like 
a much better idea to me if you're going to completely not do what you were 
asked to do...  The other option introduces hard-to-debug bugs.


FWIW I have run into this behaviour in WebKit in the context of using the 
platform, and I considered it very user-hostile.


Re: [whatwg] Question on Limits in Adaption Agency Algorithm

2012-12-12 Thread James Graham

On Wed, 12 Dec 2012, Ian Hickson wrote:


On Wed, 12 Dec 2012, Henri Sivonen wrote:

On Sat, Dec 8, 2012 at 11:05 PM, Ian Hickson i...@hixie.ch wrote:

the order between abc and xyz is reversed in the tree.


Does anyone have any preference for how this is fixed?


Does it need to be fixed? That is, is it breaking real sites?


It reverses the order of text nodes. That's ridiculously unintuitive. Yes,
I think it needs solving, even if it isn't hit by any sites.

(If it's hit by sites, it seems likely that they are breaking because of
it. If it isn't, then we can safely change it regardless.)


Although changing it does introduce the possibility of unforeseen 
regressions. Not that I have a strong opinion here, really.


Re: [whatwg] Location object identity and navigation behavior

2012-11-09 Thread James Graham

On 11/08/2012 07:19 PM, Bobby Holley wrote:

The current spec for the Location object doesn't match reality. At the
moment, the spec says that Location is a per-Window object that describes
the associated Document. However, in our testing, it appears that none of
the user-agents (Gecko, WebKit, Trident, Presto) do this [1]. Instead, all
implementations of Location describe the active document in the browsing
context (that is to say, the referent of the WindowProxy). This suggests
that the spec's current language is likely not web-compatible.

If the Location object describes the browsing context, we're left to
consider whether there should be one Location object per Window or one
Location object per browsing context. Gecko and Webkit currently do the
former, and Trident and Presto do the latter (see again [1]). I would like
to change Gecko's behavior here [2], because would simplify a lot of
security invariants and generally make things more sane. How do WebKit
folks feel about this?

If Location follows the WindowProxy, an interesting question is what
happens to expando properties on navigation. I did some testing, and UAs
seem to have pretty inconsistent behavior here [3]. As such, I think the
sanest policy is simply to clear expandos on Location each time the page is
navigated. This is the approach I've taken in the patches in [2].

Thoughts?


Nothing specific on the design, but whatever the final consensus here is 
*please* submit your testcases for everyone to use. This stuff is 
difficult, and very hard to write tests for when you are not actively 
implementing. Without a good shared library of tests in this area we 
will probably have bad interoperability for many years to come.
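For example, even something as small as the following is worth upstreaming
(sketch; assumes a same-origin iframe on the page and a second page to
navigate to):

  const frame = document.querySelector("iframe");
  frame.contentWindow.location.myExpando = 1;
  frame.onload = () => {
    // Under the proposal above this should be undefined after navigation.
    console.log(frame.contentWindow.location.myExpando);
  };
  frame.src = "other.html";   // hypothetical same-origin page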


Re: [whatwg] A plea to Hixie to adopt main

2012-11-07 Thread James Graham

On 11/07/2012 05:52 PM, Ojan Vafai wrote:

On Wed, Nov 7, 2012 at 6:23 AM, Simon Pieters sim...@opera.com wrote:


My impression from TPAC is that implementors are on board with the idea of
adding main to HTML, and we're left with Hixie objecting to it.



For those of us who couldn't make it, which browser vendors voiced
support? I assume Opera since you're writing this thread.


To be clear, Opera didn't voice support for anything. Some people from 
Opera suggested that it seemed like a reasonable idea (I think it seems 
like a reasonable idea).



Hixie's argument is, I think, that the use case that main is intended to

address is already possible by applying the Scooby-Doo algorithm, as James
put it -- remove all elements that are not main content, header, aside,
etc., and you're left with the main content.

I think the Scooby-Doo algorithm is a heuristic that is not reliable
enough in practice, since authors are likely to put stuff outside the main
content that do not get filtered out by the algorithm, and vice versa.

Implementations that want to support a go to main content or highlight
the main content, like Safari's Reader Mode, or whatever it's called, need
to have various heuristics for detecting the main content, and is expected
to work even for pages that don't use any of the new elements. However, I
think using main as a way to opt out of the heuristic works better than
using aside to opt out of the heuristic. For instance, it seems
reasonable to use aside for a pull-quote as part of the main content, and
you don't want that to be excluded, but the Scooby-Doo algorithm does that.

If there is anyone besides from Hixie who objects to adding main, it
would be useful to hear it.



This idea doesn't seem to address any pressing use-cases.


I think that finding the main content of a page has clear use cases. We 
can see examples of authors working around the lack of this feature in 
the platform every time they use a skip to main link, or (less 
commonly) aria role=main. I believe we also see browsers supporting 
role=main in their AT mapping, which suggests implementer interest in 
this approach since the solutions are functionally isomorphic (but with 
very different marketing and usability stories).


I think the argument that the Scooby Doo algorithm is deficient because 
it requires many elements of a page to be correctly marked up, compared 
to main which requires only a single element to get the same 
functional effect, has merit. The observation that having one element on 
a page marked — via class or id — "main" is already a clear cowpath 
enhances the credibility of the suggested solution. On the other hand, I 
agree that not everyone heading down the cowpath was aiming for the same 
place; a div class="main" wrapping the whole page, headers, footers, and 
all is clearly not the same as one that identifies the extent of the 
primary content. I don't know how these different uses stack up 
(apologies if it is in some research that I overlooked).
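For concreteness, the Scooby-Doo approach amounts to something like the
following sketch: strip everything that has identified itself as
not-main-content, and whatever is left over is taken to be the main content.

  function mainContent(doc) {
    const clone = doc.body.cloneNode(true);
    for (const el of clone.querySelectorAll("header, footer, nav, aside")) {
      el.remove();
    }
    return clone;
  }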



I don't expect
authors to use it as intended consistently enough for it to be useful in
practice for things like Safari's Reader mode. You're stuck needing to use
something like the Scooby-Doo algorithm most of the time anyways.


I think Maciej commented on this. IIRC, he said that it wouldn't be good 
enough for reader mode alone, but might usefully provide an extra piece 
of data for the heuristics.



I don't
outright object, but I think our time would be better spent on addressing
more pressing problems with the web platform.


I think that's a very weak argument. In fact, given the current 
landscape I would expect this to swallow more of the web standards 
communities' time if it is not adopted than if it is. But I don't think 
that's a strong argument in favour of adopting it either.




Re: [whatwg] Proposal for Links to Unrelated Browsing Contexts

2012-10-02 Thread James Graham

On 10/02/2012 02:34 AM, Boris Zbarsky wrote:

On 10/1/12 6:10 PM, Ian Hickson wrote:

On Tue, 19 Jun 2012, Boris Zbarsky wrote:

On 6/19/12 1:56 PM, Charlie Reis wrote:

That's from the [if] the user agent determines that the two browsing
contexts are related enough that it is ok if they reach each other
part, which is quite vague.


This is, imo, the part that says unrelated browsing contexts should not
be able to reach each other by name.

It's only vague because hixie wanted all current implementations to be
conforming, I think.  Which I believe is a mistake.


I'm happy to make the spec not match implementations, if the
implementations are going to change to match the spec. :-)


I certainly plan to change Gecko to make this stuff less loose there.


I have no idea why this part of the spec is special enough to get 
undefined behaviour when we have tried to avoid it on general principle 
everywhere else.




Re: [whatwg] Safari, Opera and Navigation Timing API

2012-08-29 Thread James Graham

On 08/29/2012 11:46 AM, Andy Davies wrote:

Anyone know when Safari and Opera are likely to support the Navigation
Timing API? http://www.w3.org/TR/navigation-timing/


In general we (Opera) don't discuss our roadmap. In particular I can't 
offer you any estimates of when features will ship. Sorry.


But we are certainly aware of navigation timing and the fact that it is 
a desirable feature, and are treating it accordingly.


(side note: this seems a little off topic for the whatwg list)


Re: [whatwg] StringEncoding: Allowed encodings for TextEncoder

2012-08-08 Thread James Graham

On 08/07/2012 07:51 PM, Jonas Sicking wrote:


I don't mind supporting *decoding* from basically any encoding that
Anne's spec enumerates. I don't see a downside with that since I
suspect most implementations will just call into a generic decoding
backend anyway, and so supporting the same set of encodings as for
other parts of the platform should be relatively easy.


[...]


However I think we should consider restricting support to a smaller
set of encodings while *encoding*. There should be little reason
for people today to produce text in non-utf formats. We might even be
able to get away with only supporting UTF8, though I wouldn't be
surprised if there are reasonably modern file formats which use utf16.


FWIW, I agree with the decode-from-all-platform-encodings 
encode-to-utf[8|16] position.
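In other words (sketch, using the current API shape: TextDecoder accepts any
platform encoding label, TextEncoder produces UTF-8 only):

  const bytes = new Uint8Array([0xe9]);                        // é in windows-1252
  const text = new TextDecoder("windows-1252").decode(bytes);  // decode: any label
  const utf8 = new TextEncoder().encode(text);                 // encode: UTF-8 only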


Re: [whatwg] Features for responsive Web design

2012-08-08 Thread James Graham

On 08/08/2012 12:27 PM, Markus Ernst wrote:


It is better because art direction and bandwidth use cases can be solved
differently in an appropriate manner:
- For the bandwidth use case, no MQ is needed, but only some information
on the sources available to let the UA decide which source to load.
- For the art direction use case OTOH, the picture element is more
intuitive to handle and also easier to script, as sources can be added
or removed via DOM.


What are the use cases for adding/removing images? It seems to me that 
they would be better addressed by having a good API for interacting with 
srcset rather than adopting an element based design. For example one 
could have HTMLImageElement.addSrc(url, options) where options is a 
dictionary allowing you to set the various srcset options.
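Sketch of the kind of API I mean (addSrc and its options dictionary are
hypothetical, not part of any spec):

  const img = document.querySelector("img");
  img.addSrc("hero-2x.jpg", {density: 2});
  img.addSrc("hero-wide.jpg", {width: 1024});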





Re: [whatwg] register*Handler and Web Intents

2012-08-03 Thread James Graham

On 08/02/2012 06:57 PM, Ian Hickson wrote:


But now consider the short-term cost of adding an element to the head. All
it does is make a few elements in the head leak to the body. The page
still works fine in legacy UAs (none of the elements only work in the
head).


But it will break any scripts or selectors that depend on position in 
the DOM. For that reason I expect many pages that include intents won't 
work fine in UAs that don't have parser support. I agree with Henri 
that it is extremely worrying to allow aesthetic concerns to trump 
backward compatibility here.


I would also advise strongly against using position in DOM to detect 
intents support; if you insist on adding a new void element I will 
strongly recommend that we add it to the parser asap to try and mitigate 
the above breakage, irrespective of our plans for the rest of the 
intent mechanism.


[whatwg] Load events fired during onload handlers

2012-07-30 Thread James Graham
There seems to be general agreement (amongst browsers, not yet the spec) 
that if a document does something that causes a new load event from 
within an onload handler (document.open/document.close) the second load 
event is not dispatched. This also applies to the load event on iframe 
elements if an event handler in the iframe would synchronously cause a 
second load event to fire.


There is not agreement about what happens where there are multiple 
frames e.g. if a load event handler on iframe element A would cause a 
load event in iframe B, should the handler on B fire. Gecko says yes, 
WebKit no. There is a slightly rubbish demo at [1].


I don't think I have a strong opinion about what should happen here, but 
the Gecko behaviour could be easier to implement, and the WebKit 
behaviour slightly safer (presumably the point of this anomaly is to 
prevent infinite loops in load event handlers).


[1] http://software.hixie.ch/utilities/js/live-dom-viewer/saved/1686
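The pattern in question is essentially the following sketch:

  window.onload = () => {
    document.open();
    document.write("<p>replaced</p>");
    document.close();   // would normally queue another load event
  };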


Re: [whatwg] Load events fired during onload handlers

2012-07-30 Thread James Graham

On 07/30/2012 05:44 PM, Boris Zbarsky wrote:

On 7/30/12 11:10 AM, James Graham wrote:

I don't think I have a strong opinion about what should happen here, but
the Gecko behaviour could be easier to implement, and the WebKit
behaviour slightly safer (presumably the point of this anomaly is to
prevent infinite loops in load event handers).


In Gecko's case, the only thing like that I know of is that onload fires
synchronously in Gecko in some cases, I believe. So we had to put in
some sort of recursion guard to prevent firing onload on a parent in the
middle of a child firing onload or something like that.  See
https://bugzilla.mozilla.org/show_bug.cgi?id=330089.  Per spec, onload
is always async, so this wouldn't be a concern.


Yeah, but as far as I can tell all browsers block (same document) load 
events that happen from inside onload [1], so I *guess* at some point in 
the past a site got into an infinite loop by trying to use document.open 
from inside onload.



I'm not quite sure what causes the behavior you're seeing in Gecko at
http://software.hixie.ch/utilities/js/live-dom-viewer/?saved=1686, but
at first glance it's sort of accidental...  Which doesn't mean we
shouldn't spec it, of course; it just means that figuring out what to
spec is harder.  :(

If desired, I can try to figure out exactly why there's only one load
event on the first iframe there.  Let me know.


That would be really helpful.

[1] https://www.w3.org/Bugs/Public/show_bug.cgi?id=17231


Re: [whatwg] Proposal for Links to Unrelated Browsing Contexts

2012-06-19 Thread James Graham

On Tue, 19 Jun 2012, Charlie Reis wrote:


On Tue, Jun 19, 2012 at 11:38 AM, Boris Zbarsky bzbar...@mit.edu wrote:
  On 6/19/12 1:56 PM, Charlie Reis wrote:
That's from the [if] the user agent
determines that the two browsing contexts are related enough that 
it is
ok if they reach each other part, which is quite vague.


This is, imo, the part that says unrelated browsing contexts should not be able 
to reach each other by name.

It's only vague because hixie wanted all current implementations to be 
conforming, I think.  Which I believe is a mistake.


Then the wording should be changed.  However, that belongs in a different 
proposal than this one.


The way the process here works is that Hixie reads these emails, agrees 
that the change is a good idea (hopefully; in this case it seems likely 
since we seem to have three implementors in agreement), and it happens. 
There isn't any need for separate proposals.


Of course it is also possible to file a bug if you want to track this 
specific point. (I sort of thought I had already filed a bug here but I 
can't find it now so maybe I imagined it).


(aside: your mail client seems to be mangling quotes in plaintext mail. 
This makes your replies very hard to follow).

Re: [whatwg] Proposal for Links to Unrelated Browsing Contexts

2012-06-14 Thread James Graham

On 06/14/2012 04:06 AM, Boris Zbarsky wrote:

On 6/13/12 7:44 PM, Michal Zalewski wrote:

The degree of separation between browsing contexts is intuitive in the
case of Chrome


Except it's not, because Chrome will sometimes put things in the same
process when they could have gone in different ones, based on whatever
heuristics it uses for deciding whether it's spawned enough processes.


Let's assume that there is no Chrome-style process isolation, and that
this is only implemented as not giving the target=_unrelated document
the ability to traverse window.opener. If the document's opener lives
in an already-named window (perhaps unwittingly), it won't be
prevented from acquiring the handle via open('',
'name_of_that_window'), right?


The spec needs to require that this be prevented


So AFAICT the spec does require that this is prevented for unrelated 
browsing contexts, except in the case where the two are same-origin 
which is allowed but with some fuzzy condition about [if] the user 
agent determines that the two browsing contexts are related enough that 
it is ok if they reach each other. As far as I can tell only Gecko 
implements that and it seems reasonable that others wouldn't want to 
have behaviour that requires multiple event loops to interact (assuming 
one event loop per unit of related browsing context).


Therefore I think that part of the spec should be changed to only reuse 
the same named window within a single unit of related browsing context.


Re: [whatwg] Navigation triggered from unload

2012-06-13 Thread James Graham

On 06/12/2012 08:56 PM, Boris Zbarsky wrote:

On 6/12/12 6:30 AM, James Graham wrote:

Based on some tests ([1]-[5]), it seems that WebKit seems to cancel the
navigation in the unload handler always, Opera seems to always carry out
the navigation in the unload handler, and Gecko seems to follow WebKit
in the cross-origin case and Opera in the same-origin case. In all cases
the unload handler is only called once.

[1] http://hoppipolla.co.uk/tests/navigation/003.html
[2] http://hoppipolla.co.uk/tests/navigation/004.html
[3] http://hoppipolla.co.uk/tests/navigation/005.html
[4] http://hoppipolla.co.uk/tests/navigation/006.html
[5] http://hoppipolla.co.uk/tests/navigation/007.html


For what it's worth, we initially tried to do what you say WebKit does
but ran into web compat issues. See
https://bugzilla.mozilla.org/show_bug.cgi?id=371360 for the original bug
where we blocked all navigation during unload and
https://bugzilla.mozilla.org/show_bug.cgi?id=409888 for the bug where we
changed to the current behavior. I believe the spec says what it says
based on our implementation experience here...


Hmm, so I wonder if the WebKit people consider it a problem that they 
don't pass the tests in those bug reports. I couldn't find any of the 
original sites still responding, so it's hard to know if there is still 
a compat. problem here. If there isn't, the greater conceptual 
simplicity of the WebKit model is quite appealing.



P.S. Opera's behavior is not quite as simple as you describe: as far as
I can tell it depends on whether the unload is happening due to the user
typing something in the url bar or due to the user clicking a link, say.


That seems to be true. On the other hand it appears that gecko will 
still respect navigation from unload even if the unload was triggered by 
explicit user interaction (e.g. by editing the address bar), as long as 
all the origins match, so you can end up at a different page to the one 
you expected. That is very surprising behaviour (although I see that you 
can argue that it is possible in other ways).


[whatwg] Navigation triggered from unload

2012-06-12 Thread James Graham
What is the expected behaviour of navigation triggered from unload 
handlers? In particular, what stops such navigations from re-triggering 
the unload handler, and thus starting yet another navigation?


It looks like the spec tries to make a distinction between navigations 
that are cross-origin and those that are not (step 4 in the navigating 
across documents algorithm); I'm not sure why this inconsistency is 
desirable rather than using the cross-origin approach always.


Based on some tests ([1]-[5]), it seems that WebKit seems to cancel the 
navigation in the unload handler always, Opera seems to always carry out 
the navigation in the unload handler, and Gecko seems to follow WebKit 
in the cross-origin case and Opera in the same-origin case. In all cases 
the unload handler is only called once.


[1] http://hoppipolla.co.uk/tests/navigation/003.html
[2] http://hoppipolla.co.uk/tests/navigation/004.html
[3] http://hoppipolla.co.uk/tests/navigation/005.html
[4] http://hoppipolla.co.uk/tests/navigation/006.html
[5] http://hoppipolla.co.uk/tests/navigation/007.html
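The shape of the tests is essentially:

  window.onunload = () => {
    // Does this navigation win over the one that triggered unload, and
    // does it re-run this handler?
    location.href = "somewhere-else.html";   // hypothetical target page
  };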


Re: [whatwg] Bandwidth media queries

2012-05-21 Thread James Graham

On 05/21/2012 04:34 PM, Boris Zbarsky wrote:

On 5/21/12 10:09 AM, Mounir Lamouri wrote:

On 05/20/2012 03:04 PM, Boris Zbarsky wrote:

On 5/20/12 5:45 AM, Paul Irish wrote:

Since no one mentioned it, I just wanted to make sure this thread is
aware
of the Network Information API [1], which provides
navigator.connection.bandwidth

It's been recently implemented (to some degree) in both Mozilla [2] and
Webkit [3].


As far as I can tell, the Mozilla implementation always returns Infinity
for .bandwidth.


This is not true. There is an implementation for Firefox Android which
is based on the connection type.


Ah, indeed. I had missed that codepath.

If I'm reading the right code now, that looks like it returns a constant
value for each connection type (e.g. if you're connected via Ethernet or
Wifi it returns 20; if you're connected via EDGE it returns 0.2, etc).


This suggests that the API is extremely silly; although one could 
presumably claim that an estimate allows one to return anything, I 
don't see how returning 20 if the server is feeding you 1 byte/second 
can be helpful to anyone. If no one plans to implement this as a loose 
proxy for type of connection the spec shouldn't pretend to do more 
than that.


Can you point me to the discussion of usecases that led to this design?


Re: [whatwg] Bandwidth media queries

2012-05-21 Thread James Graham

On 05/21/2012 04:50 PM, Boris Zbarsky wrote:

On 5/21/12 10:42 AM, James Graham wrote:

Can you point me to the discussion of usecases that led to this design?


Me personally, no. I wasn't involved in either the spec or the Gecko
impl; I'm just reading the code


Sorry; s/you/anyone/

(I also meant *except* as a loose proxy for 'type of connection')


Re: [whatwg] Bandwidth media queries

2012-05-21 Thread James Graham



On Mon, 21 May 2012, Mounir Lamouri wrote:


On 05/21/2012 04:34 PM, Boris Zbarsky wrote:

On 5/21/12 10:09 AM, Mounir Lamouri wrote:

On 05/20/2012 03:04 PM, Boris Zbarsky wrote:

On 5/20/12 5:45 AM, Paul Irish wrote:

Since no one mentioned it, I just wanted to make sure this thread is
aware
of the Network Information API [1], which provides
navigator.connection.bandwidth

It's been recently implemented (to some degree) in both Mozilla [2] and
Webkit [3].


As far as I can tell, the Mozilla implementation always returns Infinity
for .bandwidth.


This is not true. There is an implementation for Firefox Android which
is based on the connection type.


Ah, indeed.  I had missed that codepath.

If I'm reading the right code now, that looks like it returns a constant
value for each connection type (e.g. if you're connected via Ethernet or
Wifi it returns 20; if you're connected via EDGE it returns 0.2, etc).


The idea is that the specification allows the implementation to be
trivial and improve without changing the specification.


That seems incredibly unlikely to work in practice. Early implementations 
will return 20 or 0.2 and sites will do


if (bandwidth == 20) {
  //get high quality site
} else {
  //get simplified site
}

and users will be very surprised when upgrading their browser causes them 
to get the simplified site or low quality assets when before they never 
did.



And that
implementation is good enough for web pages to know if the user is in a
slow or fast connection without giving the connection type and leaking
information.


I think the fundamental problem with this API isn't that it might leak 
information, it's that it is quite likely to make the overall 
user experience worse rather than better. It is also extremely difficult 
to implement in a really good way as evidenced by the fact that all the 
implementations so far are extremely half-hearted.


Re: [whatwg] Bandwidth media queries

2012-05-20 Thread James Graham

On Sun, 20 May 2012, Boris Zbarsky wrote:


On 5/20/12 5:45 AM, Paul Irish wrote:

Since no one mentioned it, I just wanted to make sure this thread is aware
of the Network Information API [1], which provides
navigator.connection.bandwidth

It's been recently implemented (to some degree) in both Mozilla [2] and
Webkit [3].


As far as I can tell, the Mozilla implementation always returns Infinity for 
.bandwidth.


And this is perfectly compliant, since the spec says:

   The user agent must set the value of the bandwidth attribute to:

   0 if the user is currently offline;
   Infinity if the bandwidth is unknown;
   an estimation of the current bandwidth in MB/s (Megabytes
   per seconds) available for communication with the browsing
   context active document's domain.


If no one is planning on implementing this feature in a meaningful way, 
why is it in the spec?


(yes I know this is not exactly the right list).


Re: [whatwg] Features for responsive Web design

2012-05-18 Thread James Graham

On 05/18/2012 12:16 PM, Markus Ernst wrote:


2. Have there been thoughts on the scriptability of @srcset? While
sources can be added to or removed from picture easily with
standard DOM methods, it looks to me like this would require complex
string operations for @srcset.


Are there any use cases that benefit from scripting here? I wouldn't be 
surprised if there are, but whoever thinks they will have such use cases 
should state them clearly so that the design takes them into account.


Re: [whatwg] Correcting some misconceptions about Responsive Images

2012-05-17 Thread James Graham

On Wed, 16 May 2012, Glenn Maynard wrote:


On Wed, May 16, 2012 at 8:35 PM, Maciej Stachowiak m...@apple.com wrote:


 The downside of the CG as executed is that it was much less successful
in attracting browser implementor feedback (in part because it was
apparently not advertised in places frequented by browser standards
people). So the implementor feedback only got applied later, and without
full knowledge and understanding of the CGs efforts. It's not useful to
have a standards process that doesn't include all the essential
stakeholders.



This isn't a new suggestion, but worth repeating: starting a CG is fine,
but *do not make a new mailing list*.  Hold discussions on a related
monolithic list (like here or webapps), with a subject line prefix.  Making
lots of isolated mailing lists only ensures that the people you'd want
paying attention won't be, because the overhead of subscribing to a list
(for a subject people may only have passive interest in) is much higher
than skimming a thread on whatwg or webapps-public that people are already
on.


FWIW I think that forming community groups that are limited in scope to 
gathering and distilling the relevant use cases could be a functional way 
of working. For example if, in this case, people had said we will form a 
group that will spend 4 weeks documenting and prioritising all the use 
cases that a responsive images feature needs to cover and then the 
results of that work had been taken to a forum where browser implementors 
are engaged (e.g. WHATWG), I think we would have had a relatively smooth 
ride toward a universally acceptable solution.


Of course there are disadvantages to this fragmentary approach compared to 
having one centralised venue, but it has the advantages too; notably 
people are more likely to subscribe to a low-traffic mailing list that 
just covers one feature they care about than subscribe to the WHATWG 
firehose. It also wouldn't require the unscalable solution of having 
vendors subscribe to one list per feature (fragmentation in this direction 
is already a huge problem at e.g. W3C and means that obvious problems with 
specs get missed until late in the game because people with the right 
expertise aren't subscribed to the correct lists).


Re: [whatwg] Bandwidth media queries

2012-05-16 Thread James Graham

On Wed, 16 May 2012, Matthew Wilcox wrote:


First off I know that a number of people say this is not possible. I
am not wanting to argue this because I don't have the knowledge to
argue it - but I do want to understand why, and currently I do not.
Please also remember that I can only see this from an authors
perspective as I'm ignorant of the mechanics of how these things work
internally.

The idea is to have something like:

<link media=min-bandwidth:0.5mps ... />
<link media=min-bandwidth:1mps ... />
<link media=min-bandwidth:8mps ... />


Without going deeper into the specific points, implementation experience 
suggests that even implementing a binary low-bandwidth/high bandwidth 
detection is extremely difficult; Opera has one coupled to the UI for the 
turbo feature and it has been somewhat non-trivial to get acceptable 
quality.


In general the problem with trying to measure something like bandwidth is 
that it is highly time-variable; it depends on a huge number of 
environmental factors like the other users/applications on the same 
connection, possible browser features like down-prioritising connections 
in background tabs, external environmental features like the train just 
went into a tunnel or I just went out of range of WiFi and switched to 
3G and any number of other things. Some of those are temporary 
conditions, some are rapid changes to a new long-term state. Trying to 
present a single number representing this complexity in realtime just 
isn't going to work.


Re: [whatwg] Implementation complexity with elements vs an attribute (responsive images)

2012-05-13 Thread James Graham



On Sun, 13 May 2012, David Goss wrote:


A common sentiment here seems to be that the two proposed responsive
image solutions solve two different use cases:

- img srcset for serving different resolutions of a content image
(for bandwidth and dpi)
- picture for serving different versions of a content image (for art
direction)

...and that neither solution can deal with both issues. I disagree. I
would describe it as a single, broad use case:
Serving different sources of an image based on properties of the
client. These properties could include:
- Viewport width/height
- Containing element width/height
- Device orientation
- Colour capability
- Old-fashioned media type (screen/print)
- Connection speed
- Pixel density
- Things we haven't thought about/aren't an issue yet


Which of these things are actual requirements that people need to meet and 
which are hypothetical? For example I think it is uncontroversial that 
viewport width/height is a real requirement. On the other hand, I have 
never heard of a site that switches assets based on display colour 
capability. Can you point to sites actually switching assets based on each 
property you listed?


Also note that there is a great difference in implementation complexity 
between various properties above. For example, viewport width/height is 
rather easy to work with because one can assume it won't change between 
prefetching and layout, so one can prefetch the right asset. On the other 
hand switching based on containing element width/height requires layout to 
happen before the right asset can be selected, so it has to be loaded 
late. This will significantly decrease the perceived responsiveness of the 
site.


Other properties like connection speed are very difficult to work with 
because they can have high temporal variability e.g. due to sharing of one 
connection by many consumers, due to temporary environmental conditions 
(train goes into a tunnel) or due to switching transports (wifi to 3G, for 
example). My suspicion is that trying to write a solution for switching 
based on connection speed would lead to people getting the wrong assets 
much of the time.


Note that these concerns argue, to a certain extent, *against* reusing a 
very general syntax that can express constraints that aren't relevant to 
the actual use cases, or that provide an attractive nuisance that 
encourages developers to do things that can't be implemented in a 
performant way.


Re: [whatwg] Implementation complexity with elements vs an attribute (responsive images)

2012-05-12 Thread James Graham

On Sat, 12 May 2012, Boris Zbarsky wrote:


On 5/12/12 9:28 AM, Mathew Marquis wrote:
While that information may be available at the time the img tag is parsed, 
I don’t believe it will be available at the time of prefetching


Which information?

At least in Gecko, prefetching happens when the tag is parsed.

So in fact in Gecko the srcset approach would be much more amenable to 
prefetching than the picture approach.


Yes, I should have mentioned that is also true in the various types of 
optimistic resource loading/parsing that Opera does e.g. the delayed 
script execution mode and the speculative tokenisation feature.

Re: [whatwg] API for encoding/decoding ArrayBuffers into text

2012-03-22 Thread James Graham

On 03/21/2012 04:53 PM, Joshua Bell wrote:


As for the API, how about:


  enc = new Encoder("euc-kr")
  string1 = enc.encode(bytes1)
  string2 = enc.encode(bytes2)
  string3 = enc.eof() // might return empty string if all is fine

And similarly you would have

  dec = new Decoder("shift_jis")
  bytes = dec.decode(string)

Or alternatively you could have a single object that exposes both encode()
and decode() and tracks state for both:

  enc = new Encoding("gb18030")
  bytes1  = enc.decode(string1)
  string2 = enc.encode(bytes2)




I don't mind this API for complex use cases e.g. streaming, but it is 
massive overkill for the simple common case of "I have a list of bytes 
that I want to decode to a string" or "I have a string that I want to 
encode to bytes". For those cases I strongly prefer the earlier API 
along the lines of


String.prototype.encode(encoding)
ArrayBufferView.prototype.decode(encoding)
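
For illustration only, a sketch of how that one-shot form might read in 
script. The method names follow the prototype lines above and are not an 
implemented API; the byte values and encoding label are arbitrary:

  var view = new Uint8Array([104, 105]);  // the bytes for "hi"
  var text = view.decode("utf-8");        // whole-buffer bytes-to-string, no state object
  var bytes = text.encode("utf-8");       // whole-string string-to-bytes, no state object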


Re: [whatwg] API for encoding/decoding ArrayBuffers into text

2012-03-16 Thread James Graham



On Fri, 16 Mar 2012, Glenn Maynard wrote:


On Fri, Mar 16, 2012 at 11:19 AM, Joshua Bell jsb...@chromium.org wrote:


And just to be clear, the use case is decoding data formats where string
fields are variable length null terminated.



A concrete example is ZIP central directories.

I think we want both encoding and destination to be optional. That leads us
to an API like:

out_dict = stringEncoding.encode(string, opt_dict);

.. where both out_dict and opt_dict are WebIDL Dictionaries:

opt_dict keys: view, encoding





out_dict keys: charactersWritten, byteWritten, output



The return value should just be a [NoInterfaceObject] interface.
Dictionaries are used for input fields.

Something that came up on IRC that we should spend some time thinking
about, though: Is it actually important to be able to encode into an
existing buffer?  This may be a premature optimization.  You can always
encode into a new buffer, and--if needed--copy the result where you need it.

If we don't support that, most of this extra stuff in encode() goes away.


Yes, I think we should focus on getting feature parity with e.g. python 
first -- i.e. not worry about decoding into existing buffers -- and add 
extra fancy stuff later if we find that there are actually usecases where 
avoiding the copy is critical. This should allow us to focus on getting 
the right API for the common case.



If in-place decoding isn't really needed, we could have:

newView = str.encode("utf-8"); // or {encoding: "utf-8"}
str2 = newView.decode("utf-8");
len = newView.find(0); // replaces stringLength, searching for 0 in the
view's type; you'd use Uint16Array for UTF-16

and encodedLength() would go away.


This looks like a big win to me.


Re: [whatwg] API for encoding/decoding ArrayBuffers into text

2012-03-16 Thread James Graham

On Fri, 16 Mar 2012, Charles Pritchard wrote:


On 3/16/2012 2:17 PM, Boris Zbarsky wrote:

On 3/16/12 5:12 PM, Joshua Bell wrote:

FYI, there was some follow up IRC conversation on this. With Typed Arrays
as currently specified - that is, that Uint16Array has platform endianness


For what it's worth, it seems like this is something we should seriously 
consider changing so as to make the web-visible endianness of typed arrays 
always be little-endian.  Authors are actively writing code (and being 
encouraged to do so by technology evangelists) that makes that assumption 
anyway


The DataView set of methods already does this work. The raw arrays are 
supposed to have platform endianness.


If you see some evangelists skipping the endian check, send them an e-mail 
and let them know.


Not going to work.

You can't evangelise people into making their code work on architectures 
that they don't own. It's hard enough to get people to work around 
differences between browsers when all the browsers are avaliable for free 
and run on the platforms that they develop on.


The reality is that on devices where typed arrays don't appear LE, content 
will break.
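
For context, a hedged sketch of the sort of check being skipped, written 
against the typed array behaviour described above (runnable as-is):

  // Write the bytes 0x01 0x02, then read them back through a Uint16Array,
  // whose interpretation follows the platform's byte order.
  var probe = new Uint8Array([0x01, 0x02]);
  var littleEndian = new Uint16Array(probe.buffer)[0] === 0x0201;

  // DataView sidesteps the question by taking an explicit endianness flag:
  var value = new DataView(probe.buffer).getUint16(0, true); // always read as little-endian: 0x0201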


Re: [whatwg] API for encoding/decoding ArrayBuffers into text

2012-03-14 Thread James Graham

On 03/14/2012 12:38 AM, Tab Atkins Jr. wrote:

On Tue, Mar 13, 2012 at 4:11 PM, Glenn Maynard gl...@zewt.org wrote:

The API on that wiki page is a reasonable start.  For the same reasons that
we discussed in a recent thread (
http://lists.w3.org/Archives/Public/public-webapps/2011JulSep/1589.html),
conversion errors should use replacement (eg. U+FFFD), not throw
exceptions.


Python throws errors by default, but both functions have an additional
argument specifying an alternate strategy.  In particular,
bytes.decode can either drop the invalid bytes, replace them with a
replacement char (which I agree should be U+FFFD), or replace them
with XML entities; str.encode can choose to drop characters the
encoding doesn't support.


For completeness I note that python also allows user-provided custom 
error handling. I'm not suggesting we want this, but I would strongly 
prefer it to providing an XML-entity-encode option :)
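
For concreteness, a sketch of the two main strategies in script form. The 
decoder object, the input variable and the option name are all invented 
here purely to show the shape of the choice, not any real or proposed API:

  dec.decode(corruptBytes);                  // replacement: bad sequences come back as U+FFFD
  dec.decode(corruptBytes, { fatal: true }); // throwing: the call raises an error instead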


Re: [whatwg] RWD Heaven: if browsers reported device capabilities in a request header

2012-02-06 Thread James Graham

On Mon 06 Feb 2012 05:00:55 PM CET, Boris Zbarsky wrote:

On 2/6/12 10:52 AM, Matthew Wilcox wrote:

1) client asks for spdy://website.com
2) server responds with content and adds a request bandwidth device
screen size header


Again, the screen size is not invariant during the lifetime of a 
page. We should not be encouraging people to think that it is


No, but there is a different *typical* screen size/resolution for 
mobile/tablet/desktop/tv and it is common to deliver different content 
in each of these scenarios. Although people could load the same site on 
desktop and mobile set up to have the same viewport dimensions, it is 
not that probable and, only one of the two is likely to be resized.


A typical thing that people want to do is to deliver and display *less* 
content in small (measured in arcseconds) screen scenarios. If you are 
only going to show a subset of the full content it would be nice to 
only do a subset of the backend work (database queries + etc.) and 
transfer a subset of the full data. At the moment this is possible, but 
you pay for it with an extra RTT (at least as far as I can tell). I am 
sympathetic to the view that it would be desirable to be able to 
minimise the cost of generating a reduced-functionality page without 
burning the savings on extra round trips.


Re: [whatwg] RWD Heaven: if browsers reported device capabilities in a request header

2012-02-06 Thread James Graham

On Mon, 6 Feb 2012, Boris Zbarsky wrote:


On 2/6/12 11:42 AM, James Graham wrote:


Sure.  I'm not entirely sure how sympathetic I am to the need to produce 
reduced-functionality pages...  The examples I've encountered have mostly 
been in one of three buckets:


1) Why isn't the desktop version just like this vastly better mobile one?
2) The mobile version has a completely different workflow necessitating a 
different url structure, not just different images and CSS
3) We'll randomly lock you out of features even though your browser and 
device can handle them just fine


The example I had in mind was one of our developers who was hacking 
an internal tool so that he could use it efficiently on his phone.


AFAICT his requirements were:
1) Same URL structure as the main site
2) Less (only citical) information on each screen
3) No looking up / transfering information that would later be thrown away
4) Fast = No extra round trip to report device properties

AFAIK he finally decided to UA sniff Opera mobile. Which is pretty sucky 
even for an intranet app. But I didn't really have a better story to 
offer him. It would be nice to address this kind of use case somehow.


Re: [whatwg] Proposal for autocompletetype Attribute in HTML5 Specification

2012-01-26 Thread James Graham

On 12/15/2011 10:17 PM, Ilya Sherman wrote:

To that end we would like to propose adding an autocompletetype attribute
[1] to the HTML5 specification,


This name is very verbose. Isn't there something shorter — for example 
fieldtype — that we could use instead?


Re: [whatwg] Use of media queries to limit bandwidth/data transfer

2011-12-08 Thread James Graham



On Thu, 8 Dec 2011, Boris Zbarsky wrote:


On 12/8/11 3:56 PM, Tab Atkins Jr. wrote:

Remember that widths refer to the
browser window, not the monitor


For the 'width' and 'height' media queries, yes.

For the 'device-width' and 'device-height' media queries, no.


It's not clear that device-width and device-height should be encouraged 
since they don't tell you anything about how much content area is 
*actually* visible to the user.


Re: [whatwg] Proposal: intent tag for Web Intents API

2011-12-06 Thread James Graham

On Tue, 6 Dec 2011, Anne van Kesteren wrote:

Especially changing the way head is parsed is 
hairy. Every new element we introduce there will cause a body to be implied 
before it in down-level clients. That's very problematic.


Yes, I consider adding new elements to head to be very very bad for this 
reason. Breaking DOM consistency between supporting and non-supporting 
browsers can cause adding an intent to cause unrelated breakage (e.g. by 
changing document.body.firstChild).
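
A hedged sketch of that hazard, for concreteness (markup and element name 
purely illustrative):

  // Markup: <head><intent ...></head><body><p>Hi</p></body>
  // A down-level parser does not allow the unknown element in <head>, so it
  // closes <head> early and the element ends up at the start of <body>:
  document.body.firstChild;  // the <intent> element in non-supporting browsers,
                             // the <p> in browsers that allow it in <head>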


Re: [whatwg] Proposal: intent tag for Web Intents API

2011-12-06 Thread James Graham

On Tue, 6 Dec 2011, James Hawkins wrote:


On Tue, Dec 6, 2011 at 1:16 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:

On Tue, Dec 6, 2011 at 1:14 PM, James Hawkins jhawk...@google.com wrote:

Originally we envisioned using a self-closing tag placed in head for
the intent tag; however, we're now leaning towards not using
self-closing and having the tag be placed in the body with fallback
content, e.g., to install an extension to provide similar
functionality.

<intent action=webintents.org/share>
 Click here to install our extension that implements sharing!
</intent>

What are your thoughts on this route?


So, when the intent tag is supported, it's not displayed at all, and
instead solely handled by the browser?  This seems okay to me.



Correct.



This seems to remove my major objection to the new tag design.

[whatwg] Constructors for HTML Elements

2011-11-07 Thread James Graham
There seems to be some interest in making all concrete interfaces in the 
DOM constructible (there also seems to be some interest in making 
abstract interfaces constructible, but that seems insane to me and I 
will speak no further of it).


This presents some special difficulties for HTML Elements as there is 
not generally one interface per tag (e.g. HTMLHeadingElement is used for 
h1-h6) and making all zero-argument constructors work seems like a more 
natural API than sometimes having to say 'new HTMLDivElement()' and 
sometimes having to say 'new HTMLHeadingElement(h1)'. So the question 
is whether we can change this without breaking compat. The only problem 
I foresee is that adding new interfaces would change stringification. 
But I think it is possible to override that where needed.
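
For concreteness, a sketch of the two constructor shapes being weighed 
(neither is implemented anywhere at the time of writing):

  var div = new HTMLDivElement();          // zero-argument: fine when the interface maps to one tag
  var h1  = new HTMLHeadingElement("h1");  // tag-name argument: needed where one interface covers h1-h6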


Re: [whatwg] Constructors for HTML Elements

2011-11-07 Thread James Graham

On Mon, 7 Nov 2011, Michael A. Puls II wrote:


On Mon, 07 Nov 2011 09:00:14 -0500, James Graham jgra...@opera.com wrote:

There seems to be some interest in making all concrete interfaces in the 
DOM constructible (there also seems to be some interest in making abstract 
interfaces constructible, but that seems insane to me and I will speak no 
further of it).


This presents some special difficulties for HTML Elements as there is not 
generally one interface per tag (e.g. HTMLHeadingElement is used for h1-h6) 
and making all zero-argument constructors work seems like a more natural 
API than sometimes having to say 'new HTMLDivElement()' and sometimes 
having to say 'new HTMLHeadingElement(h1)'. So the question is whether we 
can change this without breaking compat. The only problem I foresee is that 
adding new interfaces would change stringification. But I think it is 
possible to override that where needed.


You'd have to do HTMLUnknownElement(name) anyway, so new 
HTMLHeadingElement(name) wouldn't be bad.


I think it is quite acceptable to break HTMLUnknownElement.


But, what is the ownerDocument? Will it always be window.document I assume?


It would work like new Image; i.e. The element's document must be the 
active document of the browsing context of the Window object on which the 
interface object of the invoked constructor is found..


Anyway, I think it'd be great to have this. It wouldn't really solve a 
problem except for making code a tiny bit shorter. But, it's kind of 
something that seems like it should work (as in, makes sense, intuitive etc.)


FWIW the two cited reasons for wanting it to work are it makes the DOM 
feel more like other javascript and it helps us use element subclassing 
as part of the component model.


Re: [whatwg] Fullscreen Update

2011-10-31 Thread James Graham



On Sat, 29 Oct 2011, Robert O'Callahan wrote:


On Wed, Oct 19, 2011 at 11:57 PM, James Graham jgra...@opera.com wrote:


On 10/19/2011 06:40 AM, Anne van Kesteren wrote:

 Is that an acceptable limitation? Alternatively we could postpone the

nested fullscreen scenario for now (i.e. make requestFullscreen fail if
already fullscreen).



I think punting on this makes sense. Pages can detect the failure and do
something sane (make the element take the whole viewport size). If the
feature becomes necessary we can add it in v2.



I don't think punting on nested fullscreen is a good idea. It's not some
edge case that most applications can't hit. For example, it will come up
with any content that can go full-screen and can contain an embedded
Youtube video. (It'll come up even more often if browser fullscreen UI is
integrated with DOM fullscreen, which we definitely plan to do in Firefox.)
If we don't support nested fullscreen well, then the user experience will
be either
-- making the video fullscreen while the containing content is already
fullscreen simply doesn't work, or
-- the video can go fullscreen, but when you exit fullscreen on the video,
the containing content also loses fullscreen
Both of these are clearly broken IMHO.


Presumably the embedded video could detect that it was already in a 
fullscreen environment and deal with it accordingly. So in theory we could 
wait and see if people just do that before deciding that we have to 
implement the more complex thing. But that might be unnecessarily 
difficult and easy to get wrong. So maybe we should just deal with this 
now.


Re: [whatwg] Fullscreen Update

2011-10-19 Thread James Graham

On 10/19/2011 06:40 AM, Anne van Kesteren wrote:


Is that an acceptable limitation? Alternatively we could postpone the
nested fullscreen scenario for now (i.e. make requestFullscreen fail if
already fullscreen).


I think punting on this makes sense. Pages can detect the failure and do 
something sane (make the element take the whole viewport size). If the 
feature becomes necessary we can add it in v2.
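
A hedged sketch of that fallback; the element variable is assumed, and the 
error event name follows the Fullscreen drafts of the time, so shipped 
(prefixed) implementations may differ:

  document.addEventListener("fullscreenerror", function () {
    // fall back to filling the viewport with CSS rather than true fullscreen
    elem.classList.add("fake-fullscreen");
  });
  elem.requestFullscreen();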




Re: [whatwg] Node inDocument

2011-08-30 Thread James Graham

On 08/30/2011 10:44 AM, Anne van Kesteren wrote:

On Tue, 30 Aug 2011 10:38:19 +0200, Jonas Sicking jo...@sicking.cc wrote:

In general I think it's better to have functions that deal with child
lists on Node rather than on Element/Document/DocumentFragment.

I think it might still make sense to have inDocument though. That'll
allow people to more clearly express what they are actually trying to
do, while allowing implementations to write faster code.


If we are going to have Node.contains implementations surely could
optimize document.contains(node) which seems as clear as node.inDocument
to me.


They are different in the case of multiple documents though. Which 
solution makes sense given the use cases? What are the use cases?
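
For reference, the two spellings being compared, as they would appear in 
script (inDocument is only a proposal, and node is an assumed variable):

  document.contains(node);  // true if node is this document or a descendant of it
  node.inDocument;          // proposed: true if the node is in *a* document tree,
                            // which is where the multiple-documents difference shows up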




Re: [whatwg] WebSocket framing

2011-08-22 Thread James Graham

On 08/22/2011 09:09 AM, Bronislav Klučka wrote:



On 21.8.2011 18:44, John Tamplin wrote:

On Sun, Aug 21, 2011 at 5:05 AM, Bronislav Klučka
bronislav.klu...@bauglir.com wrote:


Hello,
I'm looking at current WebSocket interface specification
http://www.whatwg.org/specs/web-apps/current-work/complete/network.html#the-websocket-interface


1/ and I'm missing the ability to specify, whether data to be sent are
final or not (whether the frame to be sent should be continuous or not)
I suppose some parameter for specifying final/continuing frame should be
there, or some new methods for framing. I've tried to send 100 MiB
text from
Chrome for testing purposes and since there is no way to specify
framing it
was send in one piece. There is no way to stream data from client to
server, all data must be kept in memory and sent at once.


The JS API is entirely at the message level -- the fact that it might be
fragmented into frames is an implementation detail of the browser and the
rest of the network.

That was not the point, the point was that current WebSocket JS API does
not allow streaming of data either from browser or from server (browser
sends data as one frame, and when receiving exposes to the programmer
the whole message at once regardless of how server sent the message). As
persistent and low latency connection, WS would be perfect for streaming
data. But due to this JS API inability to handle frames, one has to
implement streaming on server and client all over again even though the
protocol can be used.
So the point is, couldn't JS API use the ability of the protocol to ease
streaming implementation?


I imagine that at some point in the future we will expose an interface 
that is optimised for streaming data over websockets. But it seems 
foolhardy to do that before we have any real-world experience with the 
basic API. It is also something that will need to be designed to 
integrate with other streaming APIs in the platform, e.g. the audio and 
video stuff that is currently being discussed (mostly elsewhere).
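
For context, a minimal sketch of the message-level API as it stands; the 
payload variable is assumed, and any frame fragmentation happens inside the 
browser, invisible to script:

  var ws = new WebSocket("ws://example.com/socket");
  ws.onopen = function () {
    ws.send(hundredMegabyteString);  // queued and sent as one message; no way to stream it out in pieces
  };
  ws.onmessage = function (e) {
    e.data;  // likewise only delivered once the whole message has arrived
  };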


Re: [whatwg] window.status and window.defaultStatus

2011-07-25 Thread James Graham

On 07/25/2011 05:30 AM, Bjartur Thorlacius wrote:

Are JavaScript implementors willing to reimplement window.status? There
are obvious security problems with drawing an author-provided string
where a certain URI is expected, but could window.defaultStatus not set
the name (_NET_WM_NAME or equivalent) of the script's window and
window.status either override window.defaultStatus temporarily, or sent
to the user, e.g. through Growl or as a Windows toast.
The window name is already accessible to scripts (by modifying the text
child of title through the DOM) so no new security concerns are
introduced. The Growl binding might well be better by a new function,
though.


If you want OS-level notifications you might be interested in [1]

[1] http://dev.w3.org/2006/webapi/WebNotifications/


Re: [whatwg] Hashing Passwords Client-side

2011-06-20 Thread James Graham

On 06/17/2011 08:34 PM, Aryeh Gregor wrote:

On Thu, Jun 16, 2011 at 5:39 PM, Daniel Cheng dch...@chromium.org wrote:

A variation of this idea has been proposed in the past but was largely seen
as undesirable--see
http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2010-May/026254.html. In
general, I feel like the same objections are still true of this proposal.


This proposal is considerably better formulated than that one was.
But yes, in the end, the only real benefit is that the user can
confirm that their original plaintext password can only be retrieved
by brute-forcing the hash, which protects them only against reuse of
the password on different sites.  So on consideration, it will
probably lead more to a false sense of security than an actual
increase in security, yes.  It no longer seems like a good idea to me.


FWIW I disagree. The same argument could be used against client-side 
form validation since some authors might stop doing proper server-side 
validation. But, as in that case, there are definite end user benefits — 
I consider limiting the scope of attacks to just a single site even in 
the face of password reuse to be a substantial win — and the authors who 
are most likely to get the server-side wrong are the same ones who are 
already storing passwords in plain text.


Re: [whatwg] summary and details elements' specification

2011-04-11 Thread James Graham

On 04/11/2011 03:40 PM, Tomasz Jamroszczak wrote:


I've got another proposal for making summary and details easier to
implement and - what's more important - easier to understand and thus
easier to use.
Instead of making summary inside details working as legend inside
fieldset, we can throw away the details tag and make summary work
like label element with form element. There's no need for open
attribute, instead already existing hidden attribute can be used on
any HTML element. Clicking on summary adds or removes hidden
attribute from element with given id.

Here's default UA style:

summary {
display: list-item;
list-style-type: -o-disclosure-closed;
}
[hidden] {
display: none;
}

Here's example HTML:

<summary for=detailsId>This is summary. Click to show/close
details.</summary>
<div id=detailsId hidden>...</div>


That seems much harder for authors. In particular having to maintain the 
id references would be harder, not just for copy-paste. That sort of 
structure should only be used when it is really needed for the extra 
flexibility, but in this case it is clearly not; the possibility of 
having multiple elements between the disclosure element and the thing it 
discloses seems unnecessary.


I also imagine that in this case the author would typically wrap the 
whole structure in an extra div to get the grouping they want. They 
would then style the element in such a way as to reproduce something 
like the default styling proposed for details. So that part would 
shift complexity from implementors to authors, which is bad.



Pros:
1. Simple to understand by web page authors - its representation and
semantics play well together.


fwiw I find this much harder to understand; it depends on lots of low 
level mechanics rather than on a simple declaration.


Re: [whatwg] Styling details

2011-04-08 Thread James Graham

On 04/07/2011 05:55 PM, Tab Atkins Jr. wrote:

On Thu, Apr 7, 2011 at 6:09 AM, Lachlan Hunt lachlan.h...@lachy.id.au wrote:



3. We'd like to get some feedback from web developers, and agreement from
other browser vendors, about exactly which glyphs are most appropriate to
use for these disclosure states.  We considered two alternatives, but we
think these three glyphs are the most appropriate.

U+25B8 (▸) BLACK RIGHT-POINTING SMALL TRIANGLE
U+25C2 (◂) BLACK LEFT-POINTING SMALL TRIANGLE
U+25BE (▾) BLACK DOWN-POINTING SMALL TRIANGLE


Yup, looks good.


FWIW I don't think we need cross-browser agreement here. In particular I 
think browsers should be free to implement details using a 
platform-native disclose widget if they like. These are not all alike 
e.g. OSX uses something like ▸, Windows something like [+] (I think?) 
and Gnome (at least with the skin I have) something like ▷.


Re: [whatwg] WebSockets and redirects

2011-03-30 Thread James Graham

On 03/30/2011 12:12 AM, Jonas Sicking wrote:

But I'm totally fine with punting on this for the future and just
disallowing redirects on an API level for now.


Yes, I think this is the right thing to do at the moment.


Re: [whatwg] details, summary and styling

2011-03-29 Thread James Graham

On 03/29/2011 03:27 PM, Wilhelm Joys Andersen wrote:

Hi,

I'm currently writing tests in preparation for Opera's implementation
of details and summary. In relation to this, I have a few questions
about issues that, as far as I can tell, are currently undefined in the
specification.

The spec says:

If there is no child summary element [of the details element], the
user agent should provide its own legend (e.g. Details). [1]

How exactly should this legend be provided? Should the user agent add
an implied summary element to the DOM, similar to tbody, a
pseudo-element, or a magic non-element behaving differently from both
of the above?


FWIW I think that, from a spec point of view, it should just act as if 
the first block box container in the shadow tree contained some 
UA-provided text i.e. no magic parser behavior.



This indicates that it is slightly more magic than I would prefer. I
believe a closer resemblance to an ordinary element would be more
convenient for authors - a ::summary pseudo element with Details as
its content() might be the cleanest approach, although that would
require a few more bytes in the author's stylesheet to cater to both
author- and UA-defined summaries:

summary, ::summary {
color: green;
}


::summary could be defined to just match the first block box element in 
the details shadow tree. That way you could just write


::summary {color:green}

for both cases. I note that optimising for the non-conforming case seems 
a bit unnecessary, however.



That's a rather small clickable area, which might get troublesome to hit
on a fuzzy touchscreen or for someone with limited motor skills. I suggest
the whole block area of summary, too, is made clickable - as if it was
a label for the ::marker.


Making the whole ::summary clickable would seem consistent with the rest 
of the platform where labels are typically clickable.


Re: [whatwg] Interpretation issue: can section be used for extended paragraphs?

2011-03-10 Thread James Graham

On 03/10/2011 09:20 AM, Jukka K. Korpela wrote:


My question is: Is this acceptable use of the SECTION element, even in a
flow that mostly consists of P elements, not wrapped inside SECTION
elements of their own?


If I understand you correctly, it is not the intended use of section — 
i.e. section conveys a different semantic to the one that you want — 
and could have a number of undesirable consequences. In particular it 
would insert a (presumably untitled) entry into the document outline.


I don't think a solution to your problem currently exists. I am somewhat 
skeptical that a solution is urgently required (that is, I don't think I 
have used a tool that *actually* fails if I have to split a paragraph to 
accommodate a list).


Re: [whatwg] Optional non-blocking mode for simple dialogs (alert, confirm, prompt).

2011-03-01 Thread James Graham

On 03/01/2011 04:50 PM, Ben Rimmington wrote:


However, some mobile platforms have a local notification service [3]
[4] [5] [6]. A new window.notify() function might be useful, so that
a background card/tab/window can display a message to the user.


See [1] for the current state-of-play in giving access to system 
notification mechanisms.


[1] 
http://dev.w3.org/2006/webapi/WebNotifications/publish/Notifications.html


Re: [whatwg] Proposal for separating script downloads and execution

2011-02-11 Thread James Graham

On 02/11/2011 04:40 PM, Nicholas Zakas wrote:

We've gone back and forth around implementation specifics, and now
I'd like to get a general feeling on direction. It seems that enough
people understand why a solution like this is important, both on the
desktop and for mobile, so what are the next steps?


I think the first step would be to produce some performance data to 
indicate the actual bottleneck(s) in different configurations (browsers, 
devices, scripts, etc.). Unless I missed something (quite possible, the 
thread has been long), the only data so far presented has been some 
hearsay about gmail on some unknown hardware/browser combination.


Re: [whatwg] Web DOM Core feedback

2011-01-14 Thread James Graham

On 01/13/2011 10:05 PM, Aryeh Gregor wrote:


In defining the interface for Node, some of the attributes are defined
like The parentElement attribute must return the parent node of the
context node if there is a parent and it is an Element node, or null
otherwise. while others are defined like

The parentNode attribute must run these steps:

1. If the context node does not have a parent node, return null and
terminate these steps.
2. Return the parent node of the context node.

They seem to be equivalent, but the first way is shorter.


IMHO the second is clearer (I also note that they do not seem to be 
equivalent in this specific case).
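
A concrete case where the two attributes differ, for illustration (runnable 
in any document):

  var html = document.documentElement;
  html.parentNode;     // the Document node
  html.parentElement;  // null, because the parent exists but is not an Element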



There are a bunch of places where it says When invoked with the same
argument the same NodeList object may be returned as returned by an
earlier call.  Shouldn't this be either required or prohibited in any
given case, not left undefined?


It seems like making this a requirement would interact badly with GC 
e.g. if I have some call that produces a huge NodeList that is then not 
referenced, I don't want to keep it around just in case some future 
script returns the same NodeList. On the other hand, there are scripts 
that put calls returning identical NodeLists in inner loops. In these 
cases not recreating the object every time is a big performance win.


Re: [whatwg] WebSRT feedback

2010-10-07 Thread James Graham

On 10/06/2010 04:04 AM, Philip Jägenstedt wrote:


As an aside, the idea of using an HTML parser for the cue text wasn't
very popular.


Why? Were any technical reasons given?



Finally, some things I think are broken in the current WebSRT parser:


One more from me: the spec is unusually hard to follow here since it 
makes extensive use of goto for flow control. Could it not be 
restructured as a state machine or something so it is easier to follow 
what is going on?


Re: [whatwg] input element's value should not be sanitized during parsing

2010-09-21 Thread James Graham

On Mon, 20 Sep 2010, Mounir Lamouri wrote:


Hi,

For a few days, Firefox's nightly had a bug related to value sanitizing
which happens to be a specification bug.
With the current specification, these two elements will not have the
same value:
<input value="foo&#13;bar" type='hidden'>
<input type='hidden' value="foo&#13;bar">
Depending on how the attributes are read, value will be set before or
after type, thus, changing the value sanitization algorithm. So, the
value sanitization algorithm of input type='text' will be used for one
of these elements and the value will be foobar.

The following change would fix that bug:
- The specification should add that the value sanitization algorithm
should not be used during parsing/as long as the element hasn't been
created.
OR
- The specification should add in the set value content attribute
paragraph that the value sanitization algorithm should not be run during
parsing/if the element hasn't been created.

For a specification point of view, both changes would have the same result.

The specifications already require that the value sanitization algorithm
should be run when the element is first created.
So, with this change, the element's value will be un-sanitized during
parsing and as soon as the parsing will be done, the element's value
will be sanitized.


The concept of Creating an Element already exists [1] and is atomic, 
that is the element is created with all its attributes in a single 
operation. Therefore it is not clear to me how attribute order can make a 
difference per spec. Am I missing your point?


[1] 
http://www.whatwg.org/specs/web-apps/current-work/multipage/tokenization.html#creating-and-inserting-elements


Re: [whatwg] input element's value should not be sanitized during parsing

2010-09-21 Thread James Graham

On 09/21/2010 10:12 AM, Boris Zbarsky wrote:

On 9/21/10 4:06 AM, James Graham wrote:

The concept of Creating an Element already exists [1] and is atomic,


Where does it say that it's atomic? I don't see that anywhere (and in
fact, the create an element code in the Gecko parser is most decidedly
non-atomic). Now maybe the spec intends this to be an atomic operation;
if so it needs to say that.


It is described as a single step in the spec, which I take to imply that 
it should behave as a single operation from the point of view of the 
rest of the spec. Of course I am not against this being made clearer.


[whatwg] Communicating between different-origin frames

2010-07-14 Thread James Graham
Following some discussion of [1], it was pointed out to me that it is 
possible to make two pages on separate subdomains communicate without 
either setting their document.domain by proxying the communication 
through pages that have set their document.domain. There is a demo of 
this at [2].


I'm not sure if this is already well-known nor whether it is harmless or 
not.


[1] 
http://my.opera.com/hallvors/blog/2010/07/13/ebay-versus-security-policy-consistency

[2] http://sloth.whyi.org/~jl/cross-domain.html


[whatwg] keygen [was: Re: Headings and sections, role of H2-H6]

2010-05-01 Thread James Graham



On Sat, 1 May 2010, Nikita Popov wrote:

I do not deny that keygen has its use cases (the "nobody" was hyperbolic). 
I only think that the use cases are *very* rare. It is overkill to introduce 
an HTML element therefore. It would be much more sane to provide a JS API (as 
Janos proposed.) [I would do it myself, but I have only very little knowledge 
on encryption.]


No one is introducing anything. It has already existed for years. We can 
either document it or force new entrants to the market to reverse engineer 
it on their own. "Document reality rather than ignoring it" is one of the 
fundamental purposes of this effort.


Re: [whatwg] Headings and sections, role of H2-H6

2010-04-29 Thread James Graham

On 04/29/2010 01:47 AM, Jesse McCarthy wrote:


I see why H2-H6 are retained for certain uses, but -- except in an
HGROUP -- there's no good reason to use H2-H6 when writing new code with
explicitly marked-up sections, is there?


Support for legacy clients (e.g. current AT) that has not been updated 
to understand the HTML5 outline algorithm.


Re: [whatwg] Dealing with Stereoscopic displays

2010-04-28 Thread James Graham

On 04/28/2010 10:39 AM, Eoin Kilfeather wrote:

Well, I agree that the web author shouldn't worry about how it is
achieved, but would it not be the case that the author needs to indicate
which view is for which display? That is to say the author would be
required to flag the output for correct routing to the virtual
display. Is it beyond the scope to the specification to indicate a
normative way of doing this?


I think the idea is that rather than the author manually producing 
different content for each display, the 3D positional information in the 
underlying format (e.g. WebGL) would be used by the browser to 
automatically create a 3D view on the hardware available.


Re: [whatwg] Adding ECMAScript 5 array extras to HTMLCollection (ATTN IE TEAM - TRAVIS LEITHEAD)

2010-04-28 Thread James Graham

On 04/28/2010 10:27 AM, David Bruant wrote:


When I started this thread, my point was to define a normalized way
(through ECMAScript binding) to add array extras to array-like objects
in the scope of HTML5 (HTMLCollection and inheriting interfaces).
I don't see any reason yet to try to find a solution to problems that
are in current web browsers.
Of course, if/when a proposal emerges from this thread and some user
agent accept to implement it, a workaround (probably, feature detection)
will have to be found to use the feature in user agents that implement
it and doing something equivalent in web browsers that don't.


To be clear the proposals in this thread are pure syntactic sugar; they 
don't allow you to do anything that you can't already do like:


Array.prototype.whatever.call(html_collection, arg1, arg2, ...)

where whatever is the array method you are interested in.
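
For concreteness, a sketch of the same pattern with a real method (the 
element and property picked here are arbitrary):

  var anchors = document.getElementsByTagName("a");  // an HTMLCollection
  var hrefs = Array.prototype.map.call(anchors, function (a) {
    return a.href;                                   // array extras already work via call()
  });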

Of course there is nothing wrong with making the syntax more natural if 
it can be done in a suitably web-compatible way. However it seems more 
sensible to do this at a lower level e.g. as part of Web DOM Core. Sadly 
that spec is in need of an editor.


Re: [whatwg] article/section/details naming/definition problems

2009-09-16 Thread James Graham

Keryx Web wrote:

2009-09-16 03:08, Ian Hickson skrev:


I'd like to renamearticle, if someone can come up with a better word
that means blog post, blog comment, forum post, or widget. I do think
there is an important difference between a subpart of a page that is
a potential candidate for syndication, and a subsection of a page that
only makes sense with the rest of the page.

Cheers,


Has entry been discussed? (Shamelessly stolen from Atom.)


Dunno about discussed, but I had the same idea*. It seems like it might 
help people understand where article is supposed to be used since 
articles are used in cases where content could stand alone, for example 
in syndication.



*No really, check the IRC logs ;)



Re: [whatwg] [html5] r3820 - [e] (0) step/min/max examples.

2009-09-13 Thread James Graham

Quoting Simon Pieters sim...@opera.com:


On Sun, 13 Sep 2009 10:52:18 +0200, Ian Hickson i...@hixie.ch wrote:


s/2000/1999/


Since when?


Oops. I thought the 21st century started 2000, but it seems I was wrong.


Since almost everyone uses the zero-based-century convention it would  
be much less confusing to simply use a different example in which the  
common and pedantic definitions don't conflict.




Re: [whatwg] Web Storage: apparent contradiction in spec

2009-08-31 Thread James Graham

Quoting Ian Hickson i...@hixie.ch:


On Tue, 25 Aug 2009, Jens Alfke wrote:
Potential result: I was having trouble logging into FooDocs.com, so my friend
suggested I delete the cookies for that site. After that I could log in, but
now the document I was working on this morning has lost all the changes I
made! How do I get them back?

I suggest that the sub-section Treating persistent storage as cookies of
section 6.1 be removed.


We can't treat cookies and persistent storage differently, because
otherwise we'll expose users to cookie resurrection attacks. Maintaining
the user's expectations of privacy is critical.


I think the paragraph under treating persistent storage as cookies  
should simply be removed. The remainder of that section already does  
an adequate job of explaining the privacy implications of persistent  
storage. The UI should be entirely at the discretion of the browser  
vendor since it involves a variety of tradeoffs, with the optimum  
solution depending on the anticipated user base of the browser.  
Placing spec requirements simply limits the abilities of browser  
vendors to find innovative solutions to the problem. In addition,  
since there is no interoperability requirement here, using RFC 2119  
language seems inappropriate; especially since the justification given  
is rather weak (this might encourage users?) and not supported by  
any evidence.


As to what browser vendors should actually _do_, it seems to me that  
the user's expectations of privacy is actually an illusion in this  
case; all the bad stuff that can be done with persistent storage can  
already be done using a variety of techniques. Trying to fix up this  
one case seems like closing the stable door after the horse has  
bolted. Therefore the delete local storage when you delete cookies  
model seems flawed, particularly as it can lead to the type of problem  
that Jens described above.


On a slightly different topic, it is unclear what the relationship  
between the statement in section 4.3 User agents should expire data  
from the local storage areas only for security reasons or when  
requested to do so by the user and the statement in section 6.1 User  
agents may automatically delete stored data after a period of time.  
is supposed to be. Does the latter count as a security reason?




Re: [whatwg] Web Storage: apparent contradiction in spec

2009-08-27 Thread James Graham

Adrian Sutton wrote:

On 27/08/2009 15:47, Maciej Stachowiak m...@apple.com wrote:

- Cached for convenience - discarding this will affect performance but not
functionality.
- Useful for offline use - discarding this will prevent some data from being
accessed when offline.
- Critical for offline use - discarding this will prevent the app storing this
data from working offline at all.
- Critical user data - discarding this will lead to permanent user data loss.


The only catch being that if the web app decides this for itself, a
malicious script or tracking cookie will be marked as critical user data
when in fact the user would disagree.

On the plus side, it would mean a browser could default to not allowing
storage in the critical user data by default and then let users whitelist
just the sites they want.  This could be through an evil dialog, or just a
less intrusive indicator somewhere - the website itself would be able to
detect that it couldn't save and warn the user in whatever way is most
appropriate.


I don't fancy having to explain to my Mum that she has to go through 
some complex (to her) sequence of operations to see if a site is storing 
her important data somewhere where it might be deleted or in some secure 
area. Nor do I fancy explaining the procedure for changing between one 
and the other. I don't really see how the site could help either. I 
guess it might be possible for it to put up a your data is stored in a 
non-persistent way message, but instructions to change to persistent 
storage would have to be per-browser and possibly per browser version; 
no good for the people who don't know the difference between the 
browser, the internet and google.


I can't imagine how to make this simple enough for end users without all 
data being persistent by default. Even then, knowing how to clear out 
data once the quota is hit is likely to be difficult and confusing.


Re: [whatwg] Serving up Theora video in the real world

2009-07-10 Thread James Graham

Robert O'Callahan wrote:

On Fri, Jul 10, 2009 at 7:36 PM, James Graham jgra...@opera.com wrote:


Is there a good reason to return the empty string rather than false? The
empty string seems very unhelpful to authors since it doesn't play nicely
with debugging prompts and is non-obvious to infer meaning from, which is
likely to confuse novices who are e.g. playing with the API in an
interactive console session.



Returning false makes it difficult to bind to languages other than JS. What
would you write for the return type in IDL?


Is there any good reason to worry about languages other than javascript? 
Writing APIs that work well in the one language implemented in web 
browsers seems better than writing mediocre APIs that can be used in 
many other languages. I'm not sure what is needed for IDL to cope with 
this though.
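
For reference, the shape of the API being questioned; video is an assumed 
media element and the MIME string is just an example:

  video.canPlayType('video/ogg; codecs="theora, vorbis"');
  // returns "probably", "maybe" or the empty string -- it is that empty
  // string, invisible in a debugging prompt, that the question above is about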




Re: [whatwg] Non-ecmascript bindings (was Re: Serving up Theora video in the real world)

2009-07-10 Thread James Graham

Quoting Kartikaya Gupta lists.wha...@stakface.com:

Really, it's not that much work to make sure the API can have   
bindings in other languages. As long as you can write WebIDL for it   
(and provide relevant DOM feature strings wherever necessary), you   
should get it for free. I would also argue that considering other   
languages forces you to think more about how the API may be (ab)used  
 and therefore results in a better and more robust API, even if it  
is  never actually implemented in other languages.


It's not about whether it is a lot of work; it's about whether the API  
matches the typical programming style and feature set of the target  
language. Where this doesn't happen (as in much of the DOM), the API  
ends up feeling clunky and difficult to use. My experience with  
dom-equivalent APIs that have been designed to fully take advantage of  
the target language capabilities is that they are much more pleasant  
to use than the equivalent DOM APIs. Indeed one of the first things  
that most javascript libraries do is replace most of the DOM  with  
their own API. This hardly seems like a ringing endorsement of the  
design strategy that gave us the DOM. I don't think it is sensible to  
optimise for the few people for whom the cross-language* approach is  
convenient at the expense of the many or whom it is bad.


Sadly it seems that canPlayType is going to be another hacky-feeling  
API because of cross-language considerations and because the problems  
with it were not picked up soon enough :(


*I idly note that DOM seems entirely unsuited to some languages so it  
is only really cross-language to the extent that it can be implemented  
in any language where you can mimic the style of java.




Re: [whatwg] [html5] Pre-Last Call Comments

2009-06-03 Thread James Graham

Kristof Zelechovski wrote:

Regarding
http://www.whatwg.org/specs/web-apps/current-work/multipage/infrastructure.html#weeks:
A week begins on Sunday, not on Monday.


Not according to ISO [1]

[1] http://en.wikipedia.org/wiki/ISO_week_date



Re: [whatwg] on bibtex-in-html5

2009-06-02 Thread James Graham

Bruce D'Arcus wrote:

So exactly what is the process by which this gets resolved? Is there one?


Hixie will respond to substantive emails sent to this list at some 
point. However there are some hundreds of outstanding emails (see [1]) 
so the responses can take a while. If you have a pressing deadline that 
would benefit from your issue being addressed sooner, I suggest you talk 
to Hixie about it.


FWIW I have a few general thoughts about the bibtex section which may or 
may not be interesting:


1) It seems like this and similar sections (bibtex, vCard, iCalendar) 
could be productively split out of the main spec into separate normative 
documents, since they are rather self-contained and have rather obvious 
interest for communities who are unlikely to find them at present or to 
be interested in the rest of the spec. Although the drag and drop stuff 
being dependent on them does mean that you'd need some circular references.


2) For the bibliographic data the most important issues that I see are 
ease of use and ease of export. Although I am not attached to the bibtex 
format per-se I would be extremely disappointed if a different, harder 
to author, format were used. Formats that are flexible but rarely used 
are less useful overall than more limited formats with ubiquitous 
deployment. In addition formats that are hard to use make it more likely 
that people will make accidental mistakes, so decreasing the reliability 
of the data and devaluing tools that consume the data.


Although I don't think we have to use bibtex as the basis for the 
format, I do think a canonical mapping to bibtex is a requirement. 
Obviously this reflects my background in the physical sciences but, at 
least in that field LaTeX and, by association, bibtex are overwhelmingly 
popular. I am well aware that the situation in other fields is different 
but without clean, high fidelity, bibtex export (at least to the extent 
required to support common citation patterns within the physical 
sciences) the format will lose out on a large audience with a higher 
than average number of potential early adopters.


[1] http://www.whatwg.org/issues/data.html



On Sun, May 24, 2009 at 10:17 AM, Bruce D'Arcus bdar...@gmail.com wrote:

On Sat, May 23, 2009 at 5:35 PM, Ian Hickson i...@hixie.ch wrote:

...


I agree that BibTeX is suboptimal. But what should we use instead?

As I've suggested:

1) use Dublin Core.

This gives you the basic critical properties: literals for titles and
dates, and relations for versions, part/containers, contributors,
subjects.

You then have a consistent and general way to represent (HTML)
documents and embedded references to other documents, etc. (citation
references). This would cover the most important areas that BibTeX
covers.

2) this goes far, but you're then left with a few missing pieces for citations:

a. more specific contributors (like editors and translators)
b. identifiers (there's dc:identifier, but no way to explicitly denote
that it's a doi, isbn, issn, etc.)
c. what I call locators; volume, issue, pages, etc.
d. types (book, article, patent, etc.)

If there's some consensus on this basic way forward, we can talk about
details on 2.

Bruce





Re: [whatwg] Annotating structured data that HTML has no semantics for

2009-05-14 Thread James Graham

jgra...@opera.com wrote:

Quoting Philip Taylor excors+wha...@gmail.com:


On Sun, May 10, 2009 at 11:32 AM, Ian Hickson i...@hixie.ch wrote:


One of the more elaborate use cases I collected from the e-mails sent in
over the past few months was the following:

  USE CASE: Annotate structured data that HTML has no semantics for, and
  which nobody has annotated before, and may never again, for private 
use or

  use in a small self-contained community.

[...]

To address this use case and its scenarios, I've added to HTML5 a simple
syntax (three new attributes) based on RDFa.


There's a quickly-hacked-together demo at
http://philip.html5.org/demos/microdata/demo.html (works in at least
Firefox and Opera), which attempts to show you the JSON serialisation
of the embedded data, which might help in examining the proposal.


I have a *totally unfinished* demo that does something rather similar
at [1]. It is highly likely to break and/or give incorrect results**.
If you use it for anything important you are insane :)


I have now added extremely preliminary RDF support with output as N3 and 
 RDF/XML courtesy of rdflib. It is certain to be buggy.


Re: [whatwg] Spec should require UAs to have control to mute/ pause audio/ video

2009-05-07 Thread James Graham

Bruce Lawson wrote:

This may already be in the spec, but I couldn't find it.

I think the spec should explicity require UAs to provide a mehanism to
mute audio and to pause video, even if the controls attribute is not set.


This would not make sense in some situations e.g. for a UA designed to 
play in-store video adverts (which I hate by the way :) ). In general 
the spec does not and should not mandate or constrain UI, although it 
does sometimes make suggestions (in general I am rather against this 
because the people that are good at writing technical specifications are 
rarely the same people who are good at designing UI, so such suggestions 
are often not that great. And UI is an area in which browsers should be 
allowed to compete).
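
(For concreteness, the kind of embedding at issue, with no controls
attribute, so that whether the UA still exposes pause or mute is purely a
UI decision; the file name is invented:)

  <video src="in-store-advert.ogv" autoplay loop></video>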


Re: [whatwg] native ordered dictionary data type in web storage draft

2009-04-14 Thread James Graham

Aryeh Gregor wrote:

On Tue, Apr 14, 2009 at 10:18 AM, Patrick Mueller
pmue...@muellerware.org wrote:

This is the first time I've seen the requirement for such a beast.  You can
understand the desire for it, given the context, but still.  Does anything
else in JavaScript make use of such a data structure?


It says that JavaScript should just use Object.  Isn't that,
essentially, an ordered dictionary?


Yes. Indeed there are compatibility requirements for the ordering of 
ordinary user-created Object objects in web browser implementations; the 
order of enumeration must be the same as the order of insertion of the 
properties.
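
(A small illustration of that compatibility requirement; string-keyed
properties come back in insertion order, though modern engines enumerate
integer-like keys in numeric order first:)

  <script>
   var o = {};
   o.second = 2;
   o.first = 1;
   var keys = [];
   for (var k in o) { keys.push(k); }
   // keys.join(",") === "second,first" : insertion order, not alphabetical
  </script>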


Re: [whatwg] About Descendent Tags

2009-04-07 Thread James Graham

Diego Eis wrote:

Is this not correct in HTML4?
<h1>Romeo and Juliet</h1>
<h3>a tragedy in Italian style</h3>


If you fed that markup into a tool that produced the outline of the 
document (e.g. for a screen reader, a toc generator or an ordinary 
browser navigation aid), it would look something like


+Romeo and Juliet
+--+--a tragedy in Italian style

Which isn't right; there is no subsection of the document called "a 
tragedy in Italian style". The idea of <header> is that you should be 
able to say:


<header>
<h1>Romeo and Juliet</h1>
<h3>a tragedy in Italian style</h3>
</header>

And get an outline like:

+Romeo and Juliet

As I have pointed out elsewhere the <header> element appears to be very 
confusingly named, hence I advocate introducing <hgroup> for this use 
case and either using <header> to mean the generic top matter of the 
document or finding some other, less ambiguous, name to mean the same 
thing.
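
(A sketch of the <hgroup> variant of the example above, assuming it groups
the headings so that only the first contributes a section to the outline:)

  <hgroup>
   <h1>Romeo and Juliet</h1>
   <h3>a tragedy in Italian style</h3>
  </hgroup>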


Re: [whatwg] Input type for phone numbers

2009-03-31 Thread James Graham

Markus Ernst wrote:
So, while e-mail addresses have a strictly defined format, this does not 
apply to phone numbers. Internationalisation would be necessary to 
validate them, and still it would be a hard task, as complete sets of 
valid formats might not be available for every country.


FWIW I would imagine that the most useful aspect of <input type=tel> 
or whatever would not be validation (because validation is hard) but 
would be better integration on mobile devices e.g. making the default 
action of the keypad be number keys, making phone numbers from the 
contacts list available, etc. (these were both pointed out already). 
Therefore whilst I totally recommend this feature be postponed for 
HTML6, I think it makes a lot of sense and that problems with validation 
are a red herring.
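
(A minimal sketch of the markup in question; the autocomplete token is a
later addition to the platform and is included only as an assumption about
how contact-list integration might be hooked up:)

  <label>Phone number:
   <input type="tel" name="phone" autocomplete="tel">
  </label>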


Re: [whatwg] C:\fakepath\ in HTML5

2009-03-24 Thread James Graham

Randy Drielinger wrote:
So instead of fixing the web, we're fixing the spec (and thus 
implementing fakepath in browsers)?


It's purely a question of what browser makers are prepared to implement. 
The spec has to reflect a consensus amongst browser makers so that it 
actualy gets implemented, otherwise it is no use to anyone. If you don't 
want the fakepath thing (and I agree it is ugly), try convincing the 
known-broken sites to change (citing the fact that they may break in 
Firefox could give you quite some leverage here). The fewer compat. 
reasons there are to keep the ugly solution, the easier it is to have 
the pretty solution.
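
(For concreteness, what the compromise looks like from script; the file
name is invented, and the prefix is the literal string required by the
spec rather than a real path:)

  <input type="file" id="upload">
  <script>
   // After the user picks photo.jpg, reading the value gives:
   // document.getElementById("upload").value === "C:\\fakepath\\photo.jpg"
  </script>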


Re: [whatwg] Historic dates in HTML5

2009-03-05 Thread James Graham

Philip Taylor wrote:


and make sure their stylesheets use the selector .time instead of
time, to guarantee everything is going to work correctly even with
unexpected input values.

So the restriction adds complexity (and bugs) to code that wants to be
good and careful and generate valid markup.



On the other hand the python datetime class doesn't seem to support 
years <= 0 at all so consuming software written in python would have to 
re-implement the whole datetime module, potentially causing 
incompatibilities with third party libraries that expect datetimes to 
have year > 0. This seems like a great deal more effort than simply 
checking that dates are in the allowed range before serializing or 
consuming them in languages that do support years <= 0.
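
(The kind of value at issue, a pre-year-1 date written with the
astronomical convention in which year 0 is 1 BC; shown only to illustrate
what the restriction under discussion excludes, not as conforming markup:)

  <time datetime="-0043-03-15">the Ides of March, 44 BC</time>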


Re: [whatwg] Video playback quality metric

2009-02-10 Thread James Graham

Jeremy Doig wrote:

Measuring the rate at which the playback buffer is filling/emptying gives a
fair indication of network goodput, but there does not appear to be a way to
measure just how well the client is playing the video itself. If I have a
wimpy machine behind a fat network connection, you may flood me with HD that
I just can't play very well. The cpu or video card may just not be able to
render the video well. Exposing a metric (eg: Dropped Frame count, rendered
frame rate) would allow sites to dynamically adjust the video which is being
sent to a client [eg: switch the url to a differently encoded file] and
thereby optimize the playback experience.
Anyone else think this would be good to have?


It seems like, in the short term at least, the "worse is better" 
solution to this problem is for content providers to provide links to 
resources at different quality levels, and allow users to choose the 
most appropriate resource based on their internet connection and their 
computer rather than having the computer try to work it out for them. 
Assuming that the majority of users use a relatively small number of 
sites with the resources to provide multiple-quality versions of their 
videos and use a small number of computing devices with roughly 
unchanging network conditions (I imagine this scenario applies to the 
majority of non-technical users), they will quickly learn which versions 
of the media work best for them on each site. Therefore the burden of 
this simple approach on end users does not seem to be very high.


Given this, I would prefer automatic quality negotiation be deferred to 
 HTML6.
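
(A sketch of the "worse is better" approach: the same clip offered at
several quality levels with the user doing the choosing; file names are
invented:)

  <video src="clip-medium.ogv" controls></video>
  <p>Also available in
   <a href="clip-low.ogv">low</a> and
   <a href="clip-high.ogv">high</a> quality.</p>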


Re: [whatwg] [html5] Semantic elements and spec complexity

2009-02-10 Thread James Graham


Since <header> is intended to be useful to make subheaders not appear in 
the ToC, the move from


  <h1>Foo</h1>
  <h2>Bar</h2>

to

  <header>
   <h1>Foo</h1>
   <h2>Bar</h2>
  </header>
shouldn't, IMHO, result in ugly borders that everyone has to nuke 
(compare with <img border=0>).


Yeah, that's a good point. I've left it at just display:block.
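
(That is, the suggested default rendering amounts to nothing more than:)

  <style>
   header { display: block; }
  </style>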



Four-and-a-bit years on I tend to agree :)


Re: [whatwg] Spellchecking mark III

2009-01-21 Thread James Graham

Mikko Rantalainen wrote:

My second sentence was trying to argue that the page author has no
business forcing the spellchecking on if the page author cannot force
the spellchecking language! Especially for a case where the page
contains a mix of multiple languages.


Not really. Consider e.g. flickr in which photos may be given titles, 
descriptions and comments in the language of the user's choice but the 
site UI is not localised. If flickr decided to do <input type=text 
lang=en> to get spellchecking to turn on for photo titles then that would 
be much worse for the large number of non-native English speakers than 
<input type=text spellcheck=on> which would likely use the user's 
preferred dictionary (although this would be UA-dependent of course).


For another example, consider the case where I post on a Swedish forum 
in English, knowing that the general level of English in Sweden is 
excellent and in any case better than the level of my Swedish.


It doesn't seem reasonable to expect sites to always be localised or for 
 sites accepting multilingual user generated content to not exist. 
Therefore it seems totally counterproductive from the point of view of 
people communicating in less dominant languages to require spellchecking 
to be tied to language.
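
(To make the flickr example concrete; note that the spellcheck attribute
as eventually specified takes "true"/"false" rather than "on"/"off", so
the value below reflects the shipped attribute rather than the wording in
this thread:)

  <!-- ties checking to a language the commenter may not be writing in: -->
  <input type="text" name="title" lang="en">

  <!-- requests checking but leaves the dictionary choice to the UA/user: -->
  <input type="text" name="title" spellcheck="true">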




[whatwg] Name for WHATWG Members

2009-01-17 Thread James Graham
There seems to be some confusion about whether members of WHATWG are 
just people on the mailing list or are people on the oversight 
committee. Since it is almost never necessary to discuss the oversight 
committee I suggest it is worth using the common term members to mean 
people on the mailing list and the longer term oversight committee 
members to mean people in the oversight committee. This would eliminate 
the confusion and draw attention to the fact that it is the people on 
the mailing list who are responsible for the technical content of the 
spec (like W3C Working group members) and that the oversight committee 
who are responsible only for things like the charter (like W3C staff).


Re: [whatwg] Fuzzbot (Firefox RDFa semantics processor)

2009-01-13 Thread James Graham

Giovanni Gentili wrote:


Why must we restrict the use case to a single vocabulary
or analyze all the possible vocabularies?

I think it would be better to generalize the problem
and find a single solution for both humans and machines.


The issue when trying to abstract problems is that you can end up doing 
architecture astronautics; you concentrate on making generic ways to 
build solutions to weakly constrained problems without any attention to 
the details of those problems that make them unique. The solutions that 
are so produced often have the theoretical capacity to solve broad 
classes of problem, but are often found to be poor at solving any 
specific individual problem.


By looking at actual use cases we can hope to retain enough detail in 
the requirements that we satisfy at least some use cases well, rather 
than wasting our time building huge follies that serve no practical 
purpose to anyone.

