Re: [whatwg] HTML tags. Panorama, Photo Sphere, Surround shots

2014-11-19 Thread David Dailey


Sent: Monday, November 17, 2014 1:10 PM
Tab Atkins Jr. wrote:
To: Biju
Cc: whatwg
Subject: Re: [whatwg] HTML tags. Panorama, Photo Sphere, Surround shots

On Sun, Nov 16, 2014 at 4:38 PM, Biju bijumaill...@gmail.com wrote:
 New cameras/phone cameras come with Panorama, Photo Sphere, and Surround 
 shot options, but there is no standard way to display such images on a 
 webpage. Can the WHATWG standardize this and provide HTML tags?


 Photo Sphere https://www.google.com/maps/about/contribute/photosphere/
 Surround shot http://www.samsung.com/us/support/faq/FAQ00057110/74008

These are just alternate image formats, yes?  In that case, browsers can expand 
their img support to allow pointing to these kinds of files.  They'd need 
some sort of native controls on the img element, I suppose.

---
I remember raising a similar issue with the W3C/WHATWG/HTML5 WG circa 2007. At the 
time I seem to recall language in the spec saying that img formats (and perhaps, 
by extension, image formats in SVG) were intrinsically rectangular. Perhaps 
I'm imagining it, though.

Cheers
D




Re: [whatwg] URL interop status and reference implementation demos

2014-11-19 Thread James Graham
On 18/11/14 23:14, Sam Ruby wrote:
 Note: I appear to have direct update access to urltestdata.txt, but I
 would appreciate a review before I make any updates.

FYI, all changes to web-platform-tests* are expected to go via a GitHub pull
request with an associated code review, conducted by someone other than
the author of the change, either on GitHub or at some other public
location (e.g. Critic, a bug in Bugzilla, etc.) (cf. [1])

* With a few exceptions that are not relevant to the current case e.g.
bumping the version of submodules.

[1] http://testthewebforward.org/docs/review-process.html



Re: [whatwg] URL interop status and reference implementation demos

2014-11-19 Thread Sam Ruby

On 11/18/2014 06:37 PM, Domenic Denicola wrote:

Really exciting stuff :D. I love specs that have reference implementations and 
strong test suites, and I am hopeful that as URL gets fixes and updates these 
will stay in sync, e.g. by following the normal software development practice 
of not changing anything without a test.


Thanks!  I've tried to follow the example that the streams spec is 
providing, including the naming of directories.



From: whatwg [mailto:whatwg-boun...@lists.whatwg.org] On Behalf Of Sam Ruby


https://url.spec.whatwg.org/interop/urltest-results/


I'd be interested in a view that only contains refimpl, ie, safari, firefox, 
and chrome, so we could compare the URL Standard with living browsers.


Done, sort-of: https://url.spec.whatwg.org/interop/browser-results/

I note that given the small amount of data, the 'agents with 
differences' column is less useful than it could be.  Basically, a 
reddish color should be interpreted to mean that we don't have three out 
of the four browsers agreeing on all values.
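The "three out of the four" rule described here could be sketched as follows. This is a hypothetical helper, not the actual report-generating code; the function name, the data shape, and the example values are all assumptions for illustration:

```javascript
// Hypothetical sketch of the majority-agreement check: a row is
// flagged (shown reddish) unless at least three of the four browsers
// produced identical results for the test case.
function majorityAgrees(results, threshold = 3) {
  // results: map of agent name -> canonical serialization of its parse
  const counts = new Map();
  for (const value of Object.values(results)) {
    counts.set(value, (counts.get(value) || 0) + 1);
  }
  return Math.max(...counts.values()) >= threshold;
}

// Example: one browser disagrees, but three still agree -> no flag.
const row = {
  ie: "http://example.com/a",
  firefox: "http://example.com/a",
  chrome: "http://example.com/a",
  safari: "http://example.com/A",
};
console.log(majorityAgrees(row)); // true
```

With a 2-2 split (no three agents agreeing), the same helper returns false and the row would be flagged.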



I'd like to suggest that the following test be added:

https://github.com/rubys/url/blob/peg.js/reference-implementation/test/moretestdata.txt

And that the expected results be changed on the following tests:

https://github.com/rubys/url/blob/peg.js/reference-implementation/test/patchtestdata.txt

Note: I appear to have direct update access to urltestdata.txt, but I would 
appreciate a review before I make any updates.


A pull request with a nice diff would be easy to review, I think?


Done.  https://github.com/w3c/web-platform-tests/pull/1402


The setters also have unit tests:

https://github.com/rubys/url/blob/peg.js/reference-implementation/test/urlsettest.js
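The kind of behaviour such setter tests exercise can be illustrated with the standard URL API itself. This is a generic sketch, not taken from urlsettest.js:

```javascript
// Illustrative URL setter check: each setter re-parses its input and
// the result is reflected in the re-serialized href.
const url = new URL("http://example.com/dir/page?x=1#top");

url.pathname = "/other";
url.search = "?y=2";
url.hash = "#bottom";

console.log(url.href); // "http://example.com/other?y=2#bottom"
```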


So good!

For streams I am running the unit tests against my reference implementation on 
every commit (via Travis). Might be worth setting up something similar.
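For a Node-based reference implementation, such a setup might look something like this (a hypothetical .travis.yml; the actual streams configuration may differ):

```yaml
# Hypothetical .travis.yml: run the reference-implementation unit
# tests on every commit. Assumes the tests are wired up to "npm test".
language: node_js
node_js:
  - "0.10"
script:
  - npm test
```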


That's first on my todo list post merge:

http://intertwingly.net/projects/pegurl/url.html#postmerge

Basically, I'd rather do that on the whatwg branch than on the rubys 
branch, but my stuff isn't quite ready to merge.



As a final note, the reference implementation has a list of known differences 
from the published standard:

intertwingly.net/projects/pegurl/url.html


Hmm, so this isn't really a reference implementation of the published standard 
then? Indeed looking at the code it seems to not follow the algorithms in the 
spec at all :(. That's a bit unfortunate if the goal is to test that the spec 
is accurate.

I guess 
https://github.com/rubys/url/tree/peg.js/reference-implementation#historical-notes
 explains that. Hmm. In that case, I'm unclear in what sense this is a 
reference implementation, instead of an alternate algorithm.


I answered that separately: 
http://lists.w3.org/Archives/Public/public-whatwg-archive/2014Nov/0129.html


- Sam Ruby


Re: [whatwg] URL interop status and reference implementation demos

2014-11-19 Thread Domenic Denicola
From: Sam Ruby [mailto:ru...@intertwingly.net] 

 Done, sort-of: https://url.spec.whatwg.org/interop/browser-results/

Excellent, this is a great subset to have.

I am curious what it means when testdata appears in the 'user agents with 
differences' column. Isn't testdata the baseline against which the user agents 
are compared?

 Done.  https://github.com/w3c/web-platform-tests/pull/1402

Interesting, I did not realize that testdata was part of web-platform-tests 
instead of the URL repo alongside all your other interop material. I wonder if 
we should investigate ways to centralize inside the URL repo, e.g. having 
whatwg/url be a submodule of w3c/web-platform-tests?



Re: [whatwg] URL interop status and reference implementation demos

2014-11-19 Thread Sam Ruby

On 11/19/2014 09:32 AM, Domenic Denicola wrote:

From: Sam Ruby [mailto:ru...@intertwingly.net]


Done, sort-of: https://url.spec.whatwg.org/interop/browser-results/


Excellent, this is a great subset to have.

I am curious what it means when testdata appears in the 'user agents with 
differences' column. Isn't testdata the baseline against which the user agents are compared?


These results compare user agents against each other.  The testdata is 
provided for reference.


I am not of the opinion that the testdata should be treated as anything 
other than a proposal at this point.  Or to put it another way, if 
browser behavior is converging to something other than what the spec 
says, then perhaps it is the spec that should change.



Done.  https://github.com/w3c/web-platform-tests/pull/1402


Interesting, I did not realize that testdata was part of web-platform-tests 
instead of the URL repo alongside all your other interop material. I wonder if 
we should investigate ways to centralize inside the URL repo, e.g. having 
whatwg/url be a submodule of w3c/web-platform-tests?


web-platform-tests is huge.  I only need a small piece.  So for now, I'm 
making do with a wget in my Makefile, and two patch files which cover 
material that hasn't yet made it upstream.
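That arrangement might be sketched roughly as follows (a hypothetical Makefile fragment; the URL and file names are assumptions, not copied from Sam's actual Makefile):

```make
# Hypothetical sketch: fetch the upstream test data, then append local
# material that hasn't yet made it upstream. Paths are assumptions.
test/urltestdata.txt:
	wget -O $@ https://raw.githubusercontent.com/w3c/web-platform-tests/master/url/urltestdata.txt
	cat test/moretestdata.txt >> $@
```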


- Sam Ruby



Re: [whatwg] URL interop status and reference implementation demos

2014-11-19 Thread Domenic Denicola
From: Sam Ruby [mailto:ru...@intertwingly.net] 

 These results compare user agents against each other.  The testdata is 
 provided for reference.

Then why is testdata listed as a user agent?

 I am not of the opinion that the testdata should be treated as anything other 
 than a proposal at this point.  Or to put it another way, if browser 
 behavior is converging to something other than what the spec says, 
 then perhaps it is the spec that should change.

Sure. But I was hoping to see a list of user agents that differed from the test 
data, so we could target the problematic cases. As is, I'm not sure how to 
interpret a row that reads 'user agents with differences: testdata chrome 
firefox ie' versus one that reads 'user agents with differences: ie safari'.

 web-platform-tests is huge.  I only need a small piece.  So for now, I'm 
 making do with a wget in my Makefile, and two patch files which cover 
 material that hasn't yet made it upstream.

Right, I was suggesting the other way around: hosting the 
evolving-along-with-the-standard testdata.txt inside whatwg/url, and letting 
web-platform-tests pull that in (with e.g. a submodule).


Re: [whatwg] URL interop status and reference implementation demos

2014-11-19 Thread James Graham
On 19/11/14 14:55, Domenic Denicola wrote:

 web-platform-tests is huge.  I only need a small piece.  So for
 now, I'm making do with a wget in my Makefile, and two patch
 files which cover material that hasn't yet made it upstream.
 
 Right, I was suggesting the other way around: hosting the
 evolving-along-with-the-standard testdata.txt inside whatwg/url, and
 letting web-platform-tests pull that in (with e.g. a submodule).
 

That sounds like unnecessary complexity to me. It means that random
third-party contributors need to know which repository to submit changes
to if they edit the url testdata file. It also means that we have to
recreate all the infrastructure we've created around web-platform-tests
for the URL repo.

Centralization of the test repository has been a big component of making
contributing to testing easier, and I would be very reluctant to
special-case URL here.


Re: [whatwg] URL interop status and reference implementation demos

2014-11-19 Thread Domenic Denicola
From: whatwg [mailto:whatwg-boun...@lists.whatwg.org] On Behalf Of James Graham

 That sounds like unnecessary complexity to me. It means that random 
 third-party contributors need to know which repository to submit changes to 
 if they edit the url testdata file. It also means that we have to recreate 
 all the infrastructure we've created around web-platform-tests for the URL repo.

 Centralization of the test repository has been a big component of making 
 contributing to testing easier, and I would be very reluctant to special-case 
 URL here.

Hmm. I see your point, but it conflicts with what I consider a best practice: 
having the test code and spec code (and reference implementation code) in the 
same repo so that they co-evolve at exactly the same pace. Otherwise you have to 
land multi-sided patches to keep them in sync, which inevitably results in the 
tests falling behind. And worse, it discourages the practice of not making any 
spec changes without accompanying test changes.

That's why for streams the tests live in the repo, and are run against the 
reference implementation every commit, and every change to the spec is 
accompanied by changes to the reference implementation and the tests. I 
couldn't imagine being able to maintain that workflow if the tests lived in 
another repo.



Re: [whatwg] URL interop status and reference implementation demos

2014-11-19 Thread Sam Ruby

On 11/19/2014 09:55 AM, Domenic Denicola wrote:

From: Sam Ruby [mailto:ru...@intertwingly.net]


These results compare user agents against each other.  The testdata
is provided for reference.


Then why is testdata listed as a user agent?


It clearly is mislabeled.  Pull requests welcome.  :-)


I am not of the opinion that the testdata should be treated as
anything other than a proposal at this point.  Or to put it
another way, if browser behavior is converging to something other
than what the spec says, then perhaps it is the spec that
should change.


Sure. But I was hoping to see a list of user agents that differed
from the test data, so we could target the problematic cases. As is,
I'm not sure how to interpret a row that reads 'user agents with
differences: testdata chrome firefox ie' versus one that reads 'user
agents with differences: ie safari'.


I guess I didn't make the point clearly before.  This is not a waterfall 
process where somebody writes down a spec and expects implementations to 
eventually catch up.  That line of thinking sometimes leads to browsers 
closing issues as WONTFIX.  For example:


https://code.google.com/p/chromium/issues/detail?id=257354

Instead I hope that the spec is open to change (and, actually, the list 
of open bug reports is clear evidence that this is the case), which 
implies that differing from the spec isn't the same thing as being a 
problematic case.  More precisely: it may be the spec that needs to 
change.



web-platform-tests is huge.  I only need a small piece.  So for
now, I'm making do with a wget in my Makefile, and two patch
files which cover material that hasn't yet made it upstream.


Right, I was suggesting the other way around: hosting the
evolving-along-with-the-standard testdata.txt inside whatwg/url, and
letting web-platform-tests pull that in (with e.g. a submodule).


Works for me :-)

That being said, there seems to be a highly evolved review process for 
test data, and on the face of it, that seems to be something worth 
keeping.  Unless there is evidence that it is broken, I'd be inclined to 
keep it as it is.


In fact, once I have refactored the test data from the javascript code 
in my setter tests, I'll likely suggest that it be added to 
web-platform-tests.


- Sam Ruby


Re: [whatwg] URL interop status and reference implementation demos

2014-11-19 Thread James Graham
On 19/11/14 16:02, Domenic Denicola wrote:
 From: whatwg [mailto:whatwg-boun...@lists.whatwg.org] On Behalf Of
 James Graham
 
 That sounds like unnecessary complexity to me. It means that random
 third-party contributors need to know which repository to submit
 changes to if they edit the url testdata file. It also means that
 we have to recreate all the infrastructure we've created around
 web-platform-tests for the URL repo.
 
 Centralization of the test repository has been a big component of
 making contributing to testing easier, and I would be very
 reluctant to special-case URL here.
 
 Hmm. I see your point, but it conflicts with what I consider a best
 practice: having the test code and spec code (and reference
 implementation code) in the same repo so that they co-evolve at
 exactly the same pace. Otherwise you have to land multi-sided patches
 to keep them in sync, which inevitably results in the tests falling
 behind. And worse, it discourages the practice of not making any spec
 changes without accompanying test changes.

In practice very few spec authors actually do that, for various reasons
(limited bandwidth, limited expertise, limited interest in testing,
etc.). Even when they do, designing the system around the needs of spec
authors doesn't work well for the whole lifecycle of the technology;
once the spec is being implemented and shipped it is likely that those
authors will have moved on to spend most of their time on other things,
so won't want to be the ones writing new tests for last year's spec.
However implementation and usage experience will reveal bugs and suggest
areas that require additional testing. These tests will be written
either by people at browser vendors or by random web authors who
experience interop difficulties.

It is one of my goals to make sure that browser vendors — in particular
Mozilla — not only run web-platform-tests but also write tests that end
up upstream. Therefore I am very wary of adding additional complexity to
the contribution process. Making each spec directory a submodule would
certainly do that. Making some spec directories, but not others, into
submodules would be even worse.

 That's why for streams the tests live in the repo, and are run
 against the reference implementation every commit, and every change
 to the spec is accompanied by changes to the reference implementation
 and the tests. I couldn't imagine being able to maintain that
 workflow if the tests lived in another repo.
 

Well, you could do it, of course, for example by using wpt as a submodule
of that repository, or by periodically syncing the test files to wpt.

As it is, those tests appear to be written in a way that makes them
incompatible with web-platform-tests and useless for testing browsers.
If that's true, it doesn't really support the idea that we should
structure our repositories to prioritise the contributions of spec
authors over those of other parties.