[whatwg] The truth about Nokia's claims

2007-12-11 Thread Shannon

This is an excerpt from an MPEG-LA press release:

"Owners of patents or patent applications determined by MPEG LA’s patent 
experts to be essential to the H.264/AVC standard (“standard”) include 
Columbia University, Electronics and Telecommunications Research 
Institute of Korea (ETRI), France Télécom, Fujitsu, IBM, Matsushita, 
Mitsubishi, **Microsoft**, Motorola, **Nokia**, Philips, Polycom, Robert 
Bosch GmbH, Samsung, Sharp, Sony, Thomson, Toshiba, and Victor Company 
of Japan (JVC)."


So let's review the three companies loudly objecting to OGG, 
misrepresenting its status and continuing to fuel this debate:


Apple: Has heavy investment in H.264, AAC and DRM via iTunes. Known for 
proprietary hardware lock-in.
Microsoft: Heavy investment in WMV and DRM. 'Essential patent holder' in 
H.264. Major shareholder in Apple. Known for proprietary browser and OS 
lock-in and standards disruption.
Nokia: 'Essential patent holder' and heavy investor in H.264. Argued for 
software patents in the EU.


Stop believing their lies! Don't you think it's weird that Nokia is 
complaining about patents while simultaneously holding numerous 
video-related ones? OGG/Vorbis/Theora are open and as safe as codecs can 
get. Their patent risks are practically non-existent. They carry no 
licensing fees. They are easy to implement across all major (and most 
minor) platforms. They are the format of choice - unless you're Nokia, 
Apple or Microsoft.


Finally, nobody has mentioned that the licensing terms on H.264/AVC 
state that about 8 years from now ALL internet H.264 content and 
software becomes subject to licensing. Sites will have to pay to use it. 
It is NOT FREE, just 'on hold' until adoption becomes widespread and 
enforcement more practical. When that happens guess who makes billions? 
Nokia and Microsoft.


These companies have no right to be disrupting this list and modifying 
the standard to their whims. Their business interests have no relevance 
here. This is a PUBLIC standard, not a proprietary one.


Put the OGG reference back in the HTML5 draft, exactly as it was, as it 
was originally agreed, as many have requested - AS IS APPROPRIATE!


Shannon
[EMAIL PROTECTED]


Re: [whatwg] persistent storage changes

2007-12-11 Thread Shannon
For what it's worth, the changes to persistent storage have my vote. As a 
web author and user it strikes the right balance between functionality 
and privacy. Just one thing though: since this storage could also be 
used for 'offline applications', should some mention be made regarding 
access from a page hosted at 127.0.0.1 or via the file: protocol? Also I 
was curious about what Firefox was putting in mine but it looks like I 
need a 3rd-party sqlite app to do it. Should the spec recommend 
user-agents provide a direct method of access (even though some data may 
be base64-encoded or obfuscated)?
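
For illustration, here is roughly how I'd expect a page to use the 
simplified storage - a sketch only, using the globalStorage property 
names Firefox currently ships, which may not match the final draft:

var storage = globalStorage[location.hostname]; // same-origin store
// Values are now plain strings, so convert explicitly.
var visits = Number(storage.getItem("visits") || "0") + 1;
storage.setItem("visits", String(visits));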


Shannon
[EMAIL PROTECTED]


Ian Hickson wrote:
I just checked in a change to make globalStorage far simpler -- I dropped 
all the domain manipulation stuff, and made it same-origin instead. I also 
dropped StorageItem and just made the Storage stuff return strings.


Re: [whatwg] several messages regarding Ogg in HTML5

2007-12-12 Thread Shannon
Ian, are you saying that not implementing a SHOULD statement in the spec 
would make a browser non-compliant with HTML5?
Are you saying that if a vendor does not implement the OPTIONAL Ogg 
support then they would not use HTML5 at all?


I'm not being sarcastic here. I'd actually like you to answer these 
points to understand your position on the SHOULD statement.


I commend you for trying to support all views but you yourself have 
indicated that the Ogg vs. H.264 parties cannot agree - and in the 
absence of an improbable event (the spontaneous appearance of an 
unencumbered, court-tested, high-performance, non-proprietary video 
codec) never will. Even if Ogg were court-tested, Nokia and Microsoft 
would never change their position while remaining in the MPEG-LA 
consortium. The only other option then is inaction (your apparent 
solution) - which we ALL agree will hand the win to Macromedia (97% 
Flash market share).


One of these parties must get their way, and currently the majority of 
voiced opinion here is that we SHOULD recommend Ogg (as in SHOULD, not 
MUST).


As others have said, if Apple and Nokia (the minority of respondents) do 
not want to implement Ogg then there appears to me to be no requirement 
for them to do so while retaining compliance. There is nothing I can see 
that prevents <video> being used with other formats. Surely this will 
not destroy the <video> element; it will simply require Safari, IE and 
Nokia users to download a plugin for some sites (which open-source 
groups will be happy to provide) or use an Ogg-compatible browser or 
3rd-party app.


There is no logical reason we should not *recommend* Ogg while no better 
options remain. It isn't perfect but it is the best nonetheless. If 
nothing else it will give the public (this is still a public spec, isn't 
it?) a baseline format for the publishing of non-profit materials that 
can be decoded by all Internet users (yes, even those on Mac) without 
restriction. Submarine patents are irrelevant here as we all agree 
there is no viable solution for that and there isn't likely to be within 
the useful lifetime of this specification.


As it stands, right now, h.264 is patent-locked, VC1 is patent-locked, 
Flash is patent-locked, h.261 is too slow, Dirac isn't ready. Ogg is 
reasonably fast, well tested, well supported and NOT patent-locked until 
somebody proves otherwise. It is not unreasonable to tell browsers they 
SHOULD support it, even though we know some won't.


Apple: How can we make you happy without committing to future h.264 
royalties? More specifically, what other royalty-free, non-patented, 
DRM-supporting codec would you prefer?
Microsoft/Nokia: Are you even going to support HTML5, when you seem so 
keen on making your own standards? When have you EVER been fully 
compliant with a public spec?
Ian: Why do you think we are angry with you? What will it take to get 
this (apparently unilateral) change revoked? Finally, what is 
Google/YouTube's official position on this?


I know that's a lot of questions but I feel they SHOULD be answered 
rather than simply attacking the Ogg format.


Shannon




Re: [whatwg] The truth about Nokia's claims

2007-12-13 Thread Shannon
Arguing the definition of "proprietary" and "standards" is irrelevant. 
Neither has any bearing on the problem, which is that in 2010 the 
MPEG-LA (of which Nokia is a member) will impose fees on all use of 
h.264 on the Internet equivalent to those of 'free television'. As near 
as I can tell that will mean all websites serving h.264 content will be 
liable for fees of between $2,500 and $10,000 USD per annum. This makes 
it inappropriate for any public standard and makes other technical and 
legal comparisons between Ogg and h.264 irrelevant. x264 is a nice 
program but it is doubtful that it is exempt from these fees, or that 
the content it produces and the websites that host it are.


The ONLY issue here is about the inclusion of Ogg as a SUGGESTION (not a 
requirement) and the ONLY argument against the format is that it *might* 
be subject to submarine patents - however, since this applies to EVERY 
video codec and even HTML5 itself, it is also irrelevant.


Shannon




Re: [whatwg] The truth about Nokia's claims

2007-12-13 Thread Shannon

Ian Hickson wrote:
As far as I can tell, there are no satisfactory codecs today. If we are to 
make progress, we need to change the landscape. There are various ways to 
do this, for example:


 * Make significant quantities of compelling content available using one 
   of the royalty-free codecs, so that the large companies have a reason 
   to take on the risk of supporting it.
  

* Put the chicken before the egg.
 * Convince one of the largest companies to distribute a royalty-free 
   codec, taking on the unknown liability, and make this widely known, to 
   attract patent trolls.
  

* Wait till cows fly.
 * Negotiate with the patent holders of a non-royalty-free codec to find a 
   way that their codec can be used royalty-free.
  

* Wait till the sky turns green.
 * Change the patent system in the various countries that are affected by 
   the patent trolling issue. (It's not just the US.)
  

* Wait till hell freezes over.

Your suggestions are impractical and you are smart enough to know that. 
You claim neutrality but YOU removed the Ogg recommendation and you 
haven't answered the IMPORTANT questions. I'll re-state:


1.) Does not implementing a SHOULD recommendation make a browser 
non-compliant (as far as validation goes)?
2.) What companies (if any) would abandon HTML5 based on a SHOULD 
recommendation?
3.) What is Google/YouTube's official position (as the largest internet 
video provider)? I assume they are reading this list and I'm guessing 
you still work for them.
4.) What prevents a third-party open-source plugin from providing Ogg 
support on Safari and Nokia browsers?
5.) Why are we waiting for ALL parties to agree when we all know they 
won't? Why can't the majority have their way in the absence of 100% 
agreement?
6.) How much compelling content is required before the draft is 
reverted? Does Wikipedia count as compelling?



Answering these questions is the way forward, not back-and-forthing over 
legal issues.



Ian Hickson wrote:

On Fri, 14 Dec 2007, Shannon wrote:
  
Arguing the definition of "proprietary" and "standards" is irrelevant. 
Neither has any bearing on the problem, which is that in 2010 the 
MPEG-LA (of which Nokia is a member) will impose fees on all use of 
h.264 on the Internet equivalent to those of 'free television'. As near 
as I can tell that will mean all websites serving h.264 content will be 
liable for fees of between $2,500 and $10,000 USD per annum. This makes 
it inappropriate for any public standard and makes other technical and 
legal comparisons between Ogg and h.264 irrelevant. x264 is a nice 
program but it is doubtful that it is exempt from these fees, or that 
the content it produces and the websites that host it are.



Again, as far as I can tell nobody is actually suggesting requiring H.264. 
I don't think it is productive to really discuss whether H.264 would be a 
possible codec at this time, since it clearly isn't.


  
Nokia certainly seem to be suggesting this, and they helped start this 
debate.
  
The ONLY issue here is about the inclusion of Ogg as a SUGGESTION (not a 
requirement) and the ONLY argument against the format is that it *might* 
be subject to submarine patents - however, since this applies to EVERY 
video codec and even HTML5 itself, it is also irrelevant.



No, the issue is about finding a codec that everyone will implement. To 
that end, Theora is not an option, since we have clear statements from 
multiple vendors that they will not implement Theora.


Again, as I noted in this e-mail:

   http://lists.whatwg.org/pipermail/whatwg-whatwg.org/2007-December/013411.html

I would please like to ask everyone participating in this discussion to 
focus on the future and on how we can move forward.
  
I am focusing on the future. I do not want Flash to become the de facto 
video standard. Inaction is still an action in this case.





Re: [whatwg] The truth about Nokia's claims

2007-12-13 Thread Shannon


Ian, as editor, was asked to do this.  It was a reasonable request to 
reflect work in progress.  He did not take unilateral action. 
Ok, not unilateral. How about 'behind closed doors'? Why was there no 
open discussion BEFORE the change?


4.) What prevents a third-party open-source plugin from providing Ogg 
support on Safari and Nokia browsers?


Nothing, but if the spec. required the support, the browser makers 
cannot claim conformance.
The spec did not REQUIRE the support, it recommended it. You've now 
confirmed that doesn't cause non-compliance.
5.) Why are we waiting for ALL parties to agree when we all know they 
won't? Why can't the majority have their way in the absence of 100% 
agreement?


Because we have the time to try to find a solution everyone can get 
behind.  It's not as if we are holding final approval of HTML5 on this 
issue.  There is plenty of technical work to do (even on the video and 
audio tags) while we try to find the best solution.
Actually it appears that somebody was holding HTML5 to ransom on this 
issue. Now you've caved in. Now the rest of us are holding it. Touché? 
Anyway, this is a great idea except that 100% agreement is practically 
impossible. Are you claiming otherwise?
We don't need a vote. 
That just about says it all, doesn't it? Is this a public standard or 
not? What is this list?
6.) How much compelling content is required before the draft is 
reverted? Does Wikipedia count as compelling?


When will I stop beating my wife?  Your question has a false 
assumption in it, that we are waiting for compelling content in order 
to revert the draft. We're not.  We're working on understanding.
Then what ARE you waiting on (what PRACTICAL thing, I mean)? 
Understanding what? My understanding is perfect. The MPEG-LA is upset 
with the Ogg proposal.
Also, when will you stop beating your wife? (since you brought it up). 
Ian has claimed compelling content could end this impasse. I do not 
believe it (any more than I believe you beat your wife).


As Ian has said, we are going in circles on this list, with much heat 
and very little if any new light. Can we stop? It is getting quite 
tedious to see the same strawmen bashed on again and again.


Of course people would like this to end. Some more than others. It would 
help if those involved in creating the controversial changes would come 
up with a practical solution but they can't. It would help even more if 
you agreed to restore the text.


And who is blocking the light? Who created the strawmen? Was it Nokia 
claiming that Ogg was 'proprietary' (conveniently ignoring the fact that 
it is public domain)? You claim 100% agreement is necessary to revoke 
the change. If so, why wasn't it necessary BEFORE the change?


The way I see it we are expected to wait for an impossible event. No new 
protocol can possibly pass the expired patents test for at least 10 
years. Are you planning to wait that long to ratify HTML5?


I am NOT the villain here. MY interest in this matter is altruistic (as 
a web developer and user). I do not work for a company with existing 
commitments to a particular format. If I keep the debate going then it's 
because the answers given are unsatisfactory. I apologise to the rest of 
the list but this is a much more serious issue to me than the format of 
the <video> tag.


Shannon



Re: [whatwg] The truth about Nokia's claims

2007-12-14 Thread Shannon

Stijn Peeters wrote:

It does not hold any consequences for the final spec.
  
Of course it does, or Nokia would not have taken issue with it. When 
this comes up in the future somebody will claim 'we've been over that' 
when the issue could have been resolved now. Putting this on hold 
changes nothing except to stifle debate. What's worse is that all the 
arguments made now will have to be repeated.

I do not understand why someone would be holding HTML5 ransom over this.
  
Because they have patents and existing investment in other formats. Are 
they denying that? No. Are they obfuscating that? Yes.

HTML5 is more than <video>. According to the road map the final HTML5
recommendation is due in late 2010.
This is an argument both for AND against changing the text, and 
therefore not an argument at all. I would say that the fact h.264 fees 
become due in 2010 is a case for discussing this now.

 There is still plenty of time to discuss
the issue and come to a reasonable solution, and while you might find
<video> more important than <audio>, <audio> is also something to be
discussed.
  
I didn't say it wasn't. What I said was that the volume of traffic on 
the <video> element is proportional to its importance and is therefore 
not a reason to shut down the debate.


Shannon


Re: [whatwg] The truth about Nokia's claims

2007-12-14 Thread Shannon
I've been misquoted on this list several times now so I want to make my 
position clear. I believe that the current draft, which was changed 
without open discussion, gives a green light to the status quo. The 
status quo is that Flash, Quicktime and WMV will remain the 'standards' 
for web video. I know this because I implement video on websites as a 
web developer.


What concerns me is that the removed OGG recommendation (specified as 
SHOULD rather than MUST) was a step towards the adoption (however 
reluctant) by corporations and governments of a set of formats that 
require no royalties to encode, decode, reverse-engineer or distribute. 
None of the status quo formats can make that claim.


Several people on this list have claimed that recommending OGG would 
have legal implications for vendors. It does not. Those who feel 
threatened have the option to not implement it - without affecting 
compliance. In nearly all cases the end-user would have been subject to 
the minor inconvenience of finding an alternate source of OGG support. 
What concerns me most is that the people making contrary claims know 
this yet argue anyway. Their motives and affiliations, to me, are 
suspect.


OGG Theora is not the most compressed video format, nor is it the least. 
It is, however, in the public domain and at least equivalent to MP3 and 
XVID, which are both popular streaming formats. While submarine patents 
may one day undermine this, there is no current evidence that OGG 
contains patented technology and there is plenty of legal opinion that 
it does not. Either way it is not possible to remove this risk 
altogether by maintaining the status quo or by recommending (or 
demanding) any other format.


Supporting OGG now in no way prevents a better option (such as Matroska 
and/or Dirac) being added in the future. Nor does it prevent SHOULD 
being changed to MUST.


The loudest objectors to OGG are also, in my opinion, the most 
encumbered by commercial support for an existing licensed format. This 
is not paranoia or fandom, just observation of this list.


There is no evidence that recommending optional OGG support will affect 
companies' adoption (or not) of the rest of the HTML5 standard. We only 
have Ian's word for that and I don't believe it anyway. HTML5 is likely 
to be adopted by all major browsers (in full or in part).


MPEG2/3/4 are NOT unencumbered formats. There are too many patent 
holders for that to ever be the case. None will give up their rights 
while they remain defacto standards.


Some claim that recommending no baseline format is neutral ground. The 
amount of outrage this triggered proves that is false. The claim that we 
have not reached a decision is true (my opponents use this claim to 
support their 'neutrality'). Yet it is clear to me that NOT setting a 
standard is as influential in this case as setting one. Indecision with 
no reasonable grounds for ending it leads to the status quo, as I have 
said. Is it not the purpose of (and within the powers of) a standards 
body to steer the status quo? Is it not in the public interest that this 
happens?


HTML4 advocated GIF, JPG and PNG even if the wording made it seem 
optional. The result was full support for 2 of these formats and partial 
support of the third. There is no reason to believe that putting a 
SHOULD recommendation in the text wouldn't result in most browsers 
supporting OGG (except IE). This in turn would give public, non-profit 
and non-aligned (with MPEG-LA) organizations justification to release 
materials in this format rather than Flash, WMV or MOV (all of which 
require commercial plugins and restrictive licenses).


Some claim pro-OGG supporters started this debate. It was Nokia who made 
this a headline issue.


Objectors claim they are working towards a resolution that defines a 
MUST video format and is accepted by 'all parties'. I don't believe that 
because they know this is impossible and it WILL affect HTML5 adoption. 
There is no format that can satisfy their unreasonable expectations. 
There never will be. We live in a world where companies claim patents on 
'double-clicking' and 'moving pictures on a screen'. How then can any 
format ever meet their demands?


I hope I have made my position clear. I hope my position represents the 
public interest. I am not here just to nag (I have been on this list for 
over two years and have only intervened once before). I am writing in 
the hope that proper discussion takes place and that future decisions of 
this magnitude are not made without public consultation - or in the 
interests of entrenched cabals. I would like to say I believe all those 
opposing OGG have our best interests at heart - but that would be a lie. 
I am too old to believe companies and their spokespeople are altruistic 
(sorry Dave).


Shannon




Re: [whatwg] The truth about Nokia's claims

2007-12-14 Thread Shannon

Stijn Peeters wrote:

As I said, a SHOULD requirement in the specification which will (given the
current status quo) not be followed by the major(ity of) browser vendors is
useless and should be improved so it is a recommendation which at least can
be implemented. Changing the SHOULD to MUST means that a lot of browser
vendors would not be able to develop a conforming implementation.
Governments do generally not build browsers or HTML parsers so an HTML
specification would likely not influence them much, and I believe they are
not who such a specification is aimed at.
  


This is a tired argument already debunked. The browsers that won't 
support OGG support plugins (and still remain HTML5 compliant). The 
recommendation will push other browsers (of which there are many) 
towards a common ground.



As stated before, it did not advocate them, merely stated them as *examples*
of image formats. Your claim that HTML4 played a substantial role in
adoption of GIF and JPEG is interesting. Do you have any sources for that?
  

Yes.
(http://www.houseofmabel.com/programs/html3/docs/img.html). I quote:

As "progress" increases the number of graphics types I've been asked to 
support in /HTML3/, many people are unsure as to exactly what formats 
are supported so perhaps a list is in order:


   * GIF (&695, "GIF")
   * PNG (&B60, "PNG")
   * JPEG (&C85, "JPEG")
   * Sprite (&FF9, "Sprite")
   * BMP (&69C, "BMP")
   * SWF (&188, "Flash")
   * WBMP (&F8F, "WBMP")

---
So which of the above became de facto web standards under HTML4? And 
there were a LOT more image formats out there. Not proof, but certainly 
evidence the spec helped narrow down the list. Even though it was 
neither a SHOULD nor a MUST specification they were mentioned, and it 
seems to me that counts for something. So did the fact that the formats 
in question were believed to be public domain. However, I acknowledge 
the speculative nature of this as I acknowledge the speculative nature 
of your other claims (like browser manufacturers not supporting OGG when 
the spec becomes final).


Shannon



Re: [whatwg] The truth about Nokia's claims

2007-12-14 Thread Shannon
That is not a neutral, wise or logical position. If you want the spec to 
reflect current reality then just rebadge the HTML4 spec. Going forwards 
means making changes, not stating the obvious or maintaining the status 
quo based on Nokia's whims.


Shannon


Re: [whatwg] The truth about Nokia's claims

2007-12-14 Thread Shannon

Stijn Peeters wrote:

As I said, a SHOULD requirement in the specification which will (given the
current status quo) not be followed by the major(ity of) browser vendors is
useless and should be improved so it is a recommendation which at least can
be implemented. Changing the SHOULD to MUST means that a lot of browser
vendors would not be able to develop a conforming implementation.
Governments do generally not build browsers or HTML parsers so an HTML
specification would likely not influence them much, and I believe they are
not who such a specification is aimed at.

A lot has been said about the meaning of 'should'. You are not the first 
to claim 'should' is only meaningful if vendors implement it. If this is 
the case, why not replace ALL references to 'should' with 'must'?


Rhetorical question. The reason for 'should' in a standard (or draft) is 
that it reflects what we (the public, the developers and the majority) 
want but believe some vendors won't or can't implement. It's an opt-out 
clause. According to OpenOffice it appears 329 times in the current 
draft. Hardly a useless word! All that is being discussed here is the 
desire to tell vendors they 'should' implement OGG. Apparently Nokia and 
Apple don't feel that way but are not happy to simply opt out - they 
want EVERYBODY to opt out. If we replaced all shoulds with musts this 
standard would never go anywhere, and if we deleted all shoulds then 
we'd have even more divergence.


What really matters is that where the pros and cons balance, a neutral 
vendor (one that hasn't already committed exclusively to a proprietary 
format) might be persuaded to implement a 'should' recommendation. This 
is exactly what we had before this change, and for a good reason. I have 
yet to hear a neutral vendor oppose the OGG recommendation and I would 
be saddened if they did.


Also format wars are won by content, not encoders. Governments and 
non-profit organizations do produce content. Formats gain some advantage 
through standards support (even 'should' recommendations).


You and Dave have both accused me of 'bashing'. I think a more 
appropriate (and less violent) word would be 'pointing'. I'm pointing 
out how self-serving Apple and Nokia are. PR-wise they are, in effect, 
'bashing' themselves. Not my problem. Good luck to them and their 
entrenched monopolies, right? It's their 'right' as a corporation to 
wreck standards for the benefit of their shareholders? They sound very 
reasonable, until you realise that one way or another the public will be 
paying for it.


Shannon


Re: [whatwg] The truth about Nokia's claims

2007-12-14 Thread Shannon


Please look back on the mailing list archives. There's been plenty of 
discussion about this before, and it's always ended up in the same 
loop: A group of people wanting nothing but Ogg/Theora/Vorbis, and 
another wanting one standard that all major implementers will support.
I did, and which of these approaches was finally accepted by the 
majority (and the editor)?


a.) *Suggest* a format that is reasonably considered the only playable 
streaming format in the public domain?
b.) *Insist* on a format that is reasonably considered the only playable 
streaming format in the public domain, but will cause all non-supporting 
browsers to fail compliance?
c.) *Suggest or Insist* on a format that is obviously NOT in the public 
domain?
d.) *Suggest or Insist* on a mysterious unnamed format that doesn't 
exist, and never will?

e.) *Ignore* it and hope it goes away?

The majority wanted a.) or b.). However b.) through d.) are impractical 
(politically or technically) and e.) didn't work in HTML4, so a.) was 
added to the draft as an acceptable, if not perfect, compromise.


d.) and e.) fuel speculation of a mysterious solution that nobody can 
guarantee or even name! I'd even help develop it if I knew what it was.


The group insisting on b.) through e.) are either stalling or hopeless 
optimists. You can't invent a codec tomorrow and expect it to be safe 
from patents and adopted by all vendors (especially when some of those 
vendors are also video patent holders). You can't extract an agreement 
from existing codec owners to free their rights while they stand a 
chance of cashing in. THESE THINGS ARE IMPOSSIBLE! THEY ARE NOT OPTIONS!


I would LOVE a baseline format specified as a MUST that is both 
'real-time' and 'unencumbered' (option b.) but I KNOW that won't happen 
within the timeframe of this standard. You know that won't happen. We 
all know it won't happen.


A 'should' recommendation for Ogg was chosen because it was the most 
popular, reasonable and realistic option. It was accepted (even if 
temporarily) and the issue was put to sleep. Then Nokia interfered, and 
now we're here. What public discussion took place to revoke this prior 
consensus? Where is that archived? I've been reading this list for 2 
years and the first I heard about the revocation of the original 
preference was AFTER it happened. The w3c discussed this? Fine, I'm 
still waiting for that link and I don't understand why a decision 
apparently made on this list was revoked on (apparently) another. 
(Actually I DO understand, I was simply posing THE question that needs 
to be asked - i.e., who's in charge here?)


The chosen wording was acceptable to most but it supported a format that 
wasn't obviously patented by incumbents so the incumbents reversed that 
decision off-list. Save your ire for those who deserve it, I want an 
open standard just like you. Can you say the same about Nokia, Microsoft 
or (gasp,shock,horror) Apple? Can you promise me that those who removed 
the recommendation are REALLY looking for a solution when they may gain 
from a lack of one?


I don't expect a format that ALL browser vendors will support but I do 
expect that this working group will *recommend* the next best thing: 
something that open-source software and plugins will handle if the 
vendors refuse. Which right now is Ogg.


Shannon


Re: [whatwg] The truth about Nokia's claims

2007-12-14 Thread Shannon


Again, a false presumption.  This was discussed in the context of the 
HTML WG at the W3C.  Those doors are not closed.
  
Really? Does that mean I can claim a seat on the board? Where is this 
discussion about a public standard made public, if not here? Please 
provide a link to these open discussions and I'll concede your point 
(and join - it is public, and free, right?)

Ok, so I found the other list ([EMAIL PROTECTED]). Nokia state their 
reasons and clearly it was discussed (at Cambridge, apparently) - but 
why two lists for one standard?



Shannon


[whatwg] The political and legal status of WHATWG

2007-12-14 Thread Shannon
Ian, thank you for your answers re: video codecs. I agree with you now 
that everything that needs to be said has been said regarding the change 
itself, and I think most parties have made it clear how they feel and 
what they hope will resolve it.


It's clear the opinions of all parties cannot be reconciled. The current 
text has not reconciled the views, nor did the previous, nor can a 
future one. It doesn't take a genius to figure out that this will not 
end well. I am quite certain the issue at stake here cannot be solved at 
the technical or legal level at all. This is truly a political/financial 
matter. Which brings us to the hard questions at the crux of the matter:


1.) When a browser vendor acts clearly outside of the public interest in 
the making of a public standard, should that vendor's desires still be 
considered relevant to the specification?
2.) When an issue is divided between a vendor (or group of) and 'the 
public' (or part of), what relative weight can be assigned to each?
3.) When a vendor makes false or misleading statements to guide an 
outcome should there be a form of 'censure' that does not involve a 
public flame war?
4.) If the purpose of the group is to build interoperability should a 
vendor be 'censured' for holding interoperability to ransom without 
sufficient technical or legal grounds?
5.) What methods exist to identify a disruptive member and remove them 
from further consideration?
6.) Should a standards body make a ruling even though some members claim 
they won't obey it?

7.) Should a standards body bow to entrenched interests to keep the peace?
8.) Does the WHATWG consider itself to be a formal standards body?
9.) Should HTML5 be put back under direct control of the w3c now that 
they have expressed interest in developing it?
10.) Is it appropriate for members to have discussions outside of this 
list, via IM, IRC or physical meetings not available or practical to the 
public?
11.) Does the group consider HTML5 to be a 'public standard' or a 
'gentlemen's agreement' between vendors?
12.) Is it even legal for the majority of commercial browsers to form 
any agreement that could (directly or indirectly) result in higher costs 
for end-users? How do you prevent a 'working group' from becoming a cartel?



These are not questions that anybody can easily answer. Some have 
probably been answered in this list but not, at least to my reading of 
it, in the charter, the wiki or the FAQ (none appear legally binding in 
any case). It is possible the lack of clear answers in an obvious place 
may threaten the impartiality and purpose of this group, damage your 
public image and inflame debate. I believe the reason for much of the 
'heat' over the video codec is due to all parties (including 
non-members) coming up with their own answers in the absence of a formal 
position. That, and a lot of mistrust regarding members' corporate 
priorities.


I've read the charter but it doesn't define many rules. The w3c has 
rules, but my understanding is that WHATWG is not a formal part of the 
w3c (even if some members are).


Public acceptance of the standard may not, in practical terms, be as 
important as vendor acceptance (to vendors at least) but since the 
public is, in many ways, doing much of the vendors' work for them it 
would be beneficial to have a clearer statement of how these 
contributions are weighed. To cite a theoretical example: if some 
altruistic billionaire was to write the 'missing codec' that exceeded 
h.264 in compression, used 64KB of RAM, ran on a 386 using 50 lines of 
code, and he or she paid off all the trolls and indemnified vendors - 
what actions, if any, would WHATWG members take to ensure it was 
accepted by members with vested interests?


If that last theoretical question cannot be answered then what hope have 
we for a baseline video format?


If answers to the above questions exist then please don't just answer 
them here. Anything short of a formal document from the WHATWG can't 
possibly represent the group as a whole and is just going to be raised 
again anyway. In other words the mailing list is not the best place to 
archive these answers (if any are forthcoming).


Shannon


[whatwg] public-html list

2007-12-14 Thread Shannon

Ian Hickson wrote:

On Sat, 15 Dec 2007, Shannon wrote:
  
Ok so I found the other list ([EMAIL PROTECTED]). Nokia state their 
reasons and clearly it was discussed (at Cambridge apparently)  but why 
two lists for one standard?



Historical reasons -- the W3C initially wasn't interested in doing HTML5, 
so we started in the WHATWG (back in 2004), but earlier this year the W3C 
decided to also work on HTML5 and adopted the WHATWG spec, and now the two 
groups work together on the same spec.


  
I imagine this must generate a lot of cross-posting and confusion. Are 
there plans to merge the groups? Which group has authority in the event 
of a dispute, or is that unknown?

My spider-senses tell me we're going to end up with two HTML5s.

Shannon


Re: [whatwg] The political and legal status of WHATWG

2007-12-14 Thread Shannon



It's clear the opinions of all parties cannot be reconciled.


Of course, but they don't have to be because the requirements for the
solution are clear, and I believe Ian and others have stated them several
times now.


Yes, requirements that CANNOT be met. Ever. Period.

The current placeholder text proposes two main conditions that are 
expected to be met before vendors will 'move forward' and 'progress' 
will happen. It isn't a rule but there is certainly an implication that 
leaves a lot to be desired:


a.) We need a codec that is known to not require per-unit or 
per-distributor licensing, that is compatible with the open source 
development model, that is of sufficient quality as to be usable.

b.) That is not an additional submarine patent risk for large companies.

The first statement is reasonable; however, I personally know of only 
one video codec (Theora) and two audio codecs (Vorbis and FLAC) that 
meet it.
The second statement, combined with the first, is a logical trap (a 
paradox). All vendors who do not *currently* support the chosen format 
will incur 'an additional submarine patent risk'.

Can you see the trap? The ONLY way to meet the second requirement is to 
*currently* meet the first. If all the WHATWG members already did that 
then this wouldn't be an issue. Those claiming to want a better codec 
cannot possibly implement it and meet the second requirement. If it 
doesn't exist then how can it NOT be an additional patent risk?


You can't state an IMPOSSIBLE condition as a method for 'moving forward' 
and then expect people to take your claims seriously.


Shannon


Re: [whatwg] The truth about Nokia's claims

2007-12-15 Thread Shannon

They are not easy ways forward, I agree.
How would _you_ recommend addressing Apple's requirements while still 
addressing the requirements of the rest of the community?


  
I would recommend that Apple and Nokia follow the example set by 
Goomplayer (and others) by allowing users to download codecs on demand 
from third-party providers (like Sourceforge). This puts the risk 
squarely in the user's court and, better yet, allows Safari/Quicktime to 
adapt to new codecs in the future. It may be my foggy memory but last I 
checked Quicktime could already do this. If such a time comes that the 
patent risk is resolved they could bundle it then. However, most media 
players are bloated enough without bundling every codec, so it's really 
a win for everybody.


If this still wasn't enough then they could join a patent pact with 
other large vendors to provide a mutual defense / shared liability fund. 
If Ogg was under threat they'd probably get the FFII to help them fight 
it pro bono.



> THESE THINGS ARE IMPOSSIBLE! THEY ARE NOT OPTIONS!



As it says in my .signature -- things that are impossible just take 
longer.
  
Yes, that's very cute, but it's poor policy. That kind of thinking leads 
kids to buy "Sea Monkeys" and jump off bridges wearing capes. When they 
grow up they lose their savings playing the lottery. It is not 
impossible to hope that the majority of vendors will grudgingly accept 
Ogg (in some form or another). It is impossible to expect anything to 
happen while some of the complainants have clear conflicts of interest, 
the sticking point is 'unknown patents' and the goal is 'everybody 
happy'. I really hope Apple will accept that 'submarine patents' are a 
risk of doing business, just as I still go to work each day even though 
I could get hit by a bus.


Shannon



Re: [whatwg] The political and legal status of WHATWG

2007-12-19 Thread Shannon

Jim Jewett said:

Perhaps more importantly, page authors should be able to rely on the spec.


As a web author I have *never* relied on the HTML, ECMAScript or CSS 
specs. What I do is look on 'A List Apart', 'htmlhelp.com' and tutorials 
spread around the web to see the current state of browser support. This 
is my reality as a designer and I do not expect HTML5, in any form, will 
change that.


The standard should be aimed at forming a general consensus on what 
browsers *should* do in the hope that those that can comply - will. For 
instance there are many open-source browsers and impartial content 
producers that may rally around a recommendation, and this is all we can 
ever expect. It wouldn't be a good thing if those impartial groups had 
no recommendations to follow and therefore implemented different 
approaches. KDE, Xfce, Enlightenment and Gnome used to use different 
desktop config files. Once a standard was agreed on they became 
interoperable, regardless of the fact that neither Microsoft nor Apple 
ever implemented it. We can't use not following a standard as an 
argument not to make one, especially when that standard is optional 
anyway. Can anyone guarantee that Microsoft, Apple or Nokia will fully 
comply with HTML5 if we don't recommend a video codec, as some have 
requested?


Big vendors may have monopoly control now but who can say what the 
future holds? I've seen IE's share on some of my sites (aimed at wealthy 
businessmen, not geeks) drop from 98% to 75% over 2 years, and it's 
still going down. The standard is aimed at current vendors but they are 
not the only stakeholders here. If they are, then why does this list 
exist?



Tying this back to video codecs, it would be great if we could tell
authors "provide at least format X, if the browser can't support that,
it probably doesn't do video at all."  It would be bad if we told them
that and it wasn't true -- regardless of why it wasn't true.
This is why the video fallback argument is stalled. I have never asked 
for a *must* statement and a part of the spec that was *not* removed 
states:



--

3.14.8.1. Audio codecs for audio elements
<http://www.whatwg.org/specs/web-apps/current-work/#audio1>

User agents may support any audio codecs and container formats.

User agents must support the WAVE container format with audio encoded 
using the PCM format.

--

I notice that nobody complaining about MUST in video has objected to 
this, or to this:


-
NOTE: Certain user agents might support no codecs at all, e.g. text 
browsers running over SSH connections.

-

Vendors can't say they are working towards a MUST codec while 
simultaneously acknowledging that some browsers won't support ANY 
codecs. Nor can you categorically state that all browsers will support 
PCM (like browsers for the deaf or Lynx).


Since there are some serious inconsistencies in the arguments being 
presented it is hard not to assume this is all just a stalling tactic in 
support of commercial ends (de facto adoption of h.264). I recommend the 
WHATWG works out a way to prevent commercial self-interest from steering 
us away from a royalty-free public standard. If my wealthier clients 
want to support other codecs that's fine, but a recommendation in the 
spec gives me a better position to recommend they encode an Ogg fallback 
as well.


The argument that we are stuck on is: should we make *recommendations* 
in a standard that won't be followed by all vendors? I believe we should 
and apparently there are precedents for doing so.


Having said all that, I don't want this thread to continue the video 
codec discussion. What I want is a clearer position statement from 
WHATWG on the public's role in defining this specification.


Shannon


Re: [whatwg] The political and legal status of WHATWG

2007-12-19 Thread Shannon
What I want is a clearer position statement from WHATWG on the 
public's role in defining this specification. 


Actually I should clarify that statement. The role of the public is 
defined but what is missing is a statement of rights and how those 
rights can be protected from belligerent members. A legal contract 
signed by members and defining rules and penalties for non-compliance 
would be a step in that direction. I don't think the public are prepared 
to accept promises anymore.


Shannon


[whatwg] Proposal for a link attribute to replace <a href>

2008-02-27 Thread Shannon
With the capabilities of modern browsers it seems to me that a specific 
tag for hyperlinks is no longer required or useful and could be 
deprecated in favour of a more versatile global "link" attribute. I 
believe that hyperlinks now have more in common with attributes such as 
ONCLICK than they do with tags, since in web applications links often 
define actions rather than simply being a part of the document 
structure. The <a> tag would continue its role as an anchor but the HREF 
attribute would be phased out, making <a> a more consistent element 
(since links and anchors are really quite separate concepts). Below is 
an example of the proposed link attribute in action:


<a href="foo.html">Foo</a>

could be written as:

<span link="foo.html">Foo</span>

No useful semantic information is lost; however, the markup is cleaner 
and the DOM drops an unnecessary node (which could speed up certain 
applications).


---LINK with block-level or interactive content---
This proposal would circumvent <a>'s main limitation, which is its 
requirement to not wrap block-level elements or 'interactive' content. 
The HTML5 draft requires that it wrap 'phrasing content' (essentially 
paragraphs) and not wrap 'interactive' content (such as other 
hyperlinks); however, I see no reason why a link attribute should 
require these limits. Links would simply cascade, as in the following 
example:


<table link="foo.html">
  <tr>
    <td>A</td>
    <td>B</td>
    <td link="bar.html">C</td>
    <td>D</td>
  </tr>
</table>


In the example above clicking anywhere on the table except 'C' brings up 
a generic page, whereas 'C' has dedicated content. The following nested 
links would also be valid:


<p link="foo.html">click anywhere on this line except <span 
link="bar.html" title="Go to bar instead">here</span> to visit foo.</p>


---LINK and TITLE attribute---
The link attribute could coexist with the TITLE attribute for 
describing links on non-textual objects. This is consistent with TITLE 
on <a>:

<img src="monkey.png" link="monkies.html" title="See more monkies">


---LINK and ONCLICK---
This attribute can easily coexist with ONCLICK. The behaviour would be 
identical to ONCLICK on <a>:
With scripts enabled: The onclick handler runs first and the link is 
followed if onclick returns true.
With scripts disabled: The link is followed. This also makes the link 
attribute a useful fallback when scripts are disabled. Example:

<span link="foo.html" onclick="return confirm('Visit Foo?')">Foo</span>
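
A rough sketch of that dispatch order as script (illustrative only - 
handleClick is a made-up name, not part of the proposal):

function handleClick(element, event) {
    var proceed = true;
    if (typeof element.onclick === "function") {
        // onclick runs first; returning false cancels the navigation
        proceed = (element.onclick.call(element, event) !== false);
    }
    if (proceed && element.getAttribute("link")) {
        window.location.href = element.getAttribute("link");
    }
}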

---LINK and Scripting---
The link attribute would make adding hyperlinks to DOM nodes easy:

node.link = 'http://foo.bar.baz'; /* Create a hyperlink on an element */
nodes_with_a_link = document.getElementsByAttribute('link'); /* Get all 
links. This method doesn't exist in the draft but can be written in 
javascript using iterators */
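
A rough sketch of how that helper could be written today 
(getElementsByAttribute is a made-up name, as noted above):

function getElementsByAttribute(name) {
    // Walk every element in the document and collect those
    // carrying the given attribute.
    var all = document.getElementsByTagName("*");
    var found = [];
    for (var i = 0; i < all.length; i++) {
        if (all[i].getAttribute(name)) {
            found.push(all[i]);
        }
    }
    return found;
}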


---LINK and Forms---
To avoid confusion the use of link on a form widget would either have no 
effect or be invalid.


---LINK and DOCTYPE---
The link attribute currently has meaning in pre-HTML4 documents as an 
attribute of the body tag (to define link color). Since this use has 
long been deprecated it should be alright for HTML5 to redefine it. To 
prevent issues with legacy pages browsers should only respect the link 
attribute when the DOCTYPE supports it, or, if no doctype is present, 
browsers should allow link on all elements except <body>.


---LINK and CSS---
Elements with hyperlinks would be styled using CSS attribute selectors 
(by the time HTML5 is ready all HTML5-capable browsers should handle 
these). The syntax would be standard CSS2:


*[link] {color:blue;} /*All links are blue*/
*[link]:visited {color:purple;} /* visited links are purple */
table[link] {background-color: #ffeeee;} /* hyperlinked tables have a 
pale red background */


I believe a link attribute would be a significant improvement to HTML. 
The only reasons I can think of not to add this would be the added 
complexity for browsers and authors during the transition period. The 
advantages include less markup, simpler DOM structure, nested 
hyperlinks, onclick fallbacks and better consistency in the spec. Being 
such a common element web authors will probably keep using <a href> for 
many years to come regardless of the standard, but that should not be a 
problem since <a href> and link should coexist quite easily in valid 
HTML. Once awareness has spread then future drafts could deprecate the 
href attribute on anchors.



Shannon


Re: [whatwg] Proposal for a link attribute to replace <a href>

2008-02-28 Thread Shannon

Markus Ernst wrote:
> Anyway, why do you suggest a new attribute rather than making the 
existing href attribute global?

Because I think some current and deprecated tags still use href for a 
different purpose (<base> for one). A global attribute should be unique. 
I don't really mind what the attribute is called.



Anne van Kesteren wrote:
> We have a FAQ entry on this -- quite common -- request:
>
> 
http://wiki.whatwg.org/wiki/FAQ#Does_HTML5_support_href_on_any_element_like_XHTML_2.0.3F

>
> Hope that helps!

I'm happy to see it's a common request but I really hope the FAQ entry 
doesn't represent a final decision. I strongly disagree with its 
conclusions, so I'll address each:



FAQ: * It isn't backwards compatible with existing browsers.

Not entirely true. I quote from the same FAQ:

What about Microsoft and Internet Explorer?
HTML 5 is being developed with IE compatibility in mind. Support for 
many features can be simulated using JavaScript.



So 'backwards-compatibility', as defined by the same document, can be 
achieved by using javascript to walk the DOM and add 
window.location.href = node.getAttribute('link') to the onclick handler 
of any nodes with a link attribute. I have done a very similar thing 
before to implement :hover on non-anchor elements in IE. Of course an 
author wouldn't have to use this new attribute at all, so 
backwards-compatibility is the designer's choice, not an issue with the 
proposed attribute.
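
A sketch of that shim (assuming the attribute is spelled link; 
emulateLinkAttribute is a made-up name):

function emulateLinkAttribute() {
    var all = document.getElementsByTagName("*");
    for (var i = 0; i < all.length; i++) {
        var url = all[i].getAttribute("link");
        if (url) {
            // A closure captures the URL for each element separately.
            all[i].onclick = (function (u) {
                return function () { window.location.href = u; };
            })(url);
        }
    }
}
window.onload = emulateLinkAttribute;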



FAQ: * It adds no new functionality that can't already be achieved 
using the a element.

Absolutely not true. A global attribute offers several features that <a> 
does not - most importantly nested links and the ability to hyperlink 
block-level and interactive elements without breaking validation.



FAQ:  * It doesn't make sense for all elements, such as interactive 
elements like input and button, where the use of href would interfere 
with their normal function.


As long as the spec is clear about which actions take precedence then 
this is not an issue. The spec should assume that if an author puts a 
link on a form element then they are *deliberately* interfering with its 
normal function. Trying to protect authors/users from their own bad 
choices is a very 'Microsoft' way of thinking and not really appropriate 
for a spec targeting web authors. There might be good reasons for doing 
this that are not immediately obvious.



FAQ: * Browser vendors have reported that implementing it would be 
extremely complex.


I find this claim incredible. How is a global link/href any more 
difficult than the existing implementations of onmouseup/down/whatever? 
It's basically the same thing - only *simpler* (no scripting, events, 
bubbling, etc).



So on all counts I find the claims in the FAQ incorrect and urge the 
WHATWG and browser vendors to reconsider the inclusion of a global link 
or href attribute.


Shannon



Re: [whatwg] Proposal for a link attribute to replace <a href>

2008-02-28 Thread Shannon

Paweł Stradomski wrote:
> In Shannon's message of Thursday, 28 February 2008:
> How should nested links work? Suppose I put href="http://a" on a 
<div> element and href="http://b" on a <span> inside that <div>. What 
should happen when the user clicks on that <span>? That's the reason why 
nested <a>'s are forbidden by HTML 4 and XHTML 1.
>
> I'm not against href on every element, but then nesting elements with 
href attribute should be forbidden. Similarly href should be forbidden 
on interactive elements (buttons etc.), so making it global would be a 
problem.


Browsers were a lot more primitive back then. I have used nested 
onclick() handlers in the real world and had no problems, nor did my 
users. I have also safely used onclick on form elements. The browser 
always knows which element is directly under the mouse (that's why 
:hover works). Only the link directly under the mouse should trigger. 
Again, this is behaviour that onclick and :hover already perform in all 
major browsers. As I've said before, href should not be forbidden on 
interactive elements; the spec should define the event hierarchy, e.g.: 
event->input->link/href
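
To make that concrete, here is a sketch of the 'innermost link wins' 
rule written as a single delegated handler (illustrative only):

document.onclick = function (event) {
    event = event || window.event;
    var node = event.target || event.srcElement;
    // Walk outwards from the clicked element; the first (innermost)
    // link attribute found is the one that triggers.
    while (node && node.getAttribute) {
        var url = node.getAttribute("link");
        if (url) {
            window.location.href = url;
            return;
        }
        node = node.parentNode;
    }
};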



Chris wrote:
>
> Tables should be used to present tabular data. 
> Tabular data is something the user may want to meditate or to copy;

> their content cannot be grasped in a glance.
> Hyperlinked text should be a concise description
> of the content of the other page;
> a table does not meet that requirement.
> This design violates the least surprise principle.
>
> ...
>
> It would be difficult to style a hyperlink within a hyperlink;
> moreover, a hyperlink that contains another hyperlink is not concise,
> see above.

You'll never eliminate bad design by putting these limitations in the 
spec. What you'll do is make bad designers work around them (using 
onclick() or ...). I've seen it happen more times than I can count. Bad 
designers don't validate their code either. On the other hand, designers 
who do know what they are doing can be artificially restricted or forced 
into non-compliance by these sorts of rules. What "makes sense" is 
really a matter of context (or clients), and cannot be discussed in the 
abstract like this. The point is there _are_ situations I have 
experienced myself where link/href would be a better alternative than 
<a> or onclick() and situations where nested interaction is useful and 
still makes sense to the user. Finally, style should follow the 
stylesheet rules like everything else, nested or not. There is not a 
strong case for making the browser distinguish the boundaries between 
nested links if the designer chooses not to.



Geoffrey Sneddon wrote:
> While yes, you could rely on something like that, it totally breaks 
in any user agent without scripting support. Nothing else, to my 
knowledge, in HTML 5 leads to total loss of functionality without 
JavaScript whatsoever.


Well, nothing except global/session/database storage, the "irrelevant" 
attribute, contenteditable, contextmenu, draggable, the video and audio 
elements, canvas and the connection interface. Some of these can't even 
be done in Javascript. Despite the efforts of this group 
backwards-compatibility is ultimately the author's responsibility, e.g.:

<div link="foo.html">foo</div>

<div link="foo.html"><a href="foo.html">foo</a></div>

> Nothing else reinvents the wheel for something with which we already 
have a perfectly fine solution.

If it were perfectly fine, designers wouldn't be asking for this. My 
view is that <a> and onclick do not solve all cases where hyperlinks are 
needed.



Overall I'd say that most objections to a global href tend to focus on 
all the bad things a designer could do with it. However, I deal with 
many designers, good and bad, and I can categorically state that the bad 
ones have no qualms about mangling pages using the Javascript, plugins 
and non-compliant markup already at their disposal. Every time you deny 
them a feature to make them "behave" you just encourage them to go out 
and write another hack. Making the language more flexible will not make 
things any worse. Using bad designers as a reason to deny features is 
basically an argument for aborting all work on new HTML5 features. I 
could name 100 ways to abuse the storage proposals yet we persist 
because they are useful. Global href is useful, and it should be 
discussed on the basis of correct usage and what its behaviour should be 
(i.e., my first post).



Shannon




Re: [whatwg] Thoughts on HTML 5

2008-03-03 Thread Shannon



On Sat, 01-03-2008 at 19:36 -0800, Nicholas C. Zakas wrote:

Perhaps it would be better named <callout>?



"Aside" is customary in dialogue annotations, 
I have never seen any "callout".


Chris

  

Call it . It may sound crude but it's hard to mistake its meaning.

Shannon


Re: [whatwg] several messages about

2008-04-14 Thread Shannon


If we go with something like a TYPE attribute, I hope we can give it a 
better name. However, hiding semantics inside the value of an 
attribute is poor markup design in my humble opinion. (Although it also 
has some advantages.)
  
It's subclassing: the general is sufficient, the specific better. Many 
markup languages use the design, and in this case, I think it's 
necessary.



The class="" attribute can handle this case.

  


I've seen a few suggestions now that class be used as an identifying 
attribute for purposes other than CSS. While this seems logical it 
raises some issues for designers and implementers. Consider the 
following:

<span class="book blue">The Neutronium Alchemist</span>

In this example which of these classes is the type, and which serve only 
as style? A type or rel attribute is the better solution since it is 
generally understood to have a single value. <book> is an option but as 
others have pointed out it leads to potentially millions of new tags.


There is also the issue to consider that website "developers" and 
website "designers" are usually a totally different species. Designers 
often have little understanding of how classes may be used in an 
application. The potential is high that the designer will use 
class="book" on a totally unrelated element which is bound to cause 
visual problems if the application/developer is using the class as a 
program element.


My proposed solution is to use the rel attribute, which basically serves 
this purpose already. It also has less potential for conflicts than the 
type attribute, since I have only ever seen rel used in the header 
whereas type has existing meaning for input fields and script tags.

<span rel="book">The Neutronium Alchemist</span>

Shannon


Re: [whatwg] several messages about

2008-04-14 Thread Shannon


All of them. "class" isn't intended for styling, it's intended to subclass 
elements. 
Regardless of the intention of the class attribute it is NOT used in the 
real world to subclass anything but styles and custom script. We may 
wish otherwise but that is irrelevant. The value of class to me is:


* To get style information out of the content stream.
* To allow the re-use and grouping of style information.
* To provide alternate styles for printing, or user choice.
* To identify related elements to javascript code.

If class attribute was supposed to represent logical relationships or 
groupings in the information/content then it has already failed.


The subclassing can then be used for style, but using 
presentational classes (that is, classes that describe the desired 
presentation instead of the reason for that desire, the subclass of the 
element) misses the point.


For example, saying class="blue" with an associated .blue { color: blue } 
will quickly become confusing if you decide those things should be red. It 
is in fact no better to use CSS that way than to use <font>.


  
Agreed, I would personally never use class="blue". It was intended for 
the example only so the distinction I was making between "types" and 
"styles" was obvious. Designers are not used to thinking about these 
things. They'll use whatever Dreamweaver offers them regardless of the 
purpose for the information they are styling. Regardless of the 
enlightened opinion here designers will continue to do "what works" 
rather than what "makes sense". You won't change that with a spec (that 
most won't ever read). I've done a 2-year formal course in 
computer-aided design and these things were just not taught.




All the people involved in the development of a site and its style sheets 
should of course agree on a set of class names.

In a perfect world, yes. In reality the people involved may not even 
work for the same company. I can see a situation arising where the 
"meaning" of classes is being assigned by a company like Google for use 
with their crawler but those classes are already in use for 
presentation purposes. How will the crawler know which uses are 
intentional and which are not? How will the designer know which classes 
are "reserved", when the system that will use them may not even exist yet?
  
I strongly disagree with the characterisations of the class attribute 
in this example


As do I but that isn't relevant to the problem. If you feel that class 
should have a purpose other than its widely used ones (styles and JS) 
then HTML5 must provide an alternative for these uses. I for one do not 
intend to use inline styles at all as I prefer keeping the design 
decisions outside of the HTML. That means you'll need to give me a list 
of all "meaningful" classes that might be used with <cite> and other 
elements - but of course nobody can.


On the other hand except for rel="stylesheet" the rel attribute does not 
have these encumbrances and so deserves consideration.



Shannon


Re: [whatwg] several messages about

2008-04-15 Thread Shannon


Ironically (given that you proposed using rel="" instead) as far as I know 
Google has never based anything on class values, but has used rel="" 
values (like rel="nofollow").


Which indicates to me that they were concerned enough about 
class="nofollow" to not use it. I personally think that "nofollow" is 
not a (rel)ationship and probably a misuse of that element. Anyway I'm 
not fixed on rel, it could be any name as long as it isn't type or 
class. It could be argued that conceptually "type", relationship" and 
"class" are three words that all mean exactly the same thing (the 
relationship of an object to its environment) but we have them all now 
and all apparently serving different purposes. Adding another attribute 
like category="movie" probably won't make things any easier.


For that reason I believe rel= for categories that "do" something and 
class= for categories that need styles/js is enough of a distinction as 
it helps keep designers and developers out of each other's way.


As do I but that isn't relevant to the problem. If you feel that class 
should have a purpose other than its widely used ones (styles and JS) 
then HTML5 must provide an alternative for these uses.



I don't understand why you think it's an alternative use. All of these 
uses are subclassing the element, the styling and scripting is then hooked 
on those subclasses.


  


It's alternative because it attempts to actually "classify" something 
rather than generically label it. I agree that class should only do the 
first and I do this with my own code but most designers do not. As far 
as the web design world is concerned class serves no purpose except as a 
JS/CSS hook. If you give class="book" or class="movie" special meaning 
or behaviour then you run the risk of clashing with existing stylesheets.


Right now the mainstream web is "misusing" class. If you suddenly make 
class meaningful then some sites are going to get stung through no fault 
of their own - since the intellectual distinction between "labels" and 
"classes" is of no concern to somebody putting pretty borders on a page.


Shannon


Re: [whatwg] several messages about

2008-04-15 Thread Shannon

Ian Hickson wrote:



We're not talking about making class meaningful. I'm not sure I understand 
what you are arguing against at this point.


The proposal is just that authors should use class="" to distinguish the 
various ways they use <cite> so that they can (e.g.) style them differently. 
Where is the spec unclear? I should rewrite it to avoid any ambiguities.
  


The spec is fine. I was referring to the discussion about adding a TYPE 
attribute for <cite>. Repeated below.



Anne van Kesteren wrote:
  

> > Ian Hickson wrote:
> > 

> > > > Then would you want different markup for book titles, movie 
> > > > titles, play titles, song titles, etc?  Or would you just expect 
> > > > the script to search IMDB for anything marked up with <cite>?

> > > 
> > > Again, I don't really know. I could see a use case for a "type" 
> > > attribute (as was suggested earlier in this thread), but that seems 
> > > like a slippery slope. Suggestions?
  
> > 
> > If we go with something like a TYPE attribute, I hope we can give it a 
> > better name. However, hiding semantics inside the value of an 
> > attribute is a poor markup design in my humble opinion. (Although it also 
> > has some advantages.)

> 
> It's subclassing: the general is sufficient, the specific better. Many 
> markup languages use the design, and in this case, I think it's 
> necessary.
  


The class="" attribute can handle this case.


We appear to be talking about "lookups", "script", "semantics" and 
"markup" here rather than "style"; presumably to create custom link 
behaviours and assist in automated document processing. Perhaps there is 
an assumption that processing will only occur within the scope of the 
current page or site (and presumably therefore under the control of a 
single entity). However if <cite> were to have a type then it's likely 
that the first systems to take advantage of it would be search-engines 
and catalogues. I feel that class should not be suggested as an 
alternative to a type attribute because it is completely unreliable for 
this usage, for reasons I won't repeat.


Using a type/rel/category attribute instead of class will assist in 
automated document processing and categorisation. It doesn't really 
matter whether a list of allowed types is defined or not since a 
search/directory crawler would probably deal with the uncommon or 
unsupported exceptions. But lumping the type of citation in with the 
class names used to style it is simply asking for trouble since it will 
also trigger any defined styles (probably unintentionally) and/or create 
nonsense categories like "book_class" in the processors' DB. I could 
imagine such a situation leading to the following catalogue output:


This article contains:
- 4 book citations
- 2 book_class citations
- 1 squiggly_underline citations

Hope that makes my position on this clearer. If I misunderstood 
somebody's comments then I apologise.


Shannon


Re: [whatwg] ALT and equivalent representation

2008-04-18 Thread Shannon

RE: Comments by Phillip Taylor and Bill Mason regarding alt=""

You both raise some excellent points. Logically alt should be optional 
since as you clearly demonstrate some things have no alternate textual 
meaning (at least not one of any value to the user). The trouble with 
alt="" (or no alt) is the unfortunate but extremely common tendency for 
designers to simply ignore the small percentage of people that need alt 
tags to access the internet. Clients will generally shop around for a 
web company that offers the lowest prices to provide the flashiest 
designs. There's a tendency for the lowest bidder to take shortcuts that 
the client will never "see", alt tags being one of these. To make 
matters worse some browsers display the alt tag while waiting for images 
to come from the server and this creates visual artifacts that designers 
and clients generally consider undesirable.


The end result of this is that alt tags tend to be seen as a burden by 
the majority of web designers I've met. The ONLY reason they get used at 
all is because validators complain about them not being included and 
because SEO companies are trying to stuff more keywords into the page. I 
often spend a considerable amount of time inserting alt tags that other 
designers consider optional. It is a debatable point whether these tags 
are a personal whim or an essential part of the contract. Essentially 
without some guidance from the specification it is my client who pays 
for my "charity" to disadvantaged users. I know that in most cases blind 
users do not form a significant enough percentage of their clientele to 
affect profits (it may be an art gallery for example). Also these are not 
government sites or contractors with mandated accessibility, and as far 
as I know there is no law requiring corporate sites to provide 
alternative text for blind users.


The ONLY "business" justification I have for using alt tags is that a 
w3c valid site REQUIRES them and this may increase the sites Google rank 
(which is just speculation really). If you take the requirement out to 
use them on every image in a valid site then you take away much of my 
argument for using them at all.


I think this is a case where logic must give way to corporate 
consideration, as public and charitable sites would probably use alt 
tags without being told, but 95% of the mainstream internet will not - 
given half a chance.



Shannon


Re: [whatwg] ALT and equivalent representation

2008-04-19 Thread Shannon

Henri Sivonen wrote:


Instead of having a layer of validitity speculation in between, 
couldn't you make the point that alt helps with SEO? To me linking alt 
and SEO directly is more to the point and more honest, too.




Whoa, don't do that! They'll just insist on you stuffing 100 characters 
worth of keywords in there. You can add insanity to the problems facing 
blind users on the web!


Shannon


Re: [whatwg] ALT and equivalent representation

2008-04-20 Thread Shannon

What about this as a possible solution?

<altgroup id="hippo">A hippopotamus</altgroup>
<img src="hippo_head.png" altgroup="hippo">
<img src="hippo_body.png" altgroup="hippo">
<img src="hippo_tail.png" altgroup="hippo">

I don't think this would raise any serious implementation issues as the 
logic is quite simple: if all elements in an altgroup are unavailable 
then display the value of the altgroup tag. The alt attribute would then 
be optional where altgroup is defined but required in all other cases.
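
In user-agent terms the rule is just the following (a rough sketch only; 
'loaded' stands in for whatever internal per-image state a browser 
actually keeps, it is not a real DOM property):

function altgroupText(images, groupValue) {
  // Show the group's text only when no image in the group rendered.
  for (var i = 0; i < images.length; i++) {
    if (images[i].loaded) return '';  // at least one image shows: no fallback
  }
  return groupValue;
}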


Shannon


Re: [whatwg] ALT and equivalent representation

2008-04-20 Thread Shannon

Shannon wrote:

What about this as a possible solution?

<altgroup id="hippo">A hippopotamus</altgroup>
<img src="hippo_head.png" altgroup="hippo">
<img src="hippo_body.png" altgroup="hippo">
<img src="hippo_tail.png" altgroup="hippo">

I don't think this would raise any serious implementation issues as the 
logic is quite simple;



Bill Mason wrote:
I think it would be more logical for the specification to support the 
common, existing, reasonable authoring practices than go through the 
expense of introducing both a new attribute and a new element.


Yes this proposal requires a new tag and attribute but that is a lot 
less disruptive than giving designers an easy way to opt out of 
accessibility (while still claiming compliance). I'd like to believe 
that designers would do the right thing without being told but I know 
for a fact most of them don't. The alt requirement for w3c validation is 
what got me using them in the first place so I know it's having some effect.



Smylers wrote:


What advantage does it have over Simon's proposal?

Simon's suggestion has the obvious advantage that it already works with
current browsers.

Smylers


Simon's suggestion is no different from the original proposal, the idea 
that alt can be optional on some images. I've already explained why I 
consider that a dangerous step backwards from an accessible web. 
Fallback for current browsers is something I overlooked but it is easy 
to do:

<img alt="A hippopotamus" src="hippo_head.png" altgroup="hippo">
<img alt="" src="hippo_body.png" altgroup="hippo">
<img alt="" src="hippo_tail.png" altgroup="hippo">

With the alt simply being overridden by altgroup in an HTML5 browser.


Shannon


Re: [whatwg] ALT and equivalent representation

2008-04-21 Thread Shannon

Benjamin Hawkes-Lewis wrote:


I think you've misunderstand Simon's suggestion, which was:

Rating: <img src="star.png" alt="3 out of 5"><img src="star.png" alt=""><img src="star.png" alt="">

Note /all/ the img elements have alt attributes, the point is the 
alternative text for the group is expressed by the first alt 
attribute. It's thus actually the same as the fallback you propose:


Not the same thing at all. There is no direct association between the 
elements so there is no way a validator or browser would know the 
difference between a missing/empty alt and an optional one - thus making 
ALL use of alt optional as far as formal validation is concerned. If you 
are implying a group can be denoted by being at the same block level or 
in the correct order in the stream (no intervening images) then I doubt 
that would work in practice.


Shannon


Re: [whatwg] ALT and equivalent representation

2008-04-21 Thread Shannon

Benjamin Hawkes-Lewis wrote:


But whether we need a mechanism for denoting differing img elements 
combine to form a single image is a very different question from 
whether alt should be optional or required. You seem to be conflating 
them.




How can <img> or <img alt=""> not be related to making alt optional?

Both represent a total lack of information with no explicit relationship 
to any other element. There is no way a parser can resolve whether this 
information is supplied previously or not. If the parser can't tell then 
it can't validate the alt requirement - thereby mandating that alt (that 
is the text, not the empty attribute) be optional for a conforming 
document (as far as a validator knows anyway). Once alt text becomes 
optional then it is likely that many designers/templates/wysiwyg 
applications will simply insert alt="" into every image to pass 
validation without consideration for blind users. It is this situation I 
am trying to avoid. A valid document should provide valid alt 
information, not empty ones. An altgroup supports this - empty alt tags 
do not.


Shannon



Re: [whatwg] Proposal for a link attribute to replace

2008-05-30 Thread Shannon
There's a lot of focus on "use cases". Here is the one that led me to 
start this thread:


http://www.duttondirect.com/automotive/for_sale (disclaimer: I am not 
responsible for the design of this page)


The table hover effect is not easily achieved without global href. My 
client likes it, the users like it and it is perfectly obvious 
navigation (despite being non-standard). At the moment I am achieving 
the effect with event bubbling but I consider this approach to be 
bloated, inelegant, prone to breakage and lag on slower devices. It also 
suffers from the poor compatibility of the event.button property 
(activates on right/middle-click instead of just left). Nonetheless it 
improves the ease of navigation for most users.
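
For the curious, the event-bubbling version is roughly the following (a 
sketch only; the table id and the attribute carrying the row URL are 
stand-ins for whatever the real page uses):

var table = document.getElementById('listings');  // hypothetical table id
table.onclick = function (e) {
  e = e || window.event;
  if (e.button && e.button > 1) return;  // event.button is inconsistent across browsers
  var node = e.target || e.srcElement;
  while (node && node.nodeName != 'TR') node = node.parentNode;
  if (node && node.getAttribute('href'))
    window.location = node.getAttribute('href');
};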


A global href would allow me to turn the whole mess of event code into:

<tr href="..."> ... </tr>

... and all the issues I just mentioned would vanish.

People on this list should be very careful about claiming properties and 
tags will be abused. Bad interfaces exist already and often as a result 
of missing behaviours in the standard. Wrapping tables and block content 
in <a> is just one example (it works, believe it or not). Trying to 
force designers into better layouts by denying features is stupid. It 
will simply drive them into invalid layouts, Javascript, Flash or 
Silverlight where they are free to make even bigger mockeries of 
standards and interface conventions. As far as designers are concerned 
HTML5 is a *competitor* to these technologies. If you cannot compete in 
terms of features and ease of use you'll end up with a proprietary web.



In summary then:

Is global href going to create bad layouts?
Depends. Skilled UI designers can improve their layouts - bad designers 
can make theirs worse.


Is global href a burden on browser vendors?
Unlikely. Its behaviour is nearly identical to 
onclick="window.location=foo" which is already supported on the majority 
of modern browsers except Lynx.


Is denying designers features they want going to increase standards 
compliance?

No. It will reduce compliance.

Regards,
Shannon




Re: [whatwg] TCPConnection feedback

2008-06-17 Thread Shannon




ISSUE.2) We now only send valid HTTP(s) over HTTP(s) ports.
  


I understand the reasoning but I do not believe this should be limited 
to ports 80 and 443. By doing so we render the protocol difficult to use 
as many (if not most) custom services would need to run on another port 
to avoid conflict with the primary webserver. I understand the logic for 
large public sites where "fascist firewalls" might prohibit other ports 
but for custom services (ie, remote-control, telnet emulation or 
accounting access) used within a company network this would be a real 
pain (requiring setup of reverse proxy or dedicated server). This would 
make a mockery of the whole premise of implementing services using "a 
few lines of perl".


Limiting to ports 80 and 443 doesn't really solve the security issues 
anyway. Many firewall/routers can be configured on this port and 
denial-of-service is just as effective (or more) against port 80 as any 
other port. The new proposal to use HTTP headers effectively allows 
arbitrary (in length and content) strings to be sent to any port 80 
device without informing the user. This would allow a popular page (say 
a facebook profile or banner ad) to perform massive DOS against web 
servers using visitors' browsers without any noticeable feedback (though 
I guess this is also true of current XMLHttpRequest objects).


I propose that there be requirements that limit the amount and type of 
data a client can send before receiving a valid server response. The 
requirements should limit:

* Length of initial client handshake
* Encoding of characters to those valid in URIs (ie, no arbitrary binary 
data)

* Number or retries per URI
* Number of simultaneous connections
* Total number of connection attempts per script domain  (to all URIs)

There should also be a recommendation that UAs display some form of 
status feedback to indicate a background connection is occurring.



HIXIE.3) No existing SMTP server (or any non-TCPConnection server) is going
to send back the appropriate handshake response.
  


It is always possible that non-http services are running on port 80. One 
logical reason would be as a workaround for strict firewalls. So the 
main defense against abuse is not the port number but the handshake. The 
original TCP Connection spec required the client to send only "Hello\n" 
and the server to send only "Welcome\n". The new proposal complicates 
things since the server/proxy could send any valid HTTP headers and it 
would be up to the UA to determine their validity. Since the script 
author can also inject URIs into the handshake this becomes a potential 
flaw. Consider the code:


tcp = TCPConnection('http://mail.domain.ext/\\r\\nHELO HTTP/1.1 101 
Switching Protocols\\r\\n' )


client>>
OPTIONS \r\n
HELO HTTP/1.1 101 Switching Protocols\r\n
HTTP/1.1\r\n

server>>
250 mail.domain.ext Hello \r\n
HTTP/1.1 101 Switching Protocols\r\n
[111.111.111.111], pleased to meet you

As far as a naive UA and mail server are concerned we have now issued a 
valid challenge and received a valid response (albeit with some 
unrecognised/malformed headers). The parsing rules will need to be very 
strict to prevent this kind of attack. Limiting to port 80 reduces the 
number of target servers but does not prevent the attack (or others like 
it).


It may be that simply stripping newlines and non-ascii from URIs is all 
that's required since most text-based protocols are line oriented 
anyway. It depends largely on how OPTIONS and CONNECT are interpreted.


One last thing. Does anybody know how async communication would affect 
common proxies (forward and reverse)? I imagine they can handle large 
amounts of POST data but how do they feel about a forcibly held-open 
bidirectional communication that never calls POST or GET? How would 
caches respond without expires or max-age headers? Would this hog 
threads causing apache/squid to stop serving requests? Would this work 
through Tor?


Shannon


Re: [whatwg] TCPConnection feedback

2008-06-17 Thread Shannon

Ian Hickson wrote:

On Wed, 18 Jun 2008, Shannon wrote:
  

ISSUE.2) We now only send valid HTTP(s) over HTTP(s) ports.
  
I understand the reasoning but I do not believe this should be limited 
to ports 80 and 443.



You misunderstand; it's not the ports that are limited, it's just that the 
traffic can now pass for HTTP. This would all still work over any 
arbitrary port.



  
The current draft for TCPConnection is quite clear about this. The 
unclear part is what a "Security Exception" is (currently undefined).


 WHATWG HTML5 Draft -- 
http://www.whatwg.org/specs/web-apps/current-work/multipage/comms.html 
-- Section 6.3.4  --

If either:

   * the target host is not a valid host name, or
   * the port argument is neither equal to 80, nor equal to 443, nor 
greater than or equal to 1024 and less than or equal to 65535,


...then the UA must raise a security exception.


HIXIE.3) No existing SMTP server (or any non-TCPConnection server) is 
going to send back the appropriate handshake response.
  

It is always possible that non-http services are running on port 80. One
logical reason would be as a workaround for strict firewalls. So the main
defense against abuse is not the port number but the handshake.



Indeed, we would need to very carefully define exactly what the server 
must send back, much like in the original protocol -- it would just look a 
lot more like HTTP. This would include at least one custom header or value 
that you wouldn't see elsewhere (e.g. the Upgrade: header with the magic 
value).
  
Since the script author can also inject URIs into the handshake this 
becomes a potential flaw.



Indeed, we'd have to throw if the URI wasn't a valid URI (e.g. if it 
included newlines).


  


I agree. Since the aim of the URI injection is to get an echo of a 
valid header it is important that the server response include illegal 
URI components that a server would not otherwise send. Newline could be 
part of a legitimate response from a confused server or one that echoes 
commands automatically, e.g.:


tcp = new 
TCPConnection('http://mail.domain.ext/Upgrade:TCPConnection/1.0' )


server>>
Upgrade:TCPConnection/1.0
Error: Unrecognized command.

Unlike my previous example this is a perfectly valid URI. Whatever the 
magic ends up being it should aim to include illegal URI characters 
(angle-brackets, white-space, control characters, etc.) in an arrangement 
that couldn't happen accidentally or through clever tricks, e.g.:


Magic: <magic value>\r\n

This example magic includes at least three characters that cannot be 
sent in a valid URI (space, left angle-bracket, right angle-bracket) in 
addition to the newline and carriage returns.



One last thing. Does anybody know how async communication would affect 
common proxies (forward and reverse)? I imagine they can handle large 
amounts of POST data but how do they feel about a forcibly held-open 
by-directional communication that never calls POST or GET?



That's basically what TLS is, right? The simple solution would be to just 
tunnel everything through TLS when you hit an uncooperative proxy.


  


Not with a few lines of perl you don't.

Shannon



Re: [whatwg] TCPConnection feedback

2008-06-18 Thread Shannon

Frode Børli wrote:


XMLHttpRequest only allows connections to the origin server ip of the
script that created the object. If a TCPConnection is supposed to be
able to connect to other services, then some sort of mechanism must be
implemented so that the targeted web server must perform some sort of
approval. The method of approval must be engineered in such a way that
approval process itself cannot be the target of the dos attack. I can
imagine something implemented on the DNS servers and then some digital
signing of the script using public/private key certificates.

  
Using DNS is an excellent idea, though I would debate whether the 
certificate is needed in addition to the DNS record. Perhaps the DNS 
record could simply list domains authorised to provide scripted access. 
The distributed nature and general robustness of DNS servers provides 
the most solid protection against denial of service and brute-force 
cracking which are the primary concerns here. Access-control should 
probably be handled by the host's usual firewall and authentication 
methods, which is trivial once the unauthorised redirect issue is dealt with.
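
As a sketch, such a record might be no more than the following (the name 
and syntax here are invented purely for illustration, nothing like this 
has been specified):

; hosts whose scripts may open connections to this domain
_tcpconnect.example.com.  IN  TXT  "allow=www.example.com *.partner.ext"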


The biggest issue I see is that most UAs are probably not wired to read 
DNS records directly. This means adding DNS access and parsing libraries 
for this one feature. Having said that I can see a whole range of 
security issues that could be addressed by DNS access so maybe this is 
something that HTML5 could address as a more general feature. One 
feature that comes to mind would be to advertise expected server outages 
or /.'ing via DNS so the UAs could tell the user "Hey, this site might 
not respond so maybe come back later".


It is worth considering allowing scripts to access devices without said 
DNS rules but with a big fat UA warning, requiring user approval. 
Something like "This site is attempting to access a remote service or 
device at the address 34.45.23.54:101 (POP3). This could be part of a 
legitimate service but may also be an attempt to perform a malicious 
task. If you do not trust this site you should say no here.". This would 
address the needs of private networks and home appliances that wish to 
utilise TCPConnection services without having the desire or ability to 
mess with DNS zone files.




The protocol should not require any data (not even hello - it should
function as an ordinary TCPConnection similar to implementations in
java, c# or any other major programming language. If not, it should be
called something else - as it is not a TCP connection.

  
I agree completely. Just providing async HTTP is a weak use case 
compared to allowing client-side access to millions of existing 
(opted-in) services and gadgets.


Shannon



Re: [whatwg] TCPConnection feedback

2008-06-18 Thread Shannon



I think a major problem with raw TCP connections is that they would be
the nightmare of every administrator. If web pages could use every
sort of homebrew protocol on all possible ports, how could you still
sensibly configure a firewall without the danger of accidentally
disabling mary sue grandmother's web application?
  


This already happens. Just yesterday we (an ISP) had a company unable to 
access webmail on port 81 due to an overzealous firewall administrator. 
But how is a web server on port 81 more unsafe than one on 80? It isn't 
the port that matters, it's the applications that may (or may not) be 
using them that need to be controlled. Port-based blocking of whole 
networks is a fairly naive approach today. Consider that the main reason 
for these "nazi firewalls" is two-fold:
1.) to prevent unauthorised/unproductive activities (at schools, 
libraries or workplaces); and

2.) to prevent viruses connecting out.

Port-blocking to resolve these things doesn't work anymore since:
1.) even without plugins a "Web 2.0" browser provides any number of 
games, chat sites and other 'time-wasters'; and
2.) free (or compromised) web hosting can provide viruses with update 
and control mechanisms without creating suspicion by using uncommon 
ports; and
3.) proxies exist (commercial and free) to tunnel any type of traffic 
over port 80.


On the other hand port control interferes with legitimate services (like 
running multiple web servers on a single IP). So what I'm saying here is 
that network admins can do what they want but calling the policy of 
blocking non-standard ports "sensible" and then basing standards on it 
is another thing. It's pretty obvious that port-based firewalling will 
be obsoleted by protocol sniffing and IP/DNS black/whitelists sooner 
rather than later.


Your argument misses the point anyway. Using your browser as an IRC 
client is no different to downloading mIRC or using a web-based chat 
site. The genie of running "arbitrary services" from a web client 
escaped the bottle years ago with the introduction of javascript and 
plugins. We are looking at "browser as a desktop" rather than "browser 
as a reader" and I don't think that's something that will ever be 
reversed. Since we're on the threshold of the "Web Applications" age, 
and this is the Web Applications Working Group we should be doing 
everything we can to enable those applications while maintaining 
security. Disarming the browser is a valid goal ONLY once we've 
exhausted the possibility of making it safe.



Also keep in mind the issue list Ian brought up in the other mail.
Things like URI based adressing and virtual hosting would not be
possible with raw TCP. That would make this feature a lot less useable
for authors that do not have full access over their server, like in
shared hosting situations, for example.
  
I fail to see how virtual hosting will work for this anyway. I mean 
we're not talking about Apache/IIS here, we're talking about custom 
applications, scripts or devices - possibly implemented in firmware or 
"a few lines of perl". Adding vhost control to the protocol is just 
silly since the webserver won't ever see the request and the custom 
application should be able to use any method it likes to differentiate 
its services. Even URI addressing is silly since again the application 
may have no concept of "paths" or "queries". It is simply a service 
running on a port. The only valid use case for all this added complexity 
is proxying but nobody has tested yet whether proxies will handle this 
(short of enabling encryption, and even that is untested).


I'm thinking here that this proposal is basically rewriting the CGI 
protocol (web server handing off managed request to custom scripts) with 
the ONLY difference being the asynchronous nature of the request. 
Perhaps more consideration might be given to how the CGI/HTTP protocols 
might be updated to allow async communication.


Having said that I still see a very strong use case for low-level 
client-side TCP and UDP. There are ways to manage the security risks 
that require further investigation. Even if it must be kept same-domain 
that is better than creating a new protocol that won't work with 
existing services. Even if that sounds like a feature - it isn't. There 
are better ways to handle access-control for non-WebConnection devices 
than sending garbage to the port.


  

> [If a] protocol is decided on, and it is allowed to connect to any IP-address
> - then DDOS attacks can still be performed: If one million web
> browsers connect to any port on a single server, it does not matter
> which protocol the client tries to communicate with. The server will
> still have problems.



Couldn't this already be done today, though? You can already today
connect to an arbitrary server on an arbitrary port using forms, 
<img>, <script> and so on.

Re: [whatwg] TCPConnection feedback

2008-06-20 Thread Shannon
firewalls, authentication, host-allow, etc as appropriate. The only new 
issue TCPConnection or WebConnection introduce is the concept of a 
"non-user-initiated connection". In other words a remote untrusted 
server causing the local machine to make a connection without an 
explicit user action (such as checking mail in Outlook). I believe the 
proposed DNS extension combined with some form of explicit 
user-initiated privilege elevation reduces the two main threats: DDOS 
and browser-based brute-force attacks.




How would WebSocket connections be more harmful than something like

setInterval(function(){
  var img = new Image();
  img.src = "http://victim.example.com/" + generateLongRandomString();
}, 1000);

for example would?

 It's more harmful because an img tag (to my knowledge) cannot be used to
brute-force access, whereas a socket connection could. With the focus on
DDOS it is important to remember that these sockets will enable full
read/write access to arbitrary services whereas existing methods can only
write once per connection and generally not do anything useful with the
response.



What do you mean by brute-force access, and how could the proposed protocol
be used to do it. Can you provide an example?

Also, the proposed protocol will do a single HTTP request, just like the img
tag, and the response be hidden from the attacker if it wasn't the right
response. From a potential attacker's point of view, this is a write once
per connection where the only control they have over the request is the
value of the url. Attacking with this protocol is identical to attacking
with an image tag in every way that I can think of.


I have already provided two examples in previous posts but to reiterate 
quickly: this protocol as currently described can be manipulated to 
allow a full challenge-response process. This means I can make every 
visitor's browser continually attempt username/password combinations 
against a service, detect when access is granted, and continue to send 
commands following the handshake. IMG and FORM allow at most a single 
request to be sent before closing the connection and generally return 
the data in a form that cannot be inspected inside javascript. I have 
shown that by injecting a custom URI into the handshake I can 
theoretically force a valid server response to trick the browser into 
keeping the connection open for the purpose of DDOS or additional 
attacks. The difference is not in the ability to DDOS, it's in the 
ability to maintain a connection in the presence of a server challenge, 
despite WebSockets proposed safeguards (keeping in mind these proposed 
safeguards render the protocol useless for accessing legacy devices).



Shannon

PS (to list): Sorry if this post generates yet another thread. Thunderbird keeps truncating my replies in a way that 
seems to invalidate the thread history. I might need to shift to a proper newsreader.


Re: [whatwg] Web Sockets

2008-07-21 Thread Shannon
In order to understand this issue better I did some preliminary research 
into how HTTP and common implementations currently support the five 
primary requirements of the WebSocket/TCPSocket proposal; namely 
persistence, asynchronism, security, shared hosting and simplicity. 
After reading http://www.w3.org/Protocols/rfc2616/rfc2616-sec8.html I'm 
starting to suspect that both systems can be fully implemented without a 
new connection protocol.


Firstly, according to rfc2616 "In HTTP/1.1, persistent connections are 
the default behavior of any connection."


The other thing about persistent HTTP/1.1 connections is that they are 
already asynchronous. Thanks to pipelining the client may request 
additional data even while receiving it. This makes the whole websockets 
protocol achievable on current HTML4 browsers using a simple application 
or perl wrapper in front of the service ie:


service <--> wrapper <--> webserver (optional) <--> proxy (optional) 
<--> client


a simple pseudo-code wrapper would look like this:

wait for connection;
receive persistent connection request;
pass request body to service;
response = read from service;
response_length = length of response;
send Content-Length: $response_length;
send $response
close request or continue

A threaded wrapper could queue multiple requests and responses.
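
A rough sketch of such a wrapper, written here in Javascript against a 
Node-style socket API purely for illustration ('service' stands in for 
the real back-end, and the parsing is deliberately naive):

var net = require('net');  // assumed Node-style API, not part of any proposal

function service(input) { return 'echo: ' + input; }  // stand-in back-end

net.createServer(function (socket) {
  var buffer = '';
  socket.on('data', function (chunk) {
    buffer += chunk;
    var split = buffer.indexOf('\r\n\r\n');
    if (split === -1) return;            // request headers not complete yet
    var body = buffer.slice(split + 4);  // naive: assumes body arrives with headers
    buffer = '';
    var response = service(body);
    socket.write('HTTP/1.1 200 OK\r\n' +
                 'Content-Length: ' + response.length + '\r\n' +  // ASCII assumed
                 'Connection: keep-alive\r\n\r\n' +
                 response);              // connection stays open for the next request
  });
}).listen(8080);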

In theory (as I have yet to perform tests) this solution solves all 
websocket goals:


Simple: Can use CGI (taking advantage of webserver virtual-hosting, 
security, etc...) or basic script wrapper

Persistent: HTTP/1.1 connections are persistent by default
Asynchronous: Requests and responses can be pipelined, meaning requests 
and responses can be transmitted simultaneously and are queued.
Backwards-compatible: Should work with all common HTTP/1.1 compatible 
clients, proxies and servers.
Secure: To exploit a service you would require CGI or a dedicated 
application. ISPs tightly control access to these. SSL is easy to 
implement as a tunnel (ie. stunnel) or part of existing webserver.
Port sharing: This system can co-exist with existing 
webserver/applications on same server using CGI, transparent proxy or 
redirection.


Obviously some real-world testing would be helpful (when I find the 
time) but this raises the question of whether websockets is actually 
necessary at all. Probably the only part HTML5 has to play in this would 
be to ensure that Javascript can open, read, write and close a 
connection object and handle errors in a consistent manner. The 
handshaking requirement and new headers appear to complicate matters 
rather than help.



Shannon


Re: [whatwg] Web Sockets

2008-07-22 Thread Shannon
The current proposal spells out the URI/path parsing scheme. However 
this should be treated EXACTLY like HTTP so the need to define it in the 
spec is redundant. It is enough to say that the resource may be 
requested using a GET or POST request. Same with cookie handling, 
authorization and other HTTP headers. These should be handled by the 
webserver and/or application exactly as normal, there is no need to 
rewrite the rules simply because the information flow is asynchronous.


6.) Data framing specification

Redundant because HTTP already provides multiple methods of data segment 
encapsulation including "Content-Length", "Transfer-Encoding" and 
"Content-Type". Each of these have sub-types suitable for a range of 
possible WebSocket applications. Naturally it is not necessary for the 
client or server to support them all since there are HTTP headers 
explicitly designed for this kind of negotiation. The WebSocket should 
however define at least one fallback method that can be relied on (I 
recommend "Content-Length",  "Transfer-Encoding: chunked" and 
"Content-Type: multipart/form-data" as MUST requirements).


7.) WebSockets needs a low-level interface as well

By "dumbing down" the data transfer into fired events and wrapping the 
data segments internally the websocket hides the true communication 
behind an abstract object. This is a good thing for simplicity but 
extremely limiting for authors wanting to fine-tune an application or 
adapt to future protocols. I strongly recommend that rawwrite() and 
rawread() methods be made available to an OPEN (ie, 
authenticated/handshaked) websocket to allow direct handling of the 
stream. It would be understood that authors using these methods must 
understand the nature of both HTTP and websockets. In the same way a 
settimeout() method should be provided to control blocking/non-blocking 
behaviour. I can't stress enough how important these interfaces are, as 
they may one day be required to implement WebSockets 2.0 on "legacy" or 
broken HTML5 browsers.
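
Usage might look something like this (rawwrite/rawread/settimeout being 
the names proposed above, my suggestion only and not part of any draft):

var ws = new WebSocket('ws://example.com/service');
ws.onopen = function () {
  ws.settimeout(0);                                // proposed: non-blocking mode
  ws.rawwrite('Content-Length: 5\r\n\r\nhello');   // author-framed message
  var reply = ws.rawread(1024);                    // proposed: read up to 1024 raw bytes
};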


8.) Origin: / WebSocket-Origin:

Specifying clients allowed to originate a connection is a disaster 
waiting to happen for the simple reason that sending your origin is a 
privacy violation in the same vein as the referrer field. Any 
open-source browser or privacy plugin will simply disable or spoof this 
since it would allow advertising networks to track people by ad-serving 
via websockets. Such tracking undermines the security of anonymising 
proxies (as the "origin" may be a private site or contain a client id). 
Using origin as a required field essentially makes the use of "referrer" 
mandatory. If a websocket wants to restrict access then it will have to 
use credentials or IP ranges like everything else.


9.) WebSocket-Location

The scenario this is supposed to solve (that an application makes a 
mistake about what host it's on and somehow sends the wrong data) is 
contrived. What's more likely to happen is that a server application has 
trouble actually knowing its (virtual) hostname (due to a proxy, 
mod_rewrite, URL masking or other legitimate redirect) and therefore NO 
clients can connect. It isn't uncommon for the host value passed to a 
CGI script and the hostname returned by the environment (ie, via uname 
or OS library) to conflict. Then there is the matter of an SSL 
connection (no host header available). I'm having trouble determining 
why this should even matter. I suspect most simple applications/wrappers 
will just echo back the host header sent by the client so if a mistake 
is made it's likely to go unnoticed anyway.


10.) To close the Web Socket connection, either the user agent or the 
server closes the TCP/IP connection. There is no closing handshake.


HTTP provides a reliable way of closing a connection so that all parties 
(client, server and proxies) know why the connection ended. There is no 
reason for websockets to not follow this protocol and close the 
connection properly.



In conclusion, the current specification of WebSockets re-invents 
several wheels and does so in ways that are overly complex, error-prone 
and yet seriously limited in functionality. The whole concept needs to 
be approached from the position of making HTTP's features (which are 
already implemented in most UAs) available to Javascript (while 
preventing the exploit of non-HTTP services). I do not believe this is 
difficult if my recommendations above are followed. I do not wish to be 
overly critical without contributing a solution, so if there are no 
serious objections to the points I've made I will put time into 
reframing my objections as a complete specification proposal.



Shannon




Re: [whatwg] Web Sockets

2008-07-23 Thread Shannon


3.) If the resulting absolute URL has a  component, then let port 
be that component's value; otherwise, if secure is false, let port be 
81, otherwise let port be 815.


No, no, no! Don't let paranoia override common sense. Not all websocket 
applications will have the luxury to run on these ports (multiple web 
servers, shared host, tunnelled connections, 2 websocket apps on one 
host, etc...).



My mistake. I misread this as *requiring* port 81 and 815. It appears 
that is not the case. All ports are valid.


Shannon




Re: [whatwg] Web Sockets

2008-07-23 Thread Shannon



>
> 3.) If the resulting absolute URL has a  component, then let
> port be that component's value; otherwise, if secure is false, let
> port be 81, otherwise let port be 815.
>


I found this in rfc2817 section 1:

  The historical practice of deploying HTTP over SSL3 [3] has
  distinguished the combination from HTTP alone by a unique URI scheme
  and the TCP port number. The scheme 'http' meant the HTTP protocol
  alone on port 80, while 'https' meant the HTTP protocol over SSL on
  port 443.  Parallel well-known port numbers have similarly been
  requested -- and in some cases, granted -- to distinguish between
  secured and unsecured use of other application protocols (e.g.
  snews, ftps). This approach effectively halves the number of
  available well known ports.

  At the Washington DC IETF meeting in December 1997, the Applications
  Area Directors and the IESG reaffirmed that the practice of issuing
  parallel "secure" port numbers should be deprecated. The HTTP/1.1
  Upgrade mechanism can apply Transport Layer Security [6] to an open
  HTTP connection.


I believe we should rule out both new ports in favour of upgrading a 
port 80 connection to a WebSocket; however according to the same 
document the WebSockets proposal does not follow the expected 
client-side behaviour for doing so:


3.2 Mandatory Upgrade
If an unsecured response would be unacceptable, a client MUST send an 
OPTIONS request first to complete the switch to TLS/1.0 (if possible).


  OPTIONS * HTTP/1.1
  Host: example.bank.com
  Upgrade: TLS/1.0
  Connection: Upgrade


Nor does the WebSocket server supply a valid response:

3.3 Server Acceptance of Upgrade Request
As specified in HTTP/1.1 [1], if the server is prepared to initiate the 
TLS handshake, it MUST send the intermediate "101 Switching Protocol" 
and MUST include an Upgrade response header specifying the tokens of the 
protocol stack it is switching to:


  HTTP/1.1 101 Switching Protocols
  Upgrade: TLS/1.0, HTTP/1.1
  Connection: Upgrade

Obviously this is referring to TLS however WebSockets is also a protocol 
switch and should therefore follow the same rules.


I understand the reluctance to use a true HTTP handshake (hence the 
ws:// scheme and alternate ports) however I think the claims of added 
complexity on the server end are exaggerated (I say this as somebody who 
has written a basic standalone webserver). It seems to me we're only 
looking at required support for:


* Validating and parsing HTTP headers (that doesn't mean they are all 
understood or implemented, simply collected into a native 
structure/object/array)
* Handling (or simply pattern-matching) the Version, Upgrade and 
Connection headers
* Adding a Content-Length header before each message sent to the client 
and/or "chunk encoding" variable-length messages

* Sending and respecting the "connection close" message
* Sending "not implemented", "not authorised" and error status messages 
as needed.


Currently WebSockets requires practically all of these features as well, 
except that it implements them in non-standard fashion - effectively 
making asynchronous delivery via existing infrastructure (ie: CGI) a 
potentially more difficult and error-prone affair. In fact as it stands 
I would say the current proposal rules out both CGI and proxy support 
entirely since it cannot handle the addition of otherwise valid HTTP 
headers (such as Expires, X-Forwarded-For or Date) in the first 85 bytes.


Shannon




Re: [whatwg] Thoughts on HTML 5

2008-07-30 Thread Shannon

Ian Hickson wrote:

On Sat, 8 Mar 2008, Nicholas C. Zakas wrote:
  

From: Shannon <[EMAIL PROTECTED]>


Dnia 01-03-2008, So o godzinie 19:36 -0800, Nicholas C. Zakas pisze:


Perhaps it would better be named <callout>?
  
"Aside" is customary in dialogue annotations, I have never seen any 
"callout".

Call it <note>. It may sound crude but it's hard to mistake its 
meaning.
  

Oooh, I like that better.

@Chris - I understand what an "aside" is, I just know for a fact that 
most people do not. Shannon's suggestion of "note" makes much more sense 
to me than my suggestion of "callout".


Long live <note>!



<note> and <callout> aren't really generic enough. e.g. in HTML5 <aside> 
would be used for both the notes and the examples in the spec, but <note> 
would only sound like it was ok for the notes.


  
I think web developers would prefer <note>, <example>, <tip> 
or <warning>.


Shannon


Re: [whatwg] element now working in Firefox nightlies

2008-07-31 Thread Shannon

David Gerard schrieb:



  

I'm sure Apple and Nokia can join the party at their leisure.
  

I assume the latest move by Mozilla (which I think is great, obviously)
won't do anything to address the IP concerns of mentioned players.




The "IP concerns" are blatant FUD and it's ridiculous to describe them
in any other terms.


- d.

  
Seconded. However I believe this debate has run its course previously. 
At least I haven't heard any news to the contrary. I think we all knew 
Mozilla would support Ogg regardless of the final spec.


I am curious about the status of Dirac support though, since it was 
apparently finalised in January. Is this being planned? Would any other 
vendors care to comment on Dirac?


Shannon


[whatwg] Joined blocks

2008-07-31 Thread Shannon
Something I think is really missing from HTML is "linked text" (in the 
traditional desktop publishing sense), where two or more text boxes are 
joined so that content overflows the first into the second and 
subsequent boxes. This is a standard process for practically all 
multi-column magazines, books and news layouts. It is especially 
valuable for column layouts where the information is dynamic and 
variable in length and therefore cannot be manually balanced. This is 
not something that can be solved server-side since the actual flow is 
dependent on user style-sheets, viewport and font-size.


For the sake of disambiguation I'll call this "joined blocks", since 
linking has its own meaning in HTML and the content need not be text.


I honestly don't know if this is too difficult to implement, however it 
has been a standard feature of publishing software such as Pagemaker and 
Quark Xpress for over 10 years.


The markup would be something like:

style="float:right">




When reflowing, block elements are moved as a whole. If the block won't 
fit then it follows the overflow behaviour of the column. Inline 
elements are split by line.


For backwards-compatibility it must be legal to split the markup over 
each group member (manual or best-guess balancing). However an HTML5 
compliant browser would reflow to other members as though the combined 
markup originated in box 1.


There are two ways to implement this proposal with respect to CSS.
1.) Rewrite the DOM with the new layout. Closing objects that were split 
and propagating attributes.

2.) Rewrite the CSS parser.

Method 1 is probably simpler but has serious issues with the id 
attribute - since it must be unique and therefore cannot propagate to 
both halves of a split object. It could also create undesirable 
behaviour with respect to :first-line, :before and other selectors that 
the author would expect to apply to the element only once. Method 2 
solves most of these issues but it would probably be a significant 
rewrite of current parsers.


I accept this proposal may be difficult to implement but its use case is 
significant with regards to articles and blogs, especially in an era of 
user-submitted content and wide screen layouts.



Shannon


Re: [whatwg] Joined blocks

2008-07-31 Thread Shannon
I agree this is _mostly_ a CSS issue except that there is semantic 
meaning to the join attribute beyond layout. The attribute could serve 
as a guide to search engines, web-scrapers or WYSIWYG applications that 
two areas of the page should be considered a single piece of content. I 
am also unsure as to how this might affect other aspects of browser, 
javascript or DOM behaviour. There may be other uses or side-effects I 
can't imagine. At any rate CSS cannot associate elements so the join 
attribute should be considered independent of the style considerations 
as a means of saying "this block follows that one". Nonetheless I will 
do as you suggest.


Shannon


Ian Hickson wrote:

On Fri, 1 Aug 2008, Shannon wrote:
  
Something I think is really missing from HTML is "linked text" (in the 
traditional desktop publishing sense), where two or more text boxes are 
joined so that content overflows the first into the second and 
subsequent boxes. This is a standard process for practically all 
multi-column magazines, books and news layouts. It is especially 
valuable for column layouts where the information is dynamic and 
variable in length and therefore cannot be manually balanced. This is 
not something that can be solved server-side since the actual flow is 
dependent on user style-sheets, viewport and font-size.



I agree that this would be a useful feature for the Web platform. However, 
I believe the CSS working group is a better venue for exploring such 
options. I recommend forwarding your proposal to [EMAIL PROTECTED]


  




Re: [whatwg] Joined blocks

2008-08-01 Thread Shannon

Tab Atkins Jr. wrote:


This is definitely and distinctly a CSS issue, not a HTML one.  The 
fact that the contents of an element flow into another box elsewhere 
in the page has nothing to do with the underlying structure of the 
data - it's still a single cohesive element, and thus in html it would 
be marked up exactly as normal.  You just happen to be displaying it 
differently.


The accuracy of your statement depends largely on whether the 
specification allows the content source to be defined across all joined 
blocks or only in the first. For example:


<div id="col1" join="col2"><p>first para</p><p>second para</p></div>
... other unrelated markup ...
<div id="col2"><p>third para</p></div>

This markup would be common when the author is trying to support legacy 
or non-CSS browsers. The join attribute allows supporting agents to know 
that conceptually the third para follows on from the second. This might 
be useful for text or audio browsers to correctly concatenate related 
sections of text and for search engines trying to demarcate meaningful 
areas of the page. I strongly recommend that HTML5 define the join 
attribute and then allow the CSS group to define its behaviour in 
relation to visual styles. The 'class' attribute sets a precedent for 
this as it is defined in HTML despite generally having no implications 
beyond being a style hook. CSS cannot currently target elements except 
by their structural alignment to others and in many cases the blocks to 
be joined won't have a simple relationship. Targeting the id of joined 
elements with the 'join' attribute is still required regardless of how 
the CSS rules are implemented and this is the correct forum for new HTML 
attributes.




I've got some ideas in this regard, but we should move it to the CSS 
list, [EMAIL PROTECTED].


Already done. The topic is currently waiting on moderation.


Shannon


[whatwg] WebWorkers vs. Threads

2008-08-09 Thread Shannon
I've been following the WebWorkers discussion for some time trying to 
make sense of the problems it is trying to solve. I am starting to come 
to the conclusion that it provides little not already provided by:


setTimeout(mainThreadFunc,1)
setTimeout(workThreadFunc,2)
setTimeout(workThreadFunc,2)


This is especially true if our main function sets up a "thread-safe" 
communication channel of some sort.
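
The pattern being, roughly, cooperative chunking (process() and data are 
placeholders standing in for real work):

var data = [];               // placeholder: some large work list
var i = 0;
function process(item) { }   // placeholder: the actual work

function workChunk() {
  var end = Math.min(i + 1000, data.length);
  for (; i < end; i++) process(data[i]);
  if (i < data.length) setTimeout(workChunk, 0);  // yield so the page stays responsive
}
setTimeout(workChunk, 0);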


Obviously WebWorkers would make all this clearer and possibly easier but 
surely any number of free JS libraries could do that too. The only 
feature of WebWorkers that Javascript can't emulate is the ability for 
the worker to hang around after a page is closed; but I think this 
clearly falls into the category of a "web annoyance". When I close a 
page I want it gone - immediately. Passing workers around between pages 
sounds like something that would be good for advertising banner network 
trackers to avoid privacy settings.


I believe that babysitting developers (through feature elimination) is a 
bad idea. While WebWorkers aims to protect developers from common 
mistakes it will probably force them into adopting worse hacks and 
workarounds. There is still a common conception of Javascript as a toy 
language, and in many ways that is true. However HTML5 calls itself a 
"WebApplications" language and we are indeed entering a time where many 
sophisticated desktop applications are being ported to Javascript. This 
raises a new issue, which is how do you port a threaded application to a 
language with no real threads or mutexes? With great difficulty I imagine.


One of the Mozilla JS developers has come out gunning against 
traditional threading in JS so it may be we never see an official thread 
object anytime soon ("over his dead body", in his own words). I realise 
WebWorkers tries to solve his concerns (mainly race conditions) by 
preventing direct shared access to global variables but again this is 
something a threading library or good programming style should be able 
to solve. The workaround for webworker limitations is setTimeout hacks 
so I wonder whether this is going to create horrible hybrid 
webworker+setTimeout code that is generally unreadable.


Another issue with eliminating threads is that they are very desirable 
to developers. Because they are desirable it's likely that one or more 
browser vendors may go ahead and implement them anyway, essentially 
"embracing and extending" HTML5 and ECMAScript. If this happens then its 
likely a large number of popular multithreaded desktop applications will 
only be ported to those browsers. History has already shown us the 
problems this causes.


I am aware that the use cases and feature set of Web Workers has not 
been agreed on yet and there may be things I've overlooked. However I 
would much rather see an API that gives the developer more options and 
allows them to use or abuse them as required than a crippled API that 
pushes them into proprietary extensions, plugins and hacks to achieve 
something that every other major language already provides.


Shannon


Re: [whatwg] WebWorkers vs. Threads

2008-08-10 Thread Shannon

Jonas Sicking wrote:

Shannon wrote:
I've been following the WebWorkers discussion for some time trying to 
make sense of the problems it is trying to solve. I am starting to 
come to the conclusion that it provides little not already provided by:


setTimeout(mainThreadFunc,1)
setTimeout(workThreadFunc,2)
setTimeout(workThreadFunc,2)


Web workers provide two things over the above:

1. It makes it easier for the developer to implement heavy complex 
algorithms while not hanging the browser.

2. It allows web pages to take advantage of multicore CPUs.

details:
What you describe above is also known as cooperative multithreading. 
I.e. each "thread" has to manually stop itself regularly and give 
control to the other threads, and eventually they must do the same and 
give control back.


However this means that you have to deep inside your threads algorithm 
return out to the main event loop. This can be complicated if you have 
a deep callstack with a lot of local variables holding a lot of state.


Thus 1. Threading is easier to implement using workers since you don't 
have to escape back out to the main event loop.


Also, web workers allow the browser to spin up real OS threads and 
offload the worker execution there. So if you have a multicore CPU, which 
is becoming very common today, the work the page is doing can take 
advantage of more cores, thus producing better throughput.


I'm also unsure which mozilla developer has come out against the idea 
of web workers. I do know that we absolutely don't want the 
"traditional" threading APIs that include locks, mutexes, 
synchronization, shared memory etc. But that's not what the current 
spec has. It is a much much simpler "shared nothing" API which already 
has a basic implementation in recent nightlies.


/ Jonas



I assumed setTimeout used real threads but I'm not advocating its use 
anyway. I think Lua co-routines solve every issue you raise. I hope 
WebWorkers will follow this model because I know from experience they 
are very easy to use. The basic features are:


* each coroutine gets a real OS thread (if available).
* coroutines can access global variables; Lua handles the locking itself.
* yield and resume are available, but optional.
* coroutines are garbage-collected when complete.
* coroutines run a function, not a file. There is no need for a separate 
file download.


the syntax is:

function workerThreadFunction()
  ... do stuff ...
end

workerThread1 = coroutine.create( workerThreadFunction )

A Javascript implementation could also assist the programmer by 
automatically skipping threads that are waiting on IO or blocked waiting 
on user input since these actions usually represent a large fraction of a 
web page workload.


Maybe I misunderstand the concept of "shared nothing" but I think 
denying access to global objects is unwise. Maybe in a low-level 
language like C that's a bad thing but high-level languages can 
serialise simultaneous access to variables to prevent crashes and 
deadlocks. Performance can be improved by explicitly declaring private 
thread variables using var.


If coroutines are adopted I hope they will be called "coroutines". 
WebWorkers sounds silly and doesn't really assist in understanding their 
purpose (you have to already know what they are to understand the analogy).


I think this proposal belongs in an ECMAScript discussion group but I 
only bring it up here due to my extreme dislike of the current 
WebWorkers proposal. I think the best way forward is to drop WebWorkers 
completely from HTML5 and let the ECMAScript group look at it for JS 2.0 
or 3.0.


Shannon


Re: [whatwg] WebWorkers vs. Threads

2008-08-12 Thread Shannon





--http://lua-users.org/wiki/CoroutinesTutorial

Is this description incorrect? It seems at odds with what you said
about Lua coroutines getting an OS thread (if one is available).


The description you quoted from lua-users.org is correct.  The primary 
implementation of Lua is 100% portable ISO C, it therefore does _not_ 
use OS threads for coroutines.  I think there may be 
extensions/modifications produced by other parties that provide that.


David Jones
Ravenbrook Limited

Sorry about the misinformation. I've been working on Lua programs with 
thousands of coroutines running and never noticed any artifacts or 
delays to indicate the execution wasn't truly parallel. What I don't 
understand is why it doesn't appear to block other coroutines on IO but 
since that's not relevant to creating threaded workers I'll leave that 
mystery for future research.


I thought Lua had an implementation of automated locking to use as a 
reference but since it doesn't I have nothing more to offer on the 
subject. Without knowing the internals of existing JS implementations I 
have no idea what would be involved to provide automated locking and 
whether it is impossible or just difficult.


Shannon








[whatwg] WebWorker questions

2008-08-12 Thread Shannon

A few questions and thoughts on the WebWorkers proposal:

If a WebWorker object is assigned to a local variable inside a complex 
script then it cannot be seen or stopped by the calling page. Should the 
specification offer document.workers or getAllWorkers() as a means to 
iterate over all workers regardless of where they were created?


Is it wise to give a web application more processing power than a single 
CPU core (or HT thread) can provide? What stops a web page hogging ALL 
cores (deliberately or not) and leaving no resources for the UI mouse or 
key actions required to close the page? (This is not a contrived 
example, I have seen both Internet Explorer on Win32 and Flash on Linux 
consume 100% CPU on several occasions). I know it's a "vendor issue" but 
should the spec at least recommend UAs leave the last CPU/core free for 
OS tasks?


Can anybody point me to an existing Javascript-based web service that 
needs more client processing power than a single P4 core?


Shouldn't an application that requires so much grunt really be written 
in Java or C as an applet, plug-in or standalone application?


If an application did require that much computation isn't it also likely 
to need a more efficient inter-"thread" messaging protocol than passing 
Unicode strings through MessagePorts? At the very least wouldn't it 
usually require the passing of binary data, complex objects or arrays 
between workers without the additional overhead of a string encode/decode?


Is the resistance to adding threads to Javascript an issue with the 
language itself, or a matter of current interpreters being non-threadsafe?


The draft spec says "protected" workers are allowed to live for a 
"user-agent-defined amount of time" after a page or browser is closed. 
I'm not really sure what possible value this could have since as an 
author we won't know whether the UA allows _any_ time and if so whether 
that time will be enough to complete our cleanup (given a vast 
discrepancy in operations-per-second across UAs and client PCs). If our 
cleanup can be arbitrarily cancelled then isn't it likely that we might 
actually leave the client or server in a worse state than if we hadn't 
tried at all? Won't this cause difficult-to-trace sporadic bugs caused 
by browser differences in what could be a rare event (a close during 
operation Y instead of during X)?


I just don't see any common cases where you'd _need_ multiple OS threads 
but still be willing to accept Javascripts' poor performance, Webworkers 
limited API, and MessagePorts' limited IO. The only things I can think 
of are new user annoyances (like delaying browser shutdown and hogging 
the CPU). Sure UA's might let us disable these things but then some 
pages won't work. The Working Draft 
<http://stuff.gsnedders.com/spec-gen/webworkers.html> lists a few 
examples, most of which appear to use non-blocking network IO and 
callbacks anyway. Other examples rely on the ability for workers to 
outlive the lifetime of the calling page (which is pretty contentious). 
The one remaining example is a contrived mathematical exercise. Is the 
scientific world really crying out for complex theorems to be solved in 
web browsers? What real-world use cases is WebWorkers supposed to solve?


I would like to see WebWorkers happen but as an author and a user I have 
serious concerns about using it in its current form. Is it really worth 
implementing or should more attention be paid to fixing non-thread-safe 
practices in the specification so future UAs can better manage threading 
internally (ie: video, IO, sockets, JS all running on separate threads 
or even sets of threads per open tab/window)?


Shannon


Re: [whatwg] WebWorkers vs. Threads

2008-08-13 Thread Shannon

Jonas Sicking wrote:

Shannon wrote:
I've been following the WebWorkers discussion for some time trying to 
make sense of the problems it is trying to solve. I am starting to 
come to the conclusion that it provides little not already provided by:


setTimeout(mainThreadFunc,1)
setTimeout(workThreadFunc,2)
setTimeout(workThreadFunc,2)


Web workers provide two things over the above:

1. It makes it easier for the developer to implement heavy complex 
algorithms while not hanging the browser.


I suppose the limitations of the current approaches depends largely on 
what Javascript actions actually block a setTimeout or callback 
"thread". I keep being told WebWorkers solves this problem but I don't 
know any examples of code or functions that block the running of other 
callbacks. As with Lua I have always treated setTimeout as a means to 
execute code in parallel with the main "thread" and never had an issue 
with the callback or main loop not running or being delayed.


What you describe above is also known as cooperative multithreading. 
I.e. each "thread" has to manually stop itself regularly and give 
control to the other threads, and eventually they must do the same and 
give control back.


Actually I was referring to the browser forcefully interleaving the 
callback execution so they appear to run simultaneously. I was under the 
impression this is how they behave now. I don't see how Javascript 
callbacks can be cooperative since they have no yield statement or 
equivalent.
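
The closest an author can get is to return out of the function and 
reschedule the remainder by hand, e.g. (a sketch; all names are 
illustrative):

function cooperativeStep(state) {
   state.i = (state.i || 0) + 1;      // one unit of work (illustrative)
   if (state.i < state.total) {
      setTimeout(function () { cooperativeStep(state); }, 0);   // "yield"
   }
}

cooperativeStep({ total: 100000 });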


I'm also unsure which mozilla developer has come out against the idea 
of web workers. I do know that we absolutely don't want the 
"traditional" threading APIs that include locks, mutexes, 
synchronization, shared memory etc. But that's not what the current 
spec has. It is a much much simpler "shared nothing" API which already 
has a basic implementation in recent nightlies.


He wasn't against WebWorkers, he was, as you say,  against full 
threading (with all the mutexes and locks etc... exposed to the JS 
author). I can't find the reference site but it doesn't really matter 
except from the point of view that many people (including myself) aren't 
convinced a full pthread -like API is the way to go either. I just don't 
see why locking can't be transparently handled by the interpreter given 
that the language only interacts with true memory registers indirectly.


In other news...

Despite the feedback I've been given I find the examples of potential 
applications pretty unconvincing. Most involve creating workers to wait 
on or manage events like downloads or DB access. However Javascript has 
evolved a fairly complex event system that already appears to provide a 
reasonable simulation of parallelism (yes it isn't _true_ parallel 
processing but like Lua's coroutines that isn't really apparent to the 
end user). In practice this means long-running actions like downloading 
and presumably DB interaction are already reasonably "parallel" to the 
main execution thread and/or any setTimeout "subprocesses". I would 
suggest it is even possible for future browsers to shift some of these 
activities to a "true" thread without any need for the author's explicit 
permission.


I would really prefer that WebWorkers were at a minimum a kind of 
syntactic sugar for custom callbacks (ie setTimeout but with arguments 
and a more appropriate name). With the exception of OS threads it seems 
to me that WebWorkers is at best syntactic sugar for existing operations 
but with zero DOM access and serious IO limitations. Also unlike 
existing options and normal threading conventions the WebWorker is 
forced to download its code and receive its arguments as strings rather 
than have its code passed in as a function reference and its arguments 
passed by reference or by value. I know all the reasons why these 
limits exist; I'm just saying I think they render the whole proposal 
mostly useless; kind of like a car that only runs on carrots.


I have come up with one valid use case of my own for the current 
proposal: distributed computing like SETI or Folding@home in Javascript. 
This would allow you to participate ALL of your multi-core or SMP 
computer resources to the project(s) just by visiting their site. 
However on further consideration this has two major flaws:


1.) Being an interpreted language and having no direct access to MMX, 
GPUs and hardware RNGs makes Javascript a poor choice for intensive 
mathematical applications like these. I would expect a plugin or 
standalone version of these tools to have anywhere from a 10x to 10,000x 
improvement in performance depending on the calculations being performed 
and the hardware available. Yes there are a few more risks and a few 
more clicks but I wonder whether just having access to a few more 
threads will sway these gro

Re: [whatwg] WebWorkers vs. Threads

2008-08-13 Thread Shannon
irect access to these the only useful thing a 
worker can do is "computation" or more precisely string parsing and 
maths. I've never seen a video encoder, physics engine, artificial 
intelligence or gene modeller written in javascript and I don't really 
think I ever will. Apart from being slow there is the obvious 
correlation that anything that complex is:


a.) The realm of academics and science geeks using highly parallel 
specialist systems and languages, not web developers.
b.) Valuable enough to be commercial software - and therefore requiring 
protection against illicit copying (something Javascript can't provide).


Shannon




Re: [whatwg] WebWorkers vs. Threads

2008-08-14 Thread Shannon
rgetting the ability to do synchronous IO and the ability to
share workers between pages. Both of these benefits have been
explained in previous messages.
  


Once again someone mentions synchronous IO. I'm unfamiliar with any 
blocking Javascript IO operations except those explicitly created by the 
author (and I generally disagree with their logic for doing so). XHR is 
non-blocking. Even imageObject.src = 'pic.jpg' is non-blocking. I'm 
still waiting for somebody to tell me what Javascript operations 
actually block the UI except where the author has made a conscious 
decision to do so; ie:


longRunningFunction()
vs.
setTimeout(longRunningFunction,0)
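
To illustrate the difference (a sketch; the busy loop and element id are 
illustrative only):

function longRunningFunction() {
   for (var i = 0; i < 1e8; i++) {}   // stands in for real work
}

document.getElementById('status').textContent = 'working...';

longRunningFunction();                // runs before the browser can repaint,
                                      // so 'working...' may never appear

setTimeout(longRunningFunction, 0);   // the current event completes (and the
                                      // page repaints) before the work begins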

As for sharing workers between pages, this is a property of 
MessagePorts, not WebWorkers. I could easily create a coroutine, thread 
or even a setTimeout loop to acheive the same thing provided I only send 
primitive data rather than object references (which is all MessagePorts 
allows anyway). WebWorkers makes this easier yes but so would a better 
proposal. This isn't a matter of WebWorkers vs. nothing. It's about 
whether WebWorkers limitations, no matter how well intentioned, make it 
useful at all to web developers.


This discussion has helped me understand your reasoning behind 
webworkers but truthfully I always knew the general 'why' of it. What 
I'm trying to find out is whether anybody has a genuine need for a 
Javascript compute node or whether authors would be better served by 
threads or coroutines that manage a shared DOM according to the rules of 
normal multitasking paradigms that have served us since the first SMP 
machines were built.


I've trawled through many sources of information since starting this 
discussion and the overall impression I get is:


a.) Nobody has ever created a successful wait-free, lock-free system for 
x86 hardware.
b.) No one solution to this problem has ever been suitable to more than 
a subset of parallel applications.
c.) Despite its faults simple locking is currently the most common and 
successful paradigm for multi-core environments.


Which leaves me with:

a.) WebWorkers solves a specific class of problems (multiple 
compute/logic nodes for mathematical and scientific applications)
b.) Threads solves another set of problems (multiple action nodes on a 
large common dataset for general computing)
c.) WebWorkers and Threads may not be mutually exclusive. A thread could 
probably host or interact with a WebWorker and vice-versa.


Which leaves me thinking there is a good argument for having both 
paradigms at some point rather than one or the other. Any thoughts on this?



At this point I suspect we will have to agree to disagree. Perhaps
keep an eye on the spec as it continues to evolve. Perhaps it will
start to grow on you.
  


To do that it would have to at minimum allow the passing of Javascript 
primitives. Booleans, Integers, Floats, Strings, Nulls and Arrays should 
be passed by value (removing any custom properties they might have been 
given). Marshalling everything through Unicode strings is a terrible idea.
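
For what it's worth, this is the kind of workaround authors are left with 
under the current draft, with everything hand-serialised to a string (a 
sketch; the worker script URL is hypothetical and JSON support is assumed):

var worker = new Worker('worker.js');         // hypothetical script URL
worker.postMessage(JSON.stringify({ op: 'sum', values: [1, 2, 3] }));
worker.onmessage = function (event) {
   var result = JSON.parse(event.data);       // strings in, strings out
   // ... use result ...
};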


Shannon



Re: [whatwg] WebWorkers vs. Threads

2008-08-14 Thread Shannon

Shannon wrote:
Think about the kind of applications that use parallel "compute nodes" 
and you'll realise that 98% don't exist outside of academia and 
laboratories due to synchronisation, network latencies and other 
issues that implementing Javascript workers won't solve. More 
importantly though there is a lack of general computing software that 
requires this model.


On second thoughts I withdraw these claims. I don't have the statistics 
to know one way or the other why "portable threads" are more prevalent 
than "share nothing" ones. There may be many reasons but latencies 
probably isn't one of them. It could just be fashion or convenience.


Shannon




[whatwg] Client-side includes proposal

2008-08-17 Thread Shannon
The discussion on seamless iframes reminded me of something I've felt 
was missing from HTML - an equivalent client functionality to 
server-side includes as provided by PHP, Coldfusion and SSI. In 
server-side includes the document generated from parts appears as a 
single entity rather than nested frames. In other words the source code 
seen by the UA is indistinguishable from a non-frames HTML page in every way.


iframes are good for some things but they can be really messy when 
you're trying to build a single seamless page with shared styles and 
scripts from multiple files. It makes code reuse a real pain without 
relying on a server-side dynamic language. The seamless iframes proposal 
doesn't really address this well because you'll have more than one HTML 
and BODY element causing strange behaviour or complex exceptions with 
seamless CSS.


The other issue with iframes is that for many page snippets the concept 
of a title, meta tags and other headers don't make sense or simply 
repeat what was in the main document. More often than not the <head> 
section is meaningless yet must still be included for the frame to be 
"well-formed" or indexed by spiders.


The proposal would work like this:

--- Master Document ---

<html>
  <head>
    <title>Include Example</title>
  </head>
  <body>
    <include href="Header.html">
    <include href="http://www.pagelets.com/foo.ihtml">
  </body>
</html>


--- Header.html ---

<h1>Header</h1>



With this proposal seamless CSS would work perfectly because child 
selectors won't see an intervening <iframe> element between sections.


Includes should allow any html segments except the initial <html> and 
<head> (for reasons explained below) and should allow start and end tags 
to be split across includes. Only tags themselves may not contain an 
include (eg, an include placed inside a tag's attribute list). Many 
server-side includes allow this but it breaks the syntax of HTML/XML.


Includes must respect their own HTTP headers but inherit all other 
properties, styles and scripts from the surrounding page. If an include 
is not set to expire immediately the browser should reuse it from 
memory, otherwise it should retrieve it once for each include. Each 
behaviour has its own merits depending on the application.
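
Ordinary HTTP caching headers on the include's response would be enough 
to select between these behaviours, roughly:

Cache-Control: max-age=3600     (static include: safe to reuse from memory)
Cache-Control: no-cache         (dynamic include: fetched per occurrence)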


The standard would recommend (but not require) includes to use an .ihtml 
extension. This will make it easier for authors, UAs and logging systems 
to distinguish partial and complete pages (ie, not count includes 
towards page views in a stats package).


UAs or UA extensions like the Mozilla-based "Web Developer" should allow 
the user to view the actual source and the "final" source (with all 
includes substituted).


HTTP 1.1 pipelining should remove any performance concerns that includes 
would have over traditional SSI since the retrieval process only 
requires the sending of a few more bytes of request and response 
headers. In some ways it is actually better because UAs and proxies can 
cache the static includes and only fetch the dynamic parts.


The only real issue with this proposal is security for untrusted content 
like myspace profiles. Traditional sanitisers would be unfamiliar with 
<include> and may allow it through, providing a backdoor for malicious 
code. For this reason it is necessary that includes be opt-in. The 
simplest mechanism is to use a meta tag in the head of the master document:
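
One hypothetical form (the attribute names here are illustrative only, 
not a settled syntax):

<meta name="client-side-includes" content="allow">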




I would consider any content system that allowed untrusted users to 
write their own head tags to be incurably insecure; however this 
requirement should ensure that the majority do not suddenly experience a 
wave of new exploits in HTML5 browsers.


Shannon


Re: [whatwg] Client-side includes proposal

2008-08-18 Thread Shannon

Ian Hickson wrote:

On Mon, 18 Aug 2008, Shannon wrote:
  
The discussion on seamless iframes reminded me of something I've felt 
was missing from HTML - an equivalent client functionality to 
server-side includes as provided by PHP, Coldfusion and SSI.



What advantage does this have over server-side includes?

The <iframe seamless> idea has the advantage of not blocking 
rendering, which a client-side parsing-level include would. I don't really 
see what the advantage of a client-side parsing-level include would be.
  
Cost. SSI of any description generally puts you on a "business" hosting 
plan with a cost increase of up to 10x. Client-side includes only 
require static page serving which can also be beneficial where the 
server is a device (like a router interface).


Availability. As far as I can tell SSI is only available on  Apache and 
later versions of IIS. This may be a market majority but when you 
consider the number of devices and "home servers" coming onto the market 
with basic web interfaces the actual availability of SSI may be lower 
than you'd expect.


Security. Availability is reduced even further by ISP and organisations 
that flat-out refuse to support SSI due to security and support concerns.


Reuse. SSI follows no agreed set of rules and therefore may require code 
changes when changing hosting provider. Some systems provide extensions 
that authors may assume is part of a standard, but aren't. We have an 
opportunity to define a simpler and more reliable interface that is 
independent of any server configuration.


Speed. Concerns about speed are generally only valid for the first page 
on the first visit to a site. Subsequent pages can be much faster than 
SSI and even static html since common banners and footers can be cached 
separately - requiring only a unique content download. This is less 
trivial than it sounds since global drop-down menus, ad frames, tracking 
code, overlays and embedded JS and CSS often account for a vast majority 
of the source code.


Flexibility. It's hard to tell because the seamless proposal is vague 
but from what I can tell there are a lot of things a "seamless" iframe 
is not seamless about. For instance can absolutely positioned items be 
positioned relative to the enclosing document? Do child and adjacent 
selectors work across the iframe boundary? Will IE give up its behaviour 
of placing iframes above the document regardless of z-index? Includes 
don't have any of these issues because they are treated as part of the 
document, not as a special case.


Even with these advantages I do not believe it is an either/or case. 
"seamless" iframes serve a purpose too and I do not want to see them 
dropped from the spec. I would however like to see more clarity in the 
spec as to how they interact with scripts and styles (especially 
adjacency selectors)  in the master document and neighbouring seamless 
frames.



HTTP 1.1 pipelining should remove any performance concerns that includes
would have over traditional SSI since the retrieval process only 
requires the sending of a few more bytes of request and response 
headers.



A TCP round-trip is very expensive. A client-side parsing-level include 
would mean that the parser would have to stop while a complete round-trip 
is performed. There's really no way to get around that short of making it 
a higher-level construct like <iframe>.


  
There is actually an easy solution for this, though it is less flexible 
than my original proposal. The solution is to require each include to be 
balanced (equal number of open and close tags) so the surrounding block 
is guaranteed to be a single node. Anything left open is forcefully 
closed (as when reaching </html> with open blocks). In other words:


<div>
   <include href="content.html">
</div>


Since we know "content" is a closed block we can draw in a transparent 
placeholder and continue rendering the outer document as we do with img, 
video, iframes and embed. Once the true dimensions are known the 
renderer can repaint as it does with table cells and other "auto" 
sizing. This will often improve the readability of documents on slower 
connections since the top third of source code is usually concerned with 
banners, menus, search-bars and other cruft not directly relevant to the 
content the user wants to view and this is exactly the stuff you would 
want to put in an include to begin with. If it renders last then all the 
better.


Shannon


Re: [whatwg] Client-side includes proposal

2008-08-18 Thread Shannon

Kristof Zelechovski wrote:

Client-side includes make a document transformation, something which
logically belongs in XSLT rather than HTML. And what would the workaround for
legacy browsers be?
Chris
  


You make good points but last I checked general browser and editor 
support for XSLT was abysmal. Everyone is saying its on their "roadmaps" 
though so maybe it will one day be reliable enough to use.


You could go:


   <include href="banner.html">Banner</include>


But this just seems wasteful, pointless and open to abuse. I think a 
better workaround is that people with legacy browsers download each 
include file separately and paste them together in DOS or AmigaOS or 
whatever system it is that keeps them from installing a modern browser.


Of course XSLT has the same legacy issues as do many parts of HTML5. I 
know the reasoning but at some point the web will have to leave 
unmaintained software behind or face the same grim reality OpenGL is 
facing now (can't move forward because a minority want legacy support 
for 10 year old CAD applications, can't go back because competing 
protocols are ahead on features).


I'd like to see the option made available and its use determined by the 
market as we have always done. If a developer wants to piss-off users by 
writing a Flash or Silverlight-only website then the ONLY thing we can 
do is provide an equivalent feature and hope that it does less harm (by 
virtue of being a truly open standard). The average web developer's 
mentally is very different from the majority of this list, they won't 
compromise to do the "right thing". If I can do client-side includes in 
Flash and Silverlight (and I can) then why should HTML be left behind?


Anyway, I don't mean for an open discussion on this as I'm sure it's 
been debated endlessly. I'm just stating my case for going ahead with 
this feature.


Shannon


Re: [whatwg] Client-side includes proposal

2008-08-18 Thread Shannon
asy solution for this, though it is less flexible 
than my original proposal. The solution is to require each include to be 
balanced (equal number of open and close tags) so the surrounding block 
is guaranteed to be a single node. Anything left open is forcefully 
closed (as when reaching </html> with open blocks). In other words:


<div>
   <include href="content.html">
</div>




What do you do when the CSIed page includes script that manipulates 
content after the include? Now you have a race condition. This is just as 
bad as blocking, if not worse, since it's now unpredictable.
  


You do the same thing you always have when external JS or inter-page 
requests raise the same issue. Defer JS until the DOM is in a sane state 
(onload). In my experience trying to access an object below your script 
in the source is a terrible idea and nearly always results in null 
object references that crash your page scripts.


Anyway in conclusion I don't understand what CSIs give us that is actually 
worth the massive amounts of effort they require. Just generate your pages 
server-side or upload them to your server pre-generated.


  
As a developer I tell you this is not really a good option, and I 
disagree with your claim of "massive effort". It is a fairly 
straightforward feature as they go. Embedded SQL is a massive effort, 
WebWorkers is a massive effort; client-side includes are quite trivial, 
relatively speaking. Certainly worth further investigation in light of 
its obvious benefits.


Shannon


Re: [whatwg] Client-side includes proposal

2008-08-19 Thread Shannon
ll the end then I really don't care; and if the 
content rendering is somehow blocked by the wait for banner code then 
I'd consider reordering my includes or cleaning up cross-include scripts 
until it doesn't.


You just don't have the statistics and research to define blocking 
delays in real user experience terms. To be honest your justification 
while technically accurate seems fudged with respect to actual impact. I 
don't believe the problem is anywhere near as serious as you make it out 
to be. In a perfect world maybe every web page would be a single 
request, with everything loaded and rendered in the order we need it; 
but then that's basically Flash isn't it?


At the very least the choice to use CSI and the impact of its 
side-effects is something a designer has a lot of control over. A lot 
more control than SSI, proprietary templates and source preprocessors 
with a lot less complexity (for the designer). All of these alternatives 
have their own issues - often they are showstoppers. CSI solves a 
specific class of problem that is so common practically every 
programming language and template system provides it in one way or 
another. HTML, even with its focus on hyperlinking content from 
disparate sources, is the odd one out.


If I have to pursue this through XSLT then I will but this just feels 
like a HTML shortcoming to me, since as you correctly point it, there 
are script and style considerations involved that may be specific to 
HTML as a rendering protocol. I've offered many good arguments for this 
proposal. The rest depends on those arguments being weighed across your 
claims of alternatives and implementation problems. You should have a 
pretty good idea how I view those arguments by now.


Shannon


Re: [whatwg] number-related feedback

2008-08-21 Thread Shannon
I was going to suggest the spec define unsigned integers as a value
between 1 and MAX_INT inclusive when I realised the spec doesn't define
the range of integer values at all. Is this deliberate?

Either way I would recommend making a decision on minimum and maximum
integer values and using them consistently. If not I can imagine the
rapid adoption of 64-bit systems will cause unexpected errors when the
same code is run on older 32-bit systems. There are valid arguments for
letting each system use its native integer but if this is the case then
perhaps the spec should require MIN_INT and MAX_INT be made available as
constants.
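
For example (a sketch, using the constant names suggested above):

if (n < MIN_INT || n > MAX_INT) {
   // value cannot be represented as a native integer on this platform
}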

Also the spec interchanges the terms "non-negative integer", "positive
integer" and "unsigned integer". I suggest defining one of these clearly
and then using it everywhere.

This is a very minor point but is it necessary to say "valid integer"?
Given that there appears to be no defined min/max range when is
something both an integer and at the same time invalid? Isn't an invalid
integer a string?

Finally I wasn't aware Javascript made a distinction between signed and
unsigned integers. Is the distinction really necessary? Can we just make
everything signed integers and consistently call the full range
"integer" and the positive range "integer greater than 0"?

Shannon



Re: [whatwg] Ghosts from the past and the semantic Web

2008-08-27 Thread Shannon

Eduard Pascual wrote:

I would like to encourage this community to learn from what it has
already been done in the past, check what worked, and see why it
worked; then apply it to the problem at hand. If for presentation CSS
worked (and I really think it did; if somebody disagrees I invite you
to share your opinion), then let's see what made it work:
First of all, and essentially, CSS was independent to HTML, although
they were to be used together. I hope it is already clear by now that
we need to deal with semantics from outside of HTML. RDF is an example
of a mechanism that is independent to HTML.
Next, CSS had a simple syntax, despite the size of its vocabulary:
once you understand the "selector { property: value; }", you
understand most of CSS syntax. The RDF's XML format is quite verbose
and is not a good example of a simple syntax. But RDFa comes to the
rescue, providing an approach to simplify the syntax.
Last, but not least, CSS was usable with HTML because there were
hooks between the two: the selector's semantics are based in HTML's
structure (and, by extension, any other markup language). CSS was,
indeed, intended to represent the presentation of markup documents.
RDFa provides some hook; but there is a gotcha: RDFa is not intended
to represent the semantics of a web document; but to embed those
semantics within the document. RDF just represents (semantic)
relationships between concepts; and RDFa puts that representation
inside the document.

  

...

In summary, I think RDFa might work, and it wouldn't be a too bad
solution, but I don't think it is the best approach either.

  


I think you were on to something with the CSS-like approach. Ian has 
stated earlier that class should be considered a generic categorisation 
element rather than only a CSS hook. If so then this would also let us 
hook metadata to classes. ie:



<metadata>
.author {
   species: human;
   produces: books;
   consumes: coffee;
}
.author .john_smith {
   name: John Smith;
   dob: 2000-01-01;
}
</metadata>

<style>
.author {
   color: purple;
}
</style>

<h1>Authors</h1>
<ul class="author">
   <li class="john_smith">John Smith</li>
   <li class="jane_simmons">Jane Simmons</li>
</ul>


There is no reason why a range of XML metadata formats like text/rdf 
couldn't be supported provided they are not used inline (like the 
example above) but imported. ie:

<link rel="metadata" href="metadata.rdf" type="text/rdf">
Since this approach requires only one new tag <metadata> and the 
metadata is separate from the structural elements this should resolve 
some concerns. In addition, this proposal does not limit HTML to 
one metadata language (though a default could be decided) so there is 
more flexibility in the future to support currently unknown formats.


From the designer's point of view there is less typing, since a single 
class attribute can hook both style and meaning to the same structure 
and the meanings can be reused. Using a public namespace would be a 
simple matter of:
<link rel="metadata" href="http://www.standards.org/metadata/media/movies.rdf" type="text/rdf">


If RDF or RDFa are considered too heavy to be a default language (and 
they suffer from being impossible to embed inline or in <metadata> 
blocks) then the "cascading metadata" approach above might be useful. 
Since it can reuse existing CSS parsers, editors and behaviour 
(selectors, cascading model) it should have a lower implementation 
burden than XML+Namespaces.


Shannon


Re: [whatwg] Ghosts from the past and the semantic Web

2008-08-27 Thread Shannon

Ben Adida wrote:

Shannon wrote:
  

I think you were on to something with the CSS-like approach. Ian has
stated earlier that class should be considered a generic categorisation
element rather than only a CSS hook.



Three things:

1) specifying the semantics only in a separate file rules out a very
important use case: the ability to simply paste a chunk of HTML into
your site and have it carry with it all metadata. Think MySpace, Google
widgets, Creative Commons, This is crucial to the design of
HTML-based metadata.
  


Who said anything about it only being in a separate file? My original 
example was local to a snippet in the same way as 

Re: [whatwg] Ghosts from the past and the semantic Web

2008-08-29 Thread Shannon

Ben Adida wrote:

Shannon wrote:
  

<link rel="metadata" href="http://some.official.vocabulary/1.1/metadata.cm">



Not workable, as this in the HEAD of the document and oftentimes we
simply can't expect users to be able to modify the head of the document
(widgets, blog engines where you can only modify the content of a blog
entry, etc...)
  


I thought I made this clear. There are at least 4 methods of applying 
CSS. Each has a purpose. Use the one that works for your situation. Two 
of these methods do not require a separate document or code in <head>. 
Discussing the shortfalls of the others for your particular use is 
pointless.




I don't think what you wrote above is widely used or understood. In
fact, I think it's not used at all, whereas RDFa is actually being used
today.
  
I'm referring of course to the behaviour and syntax, not the elements or 
properties, assuming that behaviour were basically the same as CSS.



Also, one big hole: how do you make a statement about another item? How
do you describe multiple items on a page? How do you relate two items on
a page? Say, the Craigslist example, with multiple listings?
  


The same way RDFa does except it's done within a metadata block or 
attribute.



eRDF tried to squeeze everything into @class, and it isn't able to be as
flexible as RDFa (and thus as we need) in this respect. It has a lot of
trouble expressing data about multiple items.
  
eRDF is a hack that abuses the purpose of @class. Class was never meant 
to carry meaning beyond group membership.



What's surprising to me is this attempt to shoe-horn so much unexpected
stuff into @class. What is so sacred about HTML4 that *this* issue can't
be helped by a bit of rethinking? Certainly, everything else seems to be
up for rethinking in HTML5.

  
Class is not being shoehorned in this proposal. Nothing is being put 
into class except class names.


I'm not sure I'd call RDFa "rethinking". It seems a lot like all the 
other attributes stuffed into HTML over the years. When the rethinking 
on HTML attributes came it was realised that many of those attributes 
were underused, inflexible or just better off somewhere else. CSS was 
the rethinking of @width, @background, @color etc and a host of 
non-style attributes have been removed from HTML5. You're already asking 
for at least 4 new attributes and while you've made a case for why they 
could be desirable inline you haven't made a case for why they *have* to 
be inline or why they *have* to be attributes. This proposal provides 
the author the choice of inline, inline blocks, or external files. Each 
is useful for differing circumstances.


Your proposal requests 4 or 5 new attributes. This proposal should 
support any number of current or future RDF metadata attributes without 
changing HTML. Future metadata formats and extensions can be developed 
in a metadata working group without being dependent on HTML support.


Your proposal depends on RDF. This proposal is mostly format neutral. 
New metadata types can be supported in the future using 
type="metadata/type" on link and metadata elements.


I think you see the problem to be solved as "RDF-in-HTML". I would 
prefer the problem defined as "Metadata-in-HTML".

By the way, we considered using @class instead of @typeof, but
we met with serious opposition from folks who didn't want us to mess
with the existing uses of @class.

  
I haven't seen these discussions but were these objections because the 
opponents did not understand the concept of multiple classes; or were 
they because you were trying to use URIs or namespace separators 
directly in class names eg, class="dc-title"? What precisely were the 
objections based on? Was the discussion about a seperate metadata 
definition like we are talking about here or was the proposal actually 
trying to cram all your RDF namespace, about and typeof data directly 
into the class attribute (like eRDF)?


From what you've said here and elsewhere it really sounds like you were 
discussing adding meaning to the value of @class. This is not 
what this proposal is about and it isn't what class is for either. For 
example this is considered bad:




this is better:



Class is not meant to express an outcome, nor any meaning beyond that of 
an identifier. However as an identifier you can use it to associate more 
complex declarations of style, script and metadata. Unless I'm mistaken 
about your intent it seems these folks may have objected to making 
universal class names that have meaning in their own right (reserved 
names, class namespaces)?



Except that means you have to coordinate multiple documents, and that
really doesn't follow the goal of having HTML be the carrier of all the
information. That's an important requirement for Creative Commons and
others.

  
You would not have to use multiple docum

Re: [whatwg] Generic Metadata Mechanisms (RDFa feedback summary wiki page)

2008-09-10 Thread Shannon
I would like to restore the pros and cons. Although they are not as 
concise as you would like there was still a considerable amount of time 
put into them and they do reflect the arguments put forward on both 
sides of the RDF discussions. You are asking for more detail and then 
removing the details that existed.


I understand your desire for more detail but please understand that in 
many cases the why is really self-explanatory. We are web professionals 
and academics, not children. In some, probably most, cases the Pros and 
Cons answer the why question in their own right.


I don't mind fleshing out the details and arguments behind more complex 
subjects but if you're going to block-delete contributions I don't see 
why I should bother. What really upset me is that you yourself set the 
precedent for simple Pro/Con bullet points under each "requirement" with 
your initial template. You can call it placeholder text if you like but 
I certainly got the impression that you were looking for concise points, 
not essays.


I don't want to undo, since you've added other clarifications since but 
I would like reassurance that if I copy-paste the pros and cons back in 
they'll stay this time. If there are any particular points you object 
too then let me know.



Ian Hickson wrote:


2.8 Choice of format

This section doesn't describe a requirement.

  
Are you sure? The RDFa folk have been insisting it's RDFa or nothing for 
some time now. On the other hand it has drawbacks which another format 
may solve. This is similar to scripts coming in different formats 
(Javascript, VBScript) and styles (CSS, XSLT). It isn't a "requirement" 
per-se, more like a desirable outcome. Perhaps we need a new section for 
non-essential needs but then that's just begging for an edit war as 
proponents of different solutions promote or demote other peoples 
requirements. Perhaps you should be more precise about what makes 
something "required" because by strict definition the only actual 
requirements for "generic metadata" in HTML5 should be "it conveys 
metadata" and "it works in HTML5".


Shannon


Re: [whatwg] Generic Metadata Mechanisms (RDFa feedback summary wiki page)

2008-09-10 Thread Shannon

Ian Hickson wrote:

On Thu, 11 Sep 2008, Shannon wrote:
  

I would like to restore the pros and cons.



I just merged the non-obvious ones into the text and removed the obvious 
ones.

Merging pros and cons into the opening paragraph is a poor design 
choice. It makes it more difficult for contributors to flesh out each 
point without breaking paragraph consistency. The leading text should 
simply be a definition of the requirement (preferably free of bias) and 
the problems it attempts to solve. The pros and cons then debate the 
"why" (ie: the  Pros) and the drawbacks and feasibility of it (the 
cons). Mixing the two promotes bias in the description.


 (Saying "Con: Proposal may be more complex" isn't helpful.) I don't 
think I removed any non-trivial ones, which ones did you have in mind? My 
apologies if I did remove anything non-trivial.


  
Since complexity is often used in this group as an argument against new 
proposals it is entirely relevant to list it as an argument against a 
requirement. You can't just assume the argument is implied since not all 
requirements are likely to complicate an implementation.


Furthermore you've already stated your lack of time to follow the 
discussion to date so you are the last person to decide what constitutes a 
trivial or important claim. If I thought something was irrelevant then I 
would not have put it in. Your edit boils down to an opinion on your 
part that borders on insulting (ie, prior contributors had nothing of 
value to say and that everything said was obvious). Even a glance at the 
original page 
<http://wiki.whatwg.org/index.php?title=Generic_Metadata_Mechanisms&oldid=3267> 
reveals this is far from true. I think the burden is actually on you to 
explain exactly which points you find "trivial".
  

Ian Hickson wrote:


2.8 Choice of format

This section doesn't describe a requirement.
  

Are you sure?



The section said "Choice of format: There are already several metadata 
formats. In the future there may be more", and that's not a requirement. A 
requirement is something that a proposal can be evaluated against. This 
isn't something that can be evaluated against, it's just an axis.


It's like "choice of height" as opposed to "must be at least 6ft tall" 
when discussing requirements for a shed.


  
So improve the summary, don't remove the section. Providing a choice of 
format is a technical decision with pros and cons. Your analogy is garbage.
  
Perhaps you should be more precise about what makes something "required" 
because by strict definition the only actual requirements for "generic 
metadata" in HTML5 should be "it conveys metadata" and "it works in 
HTML5".



HTML5 already has something that satisfies those requirements (the class 
attribute) so clearly (assuming HTML5 as written today isn't enough) there 
are more requirements than that, at least from the RDFa community.


  


You didn't answer the question. Assuming that there are requirements, 
what makes something a "requirement"? By your own logic everything in 
the requirements section is actually "proposed features". Change the 
section title then.


Please, this discussion isn't helpful. Just put the pros and cons back, 
remove any you think are both useless *and* incapable of being expanded 
upon. Where detail is lacking just say so but leave the argument in 
place as a placeholder to do so. The entire intent to my contributions 
was not to write a thesis / research paper on the issues but to present 
the arguments put forth so far on the list (or otherwise likely to be 
relevant) so that each can be considered and fleshed out. I included 
pros and cons presented from all parties who have contributed so far. I 
agree more detail is required but mass deleting the existing content is 
not the way forward.



Shannon


Re: [whatwg] WebSocket support in HTML5

2008-09-21 Thread Shannon



Richard's Hotmail wrote:
 
My particular beef is with the intended WebSocket support, and 
specifically the restrictive nature of its implementation. I 
respectfully, yet forcefully, suggest that the intended implementation 
is complete crap and you'd do better to look at existing Socket 
support from SUN Java, Adobe Flex, and Microsoft Silverlight before 
engraving anything into stone! What we need (and is a really great 
idea) is native HTML/JavaScript support for Sockets - What we don't 
need is someone re-inventing sockets 'cos they think they can do it 
better.
 
Anyway I find it difficult to not be inflammatory so I'll stop now, 
but please look to the substance of my complaint (and the original 
post in comp.lang.JavaScript attached below) and at least question why 
it is that you are putting all these protocol restriction on binary 
socket support.
It's hard to determine the substance of your complaint. It appears you 
don't really understand the Java, Flex or Silverlight implementations. 
They are all quite restrictive, just in different ways:


* Java raises a security exception unless the user authorises the socket 
using an ugly and confusing popup security dialog
* Flex and Silverlight requires the remote server or device also run a 
webserver (to serve crossdomain.xml). Flex supports connections ONLY to 
port numbers higher than 1024. The crossdomain files for each platform 
have different filenames and appear to already be partly incompatible 
between the two companies, hardly a "standard".


Both Silverlight and Flash/Flex are fundamentally flawed since they run 
on the assumption that a file hosted on port 80 is an authoritative 
security policy for a whole server. As someone who works in an ISP I 
assure you this is an incorrect assumption. Many ISPs run additional 
services on their webserver, such as databases and email, to save rack 
hosting costs or for simplicity or security reasons. I would not want 
one of our virtual hosting customers authorising web visitors access to 
those services. It is also fundamentally flawed to assume services on 
ports greater than 1024 are automatically "safe".


These companies chose convenience over security, which quite frankly is 
why their software is so frequently exploited. However that's between 
them and their customers, this group deals with standards that must be 
acceptable to the web community at large.


The current approach the HTML spec is taking is that policy files 
are essentially untrustworthy so the service itself must arbitrate 
access with a handshake. Most of the details of this handshake are 
hidden from the Javascript author so your concerns about complexity seem 
unjustified. If you are worried about the complexity of implementing the 
server end of the service I can't see why, it's about 3-6 lines of 
output and some reasonably straight-forward text parsing. It could 
easily be done with a wrapper for existing services.


Other than that it behaves as an asynchronous binary TCP socket. What 
exactly are you concerned about?


Shannon



Re: [whatwg] WebSocket websocket-origin

2008-09-29 Thread Shannon

Anne van Kesteren wrote:
What is the reason for doing literal comparison on the 
websocket-origin and websocket-location HTTP headers? Access Control 
for Cross-Site Requests is currently following this design for 
access-control-allow-origin but sicking is complaining about it, so maybe 
it should be URL-without-port comparison instead. (E.g., then 
http://example.org and http://example.org:80 would be equivalent.)



I think the temptation to standardise features like access control 
defeats the point of websockets. Since things like access control and 
sessions can be readily implemented via CGI interfaces it seems implied 
that the whole point of websockets is to provide "lightweight" services. 
If the service actually needs something like this then the author can 
perform the check post-handshake using any method they feel like. I 
don't really feel strongly one way or the other about this particular 
header but I'm concerned about the slippery-slope of complicating the 
HTTP handshake to the point where you might as well be using CGI. Maybe 
the standard should simply recommend sending the header but make no 
requirement about how it is parsed. That way the service itself can 
decide whether the check is even necessary and if so whether it should 
be strict or loose or regex-based without the client automatically 
hanging up the connection.


Shannon


[whatwg] Simplified WebSockets

2008-09-30 Thread Shannon
It occurred to me the other day when musing on WebSockets that the 
handshake is more complicated than required to achieve its purpose and 
still allows potential exploits. I'm going to assume for now the purpose 
of the handshake is to:


* Prevent unsafe communication with a non-websocket service.
* Provide just enough HTTP compatibility to allow proxying and virtual 
hosting.


I think the case has been successfully put that DDOS or command 
injection are possible using IMG tags or existing javascript methods - 
however the counter-argument has been made that the presence of legacy 
issues is not an argument for creating new ones. So with that in mind we 
should implement WebSockets as robustly as we can.


Since we don't at first know what the service is we really need to 
assume that:


* Long strings or certain characters may crash the service.
* The service may not be line orientated.
* The service may use binary data for communications, rather than text.
* Characters outside the ASCII printable range may have special meaning 
(ie, 'bell' or control characters).
* No string is safe, since the service may use string commands and 
non-whitespace separators.


For the sake of argument we'll assume the existence of a service that 
accepts commands as follows (we'll also assume the service ignores bad 
commands and continues processing):


AUTHENTICATE(user,password);GRANT(user,ALL);DELETE(/some/record);LOGOUT;

To feed this command set to the service via WebSockets we use:

var ws = new 
WebSocket("http://server:1024/?;AUTHENTICATE(user,password);GRANT(user,ALL);DELETE(/some/record);LOGOUT;")


I have already verified that none of these characters require escaping 
in URLs. The current proposal is fairly strict about allowed URIs but in 
my opinion it is not strict enough. We really need to verify we are 
talking to a WebSocket service before we start sending anything under 
the control of a malicious author.


Now given the huge variety of non-HTTP sub-systems we'll be talking to I 
don't think a full URL or path is actually a useful part of the 
handshake. What does path mean to a mail server for instance?



Here is my proposal:

C = client
S = service

# First we talk to our proxy, if configured. We know we're talking to a 
proxy because it's set on the client.


C> CONNECT server.example.com:1024 HTTP/1.1
C> Host: server.example.com:1024
C> Proxy-Connection: Keep-Alive
C> Upgrade: WebSocket/1.0

# Without a proxy we send

C> HEAD server.example.com:1024 HTTP/1.1
C> Host: server.example.com:1024
C> Connection: Keep-Alive
C> Upgrade: WebSocket/1.0

# If all goes well the service will respond with:

S> HTTP/1.1 200 OK
S> Upgrade: WebSocket/1.0
or
S> Some other HTTP response (but no Upgrade header)
or
S> Other non-HTTP response
or
No response.

# If we get a 200 response with Upgrade: WebSocket we *know* we have a 
WebSocket. Any other response and the client can throw a 'Connection 
failed' or 'Timeout' exception.


The client and server can now exchange any authentication tokens, access 
conditions, cookies, etc according to service requirements. eg:


ws.Send( 'referrer=' + window.location + '\r\n' );
ws.Send( 'channel=' + 'customers' + '\r\n' );
ws.Send( CookiesToServerSyntax() );

The key advantages of this method are:

* Simplicity (less handshaking, less parsing, fewer requirements)
* Security (No page author control over initial handshake beyond the 
server name or IP. Removes the risk of command injection via URI.)
* Compatibility (HTTP compatible. Proxy and Virtual Hosting compatible. 
Allows a CGI script to emulate a WebSocket)


I'm not saying the current proposal doesn't provide some of these 
things, just that I believe this proposal does it better.


Shannon



Re: [whatwg] Simplified WebSockets

2008-10-12 Thread Shannon
I have written an implementation of a websocket client and server for 
testing my proposed protocol. Testing in the real world has provided me 
some good information on what works and what doesn't, particularly in 
regards to relaying through public anonymous proxies. Those wishing to 
experiment with variations of the protocol or with particular services 
may find these scripts useful.


http://www.warriorhut.org/whatwg/

The scripts connect to each other with a lightweight HTTP handshake then 
asynchronously send a user-defined amount of data. The purpose is to see 
how common HTTP proxies handle asynchronous connections (with client and 
server sending simultaneously). These scripts are not an implementation 
of the current draft spec, but an alternative proposal I raised earlier 
due to what I see as major design flaws in the draft spec.


It should be noted that the protocol outlined here does not use the onmessage 
interface proposed in the WHATWG draft spec, but rather a more standard 
read() and write() as implemented by most other languages (i.e., 
websocket.read(512) returns up to 512 bytes of buffered data from the 
socket). This will make porting traditional client code from other 
languages much easier.
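
As a usage sketch of that interface (the constructor form follows the 
earlier example in this thread; read()/write() are the proposed methods, 
and process() is just a placeholder):

var ws = new WebSocket('http://server.example.com:1024/');
ws.write('channel=customers\r\n');   // raw bytes out, no framing
var reply = ws.read(512);            // up to 512 bytes of buffered data
if (reply.length > 0) {
  process(reply);                    // application-defined handler
}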


The underlying design principles behind this proposal are:

* Don't send author defined data (except for host) to any service that 
has not yet identified itself as a websocket.
* Do not frame, encode or restrict any data sent after the websocket 
upgrade. It should be possible at this point for any type of server to 
take over the connection transparently.
* Do not require any headers not absolutely essential to creating a 
connection. Let the client and server handle cookies, origin or 
authentication as the author chooses.
* Do not hardcode port numbers; this is not really as secure or useful 
as the spec authors seem to believe (port 81 is quite commonly used as a 
webmail port, for example).


I have not addressed TLS, as this is difficult to program. In theory, 
though, it should only require Upgrade: TLS in place of, or prior to, the 
Upgrade: WebSocket header.
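
In header terms that would presumably look something like this (untested, 
per the caveat above):

C> HEAD / HTTP/1.1
C> Host: server.example.com:1024
C> Connection: Keep-Alive
C> Upgrade: TLS/1.0, WebSocket/1.0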


Shannon


[whatwg] WebSocket and proxies

2008-10-13 Thread Shannon
In the process of testing my WebSocket proposal I discovered the CONNECT 
method has a major restriction. Most proxies disable CONNECT to anything 
but port 443.


The following is from "Squid and the Blowfish":
--
It is very important that you stop CONNECT type requests to non-SSL 
ports. The CONNECT method allows data transfer in any direction at any 
time, regardless of the transport protocol used. As a consequence, a 
malicious user could telnet(1) to a (very) badly configured proxy, enter 
something like:

... snip example ...
and end up connected to the remote server, as if the connection was 
originated by the proxy.

---

I verified that Squid and all public proxies I tried disable CONNECT by 
default to non-SSL ports. It's unlikely many internet hosts will have 
443 available for WebSockets if they also run a webserver. It could be 
done with virtual IPs or dedicated hosts but this imposes complex 
requirements and costs over alternatives like CGI.


The availability and capabilities of the OPTIONS and GET methods also 
varied from proxy to proxy. The IETF draft related to TLS 
(http://tools.ietf.org/html/draft-ietf-tls-http-upgrade-05) has this to say:


---
3.2 Mandatory Upgrade

  If an unsecured response would be unacceptable, a client MUST send
  an OPTIONS request first to complete the switch to TLS/1.0 (if
  possible).

 OPTIONS * HTTP/1.1
 Host: example.bank.com
 Upgrade: TLS/1.0
 Connection: Upgrade
---

So according to this draft spec, OPTIONS is the only way to do a 
*mandatory* upgrade of our connection. Once again, this failed in testing:


---
=> OPTIONS * HTTP/1.1
=> Proxy-Connection: keep-alive
=> Connection: Upgrade
=> Upgrade: WebSocket/1.0
=> Host: warriorhut.org:8000
=>
<= HTTP/1.0 400 Bad Request
<= Server: squid/3.0.STABLE8


Other proxies gave different errors or simply returned nothing. The 
problem may be related to the Upgrade and Connection headers rather than 
OPTIONS, since I had similar issues using Connection: Upgrade with GET.


I had the most success using GET without a Connection: Upgrade header. 
It seems that the proxy thinks the header is directed at it so it does 
not pass it on to the remote host. In many cases it will abort the 
connection. Using the Upgrade: header without Connection allows the 
Upgrade header through to the actual websocket service.
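
In other words, the variant that tended to get through looks like this 
(illustrative, not a verbatim capture):

=> GET / HTTP/1.1
=> Host: warriorhut.org:8000
=> Upgrade: WebSocket/1.0

(with no Connection: Upgrade header, the proxy relays the request instead 
of consuming it)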


It seems to me that whatever we try, in many cases the connection will be 
silently dropped by the proxy, and the reasons will be unclear due to the 
lack of error handling. There seems to be a wide variation in proxy 
behaviour for uncommon operations. I suppose proxy developers could fix 
these issues, but whether a significant rollout could be achieved before 
HTML5 is released is questionable.


Given that an asynchronous connection cannot be cached, the only reasons 
remaining for going through a proxy are anonymity and firewall 
traversal. Automatically bypassing the user's proxy configuration to 
solve the issues above has the potential to break both of these. It 
would be a significant breach of trust for a UA to bypass the user's 
proxy, and some networks only allow connections via a proxy (for security 
and monitoring).


It seems that we're stuck between a rock and hard place here. In light 
of this I reiterate my earlier suggestion that the time could be better 
spent providing guidelines for communication via an asynchronous CGI 
interface. This would allow reuse of existing port 80 and 443 web 
services which would resolve the cross-domain issues (the CGI can relay 
the actual service via a backend connection) and most of the proxy 
issues above (since proxy GET and CONNECT are more reliable on these ports).


Shannon


Re: [whatwg] Workers feedback

2008-11-13 Thread Shannon
I don't see any value in the "user-agent specified amount of time" delay 
in stopping scripts. How can you write cleanup code when you have no 
consistency in how long it gets to run (or if it runs at all)? If you 
can't rely on a cleanup then it becomes necessary to have some kind of 
repair/validation sequence run on the data next time it is accessed to 
check if it's valid. If you can do that then you didn't really need a 
cleanup anyway. As far as I can tell the "user-agent specified amount of 
time" is going to be a major source of hard-to-spot, hard-to-test bugs 
(since full testing probably involves closing and killing browsing 
contexts in different ways followed by a login sequence and several page 
navigations to get back to the page). I can see authors perhaps performing 
these tests in IE, but not across a range of browsers and computer 
specifications.
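
To make that concrete, the defensive pattern authors end up needing looks 
roughly like this (a sketch using localStorage; the key names are 
invented):

// mark the data dirty before a multi-step update; clear the mark on success
function saveRecords(records) {
  localStorage.setItem('records.dirty', '1');
  localStorage.setItem('records', JSON.stringify(records));
  localStorage.removeItem('records.dirty');
}

// on next access, repair if a previous context was killed mid-update
function loadRecords() {
  if (localStorage.getItem('records.dirty')) {
    localStorage.removeItem('records.dirty');
    return [];                          // or re-fetch a known-good copy
  }
  return JSON.parse(localStorage.getItem('records') || '[]');
}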


The spec really needs to make a decision here. Either consistently 
provide no cleanup window or make it a requirement to provide a fixed 
number of seconds, which is still unreliable but at least within a 
smaller margin. Failure to do so will impact heavily on users of less 
popular browsers.



The specification for message ports is still limited to strings. If no 
effort is going to be made to allow numbers, arrays, structs and binary 
data then I'd suggest Workers be given functions to 
serialise/deserialise these objects. Since the whole point of workers is 
presumably the processing of large datasets then a reliable and 
low-overhead means of passing these sets between workers and main 
threads (without resorting to SQL, XMLHttpRequest or other indirection) 
is an essential function.
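
Absent richer types, the workaround is manual serialisation on both sides, 
roughly like this (a sketch; assumes a JSON implementation is available to 
both sides, and worker.js is an invented file name):

// main thread: flatten a structured dataset to a string for the port
var worker = new Worker('worker.js');
var dataset = { label: 'sensor A', rows: [[1, 2], [3, 4]] };
worker.postMessage(JSON.stringify(dataset));

// inside worker.js: rebuild the object on receipt
onmessage = function (event) {
  var data = JSON.parse(event.data);
  // ... process data.rows ...
};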



WorkerUtils does not implement document.cookie. I imagine this would be 
very useful in conjunction with cleanup code to flag if a cleanup 
operation failed to complete. Storage and Database interfaces are too 
heavy for the purpose of simple data like this.



Shannon


Re: [whatwg] Workers feedback

2008-11-16 Thread Shannon
Ian Hickson wrote:
> On Fri, 14 Nov 2008, Shannon wrote:
>   
>> I don't see any value in the "user-agent specified amount of time" delay 
>> in stopping scripts. How can you write cleanup code when you have no 
>> consistency in how long it gets to run (or if it runs at all)?
>> 
>
> The "user-agent specified amount of time" delay is implemented by every 
> major browser for every script they run today.
>
> How can people write clean code when they have no consistency in how long 
> their scripts will run (or if they will run at all)?
>
> Why is this any different?
>
>   
Why does that matter? I think you're asking the wrong question. As
designers of a new spec the question should be "how can we fix this?".
If the answer is to include a mandatory cleanup window for ALL scripts
then that should be considered (even if that window is 0 seconds).

>> If you can't rely on a cleanup then it becomes necessary to have some 
>> kind of repair/validation sequence run on the data next time it is 
>> accessed to check if it's valid.
>> 
>
> You need to do that anyway to handle powerouts and crashes.
>   
That was the point of my concern. Given that the only 100% reliable
cleanup window is 0 seconds it would be more consistent (and honest) to
make that the spec. Offering a cleanup window of uncertain length is
somewhat pointless and bound to cause incompatibilities across UAs. Is
there a strong argument against making 0 seconds mandatory, given that
anything else is inconsistent across UA, architecture and circumstance?

> It's not clear which document the cookie would be for. localStorage is 
> as light-weight as cookies (lighter-weight, arguably), though, so that 
> should be enough, no?
>
>   
Fair enough.


Shannon


Re: [whatwg] [Input type=submit] "Disable" after submitting

2010-01-26 Thread Adam Shannon
I'm sure you know this, but there is always this simple way.
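
(Presumably something along these lines; this is a reconstruction, not the 
original example:)

<form action="/post" method="post"
      onsubmit="if (this.submitted) return false; this.submitted = true;">
  ...
  <input type="submit" value="Post">
</form>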



On Tue, Jan 26, 2010 at 04:20, Randy Drielinger  wrote:
> Hi all,
>
> Bit of a crazy subject, but let me enlighten:
>
> Currently the submit button is used to submit a form once (more than average
> purpose). In some occassions it's possible to submit a form twice, and get
> it processed twice by the webserver, resulting in a double submit (e.g.
> forum post). For this specific example, it isn't desired bahavior.
>
> Should there be a way to prevent a (submit) button in HTML to be clicked
> more than once or is this clearly a behavior (and therefore be solved with
> JS)?
>
>
> Regards



-- 
Adam Shannon
 Web Developer
 http://ashannon.us


Re: [whatwg] Subtitles, captions, and other tracks augmenting video or audio

2010-04-17 Thread Adam Shannon
If I understand correctly you're just looking for real life cases of
text overlays on video/audio?

If so then:
 - Localized text alongside a tutorial video.
 - Song lyrics for audio/music videos
 - Localized information boxes for live streaming media elements
- During live stream (news channels, podcasts, sport events) the
(interviewee, host, player)'s information could be localized in real
time (the page can receive updated information which is customized to
the user).

Features:
 - Large range of acceptable characters (multiple language support, symbols, ...)
 - Simple tags like 
 - Ability to add more audio/video files at times
 - Caption/Text would appear as an element (div/span/p) that can be
assigned an id or class, so that the same CSS/js properties could be
applied (see the sketch after this list).
 - If captions are elements then children elements could be added
for further customization.
 - Optionally image/svg/canvas could be allowed as content if the
overlays are html elements; this allows for further customization of
overlays.
 - Allowing a site owner to apply user-defined themes (for normal
page layout) on captions.
 - Change how captions are displayed based on steps/progress in the media.
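
A rough sketch of the "captions as elements" idea (the markup and cue data 
are invented for illustration; timeupdate and currentTime are existing 
API):

<video id="vid" src="tutorial.ogv" controls></video>
<div id="caption" class="caption"></div>

<script>
// invented cue list; a real page might receive these from the server
var cues = [ { start: 0, end: 4, text: 'Welcome' },
             { start: 4, end: 9, text: 'Step one: open the menu' } ];
var video = document.getElementById('vid');
video.addEventListener('timeupdate', function () {
  var t = video.currentTime, text = '';
  for (var i = 0; i < cues.length; i++) {
    if (t >= cues[i].start && t < cues[i].end) text = cues[i].text;
  }
  // an ordinary element, so page CSS/js applies to it like anything else
  document.getElementById('caption').textContent = text;
}, false);
</script>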

On Fri, Apr 16, 2010 at 16:08, Ian Hickson  wrote:
>
> I'm starting to look at the feedback sent over the past few years for
> augmenting audio and video with additional timed tracks such as subtitles,
> captions, audio descriptions, karaoke, slides, lyrics, ads, etc. One thing
> that would be really helpful is if we could get together a representative
> sample of typical uses of these features, as well as examples of some of
> the more extreme uses.
>
> For example, some fansubbed Anime videos have translation subtitles,
> karaoke in the original language and as a phonetic translation, and joke
> explanations all going on on the same screen. It would be good to have an
> example of this, even though it is so extreme that we might not want to
> support it initially.
>
> If anyone has any examples, please add them here:
>
>   http://wiki.whatwg.org/wiki/Timed_tracks
>
> Links to either videos or stills showing subtitles (e.g. on TVs, DVDs,
> etc) are both good. I'd like to get a representative sample so that we can
> determine what features are critical, and what features can be punted for
> now. This will let us evaluate the proposals relative to real needs.
>
> Cheers,
> --
> Ian Hickson               U+1047E                )\._.,--,'``.    fL
> http://ln.hixie.ch/       U+263A                /,   _.. \   _\  ;`._ ,.
> Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'
>



-- 
Adam Shannon
 Web Developer
 http://ashannon.us


Re: [whatwg] input type=ink proposal

2010-06-08 Thread Adam Shannon
I can't remember any discussion (and searching my email returns no
results), The idea seems interesting and useful.

Have you thought about how it will work on mobile platforms (will the
touch screen act as the "pad"?).

I'm guessing that the browser could save an image of the coordinates
plotted together.

Use Case:
Draw on canvas with the pad, just like how "pads" can act like a mouse.
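
A rough sketch of the capture side, using plain canvas and mouse events 
(all existing API; the trace array format is invented):

<canvas id="pad" width="300" height="100"></canvas>

<script>
var pad = document.getElementById('pad');
var ctx = pad.getContext('2d');
var trace = [], drawing = false;

pad.onmousedown = function () { drawing = true; };
pad.onmouseup   = function () { drawing = false; };
pad.onmousemove = function (e) {
  if (!drawing) return;
  var r = pad.getBoundingClientRect();
  var x = e.clientX - r.left, y = e.clientY - r.top;
  trace.push([x, y]);        // the coordinate list a UA could serialise
  ctx.fillRect(x, y, 2, 2);  // immediate visual feedback for the user
};
</script>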

On Tue, Jun 8, 2010 at 11:26, Charles Pritchard  wrote:
> Has there been prior discussion about an input type="ink" form element?
>
> This element would simply capture coordinates from a mouse (or touch/pen
> input device),
> allowing the user to reset the value of the element at their discretion.
>
> InkML is in last call:
> http://www.w3.org/TR/InkML/
>
> We could use <trace> elements as the data container:
> http://www.w3.org/TR/InkML/#trace
>
> It's a fairly simple element, there's no particular reason why it couldn't
> be widely supported.
>
> Digital signatures, amongst other things, have been common for quite awhile,
> just not
> in the browser.
>
>
> Use Case:
>
> As part of a web form, a user signs their digital signature to confirm
> acceptance of terms.
>
> Use Case:
>
> While filling out an online profile, a user submits a simple doodle as their
> avatar.
>
> Use Case:
>
> To quickly log into an online system, a user scribbles a password,
> which their server tests for fidelity to their prior scribbled input.
>
>
> -Charles
>
>



-- 
Adam Shannon
 Web Developer
 http://ashannon.us


Re: [whatwg] input type="location" proposals

2010-06-18 Thread Adam Shannon
How would the "browser" assist you?  Would you start to type in "new
yor" and it would drop down a list giving you options?

How will browsers get the list (and future lists) which have to be
customized to the input?  Remote listing?  Built-in database?



On Fri, Jun 18, 2010 at 12:20, Eitan Adler  wrote:
> Two separate use cases
> 1) For entry of locations into something like Google Maps or MapQuest.
> In this case the form should look as it does now (a text box) but
> browsers would be able to assist you in entering locations like it can
> for for emails.
> 2) For entry of Lat/Long coordinates which can be entered either
> manually or with some kind of map like interface.
>
> These are two separate proposals and I both could co-exist one as
> type="location" and the other as type="gps"
>
> --
> Eitan Adler
>



-- 
Adam Shannon
 Web Developer
 http://ashannon.us


Re: [whatwg] input type="location" proposals

2010-06-18 Thread Adam Shannon
So, each person will be responsible for updating the address book?

On Fri, Jun 18, 2010 at 21:49, Kit Grose  wrote:
> System address book, perhaps?
>
> Cheers,
>
> Kit Grose
> User Experience + Technical Director
> iQmultimedia
>
> +61 (0)2 4260 7946
>
> On 19/06/2010, at 12:48 PM, "Adam Shannon"  wrote:
>
>> How would the "browser" assist you?  Would you start to type in "new
>> yor" and it would drop down a list giving you options?
>>
>> How will browsers get the list (and future lists) which have to be
>> customized to the input?  Remote listing?  Built-in database?
>>
>>
>>
>> On Fri, Jun 18, 2010 at 12:20, Eitan Adler 
>> wrote:
>>> Two separate use cases
>>> 1) For entry of locations into something like Google Maps or
>>> MapQuest.
>>> In this case the form should look as it does now (a text box) but
>>> browsers would be able to assist you in entering locations like it
>>> can
>>> for for emails.
>>> 2) For entry of Lat/Long coordinates which can be entered either
>>> manually or with some kind of map like interface.
>>>
>>> These are two separate proposals and I both could co-exist one as
>>> type="location" and the other as type="gps"
>>>
>>> --
>>> Eitan Adler
>>>
>>
>>
>>
>> --
>> Adam Shannon
>> Web Developer
>> http://ashannon.us
>



-- 
Adam Shannon
 Web Developer
 http://ashannon.us


Re: [whatwg] Proposal for IsSearchProviderInstalled / AddSearchProvider

2011-05-16 Thread Adam Shannon
On Mon, May 16, 2011 at 18:29, Ian Hickson  wrote:
>> AddSearchProvider(string openSearchUrl, [optional] bool asDefault)
>> retrieves the open search document from openSearchUrl and decides in a
>> UA specific manner whether to prompt the user about the change or
>> addition.
>
> I haven't specified the "asDefault" argument since it isn't implemented
> anywhere except Chrome, but other than that I have specified it. The UI is
> left very open-ended. Note that there is already an equivalent declarative
> feature in HTML: <link rel=search>.
>

(First, I like that asDefault hasn't been specified yet.)

I don't like having the only barrier to changing the default
search engine for a user's browser be a single dialog box. This list
(and others) has repeatedly found that dialogs don't work and users
skip past them.

Think of the non-techy user who simply clicks yes to evil.com's
request to change default search provider. Will they even know what
that means? Will they care at the time of the dialog? How will they
revert back?

I'd rather see UA's implement better controls on their end than see an
API which could be largely abused. (Drag and drop browser controls
over tons of sites asking for permission to be the default.)
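
(For reference, the declarative feature Ian mentions is the OpenSearch
discovery link, roughly:)

<link rel="search" type="application/opensearchdescription+xml"
      href="/opensearch.xml" title="Example Search">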

-- 
Adam Shannon
Web Developer
University of Northern Iowa
Sophomore -- Computer Science B.S.
http://ashannon.us


Re: [whatwg] Proposal for IsSearchProviderInstalled / AddSearchProvider

2011-05-16 Thread Adam Shannon
On Mon, May 16, 2011 at 18:39, Ian Hickson  wrote:
> On Mon, 16 May 2011, Adam Shannon wrote:
>>
>> I don't like having the only barrier between changing the default search
>> engine for a user's browser be a single dialog box. This list (and
>> others) have repeatedly found that dialogs don't work and users skip
>> past them.
>>
>> Think of the non-techy user who simply clicks yes to evil.com's request
>> to change default search provider. Will they even know what that means?
>> Will they care at the time of the dialog? How will they revert back?
>>
>> I'd rather see UA's implement better controls on their end than see an
>> API which could be largely abused. (Drag and drop browser controls over
>> tons of sites asking for permission to be the default.)
>
> I agree. Note that the spec doesn't say there should be a dialog box at
> all; it's left entirely up to the UAs.
>
> --
> Ian Hickson               U+1047E                )\._.,--,'``.    fL
> http://ln.hixie.ch/       U+263A                /,   _.. \   _\  ;`._ ,.
> Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'
>

Perhaps it would be better for the group to send a proposal for a UI
(or at least guidelines) that's acceptable both from a realistic
usability and security standpoint?

-- 
Adam Shannon
Web Developer
University of Northern Iowa
Sophomore -- Computer Science B.S.
http://ashannon.us


[whatwg] Persistent storage is critically flawed.

2006-08-27 Thread Shannon Baker
en untrusting 
parties.



== 4: Messy API requiring callbacks to handle concurrency. ==
The author uses a complicated method of handling concurrency by using 
callbacks triggered by setItem() to interrupt processing in other open 
pages (i.e., other tabs or frames) which could access the same data. Why 
can I not simply lock the item during updates or long reads and force 
other scripts to wait? While I'm unsure whether ECMAScript can handle 
proper database-style transactions, it seems like it would be fairly easy 
for the developer to implement critical sections by using shared storage 
objects or metadata as mutexes and semaphores. I can't see what role the 
callback mechanism would fulfill that could not be handled more easily 
using traditional transactional logic.
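
For illustration, a critical section built from nothing but a storage key 
might look like this (a sketch using the later localStorage interface; the 
key name is invented, and the test-and-set below is not truly atomic 
between pages, which is exactly the kind of detail a locking API in the 
spec could settle):

// naive lock: an agreed-upon key acts as the mutex
function withLock(criticalSection) {
  if (localStorage.getItem('app.lock')) return false;  // another page holds it
  localStorage.setItem('app.lock', String(new Date().getTime()));
  try {
    criticalSection();                    // update the shared items
  } finally {
    localStorage.removeItem('app.lock');  // always release
  }
  return true;
}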



== Conclusion ==
In conclusion it appears to me that the proposal is based on several 
fundamentally flawed security assumptions and is overly complex. I see 
this becoming a hiding place for viruses, malware and tracking cookies. 
Any sensible browser manufacturer would turn this feature off or limit 
its scope - thus rendering it inoperable for the many beneficial uses it 
would otherwise have. Those browsers that support this proposal are 
likely to do so in incompatible ways - due largely to the faults and 
omissions in this proposal that it implies UAs will solve. It seems 
like a large amount of browser sniffing will be required to have any 
assurance that persistent storage will work as advertised. Therefore, 
the global storage proposal must be fixed or removed.



Shannon
Web Developer


Re: [whatwg] Persistent storage is critically flawed.

2006-08-27 Thread Shannon Baker
> storage data
> items, or even fully fledged ACL APIs, but I don't think that should
> be available in a first version, and I'm not sure it's really useful
> in later versions either.
Any more or less complex or useful than the .secure flag? Readonly is an 
essential attribute in any shared data system from databases to 
filesystems. Would you advocate that all websites be world-writable just 
to simplify the API? Not that it should be hard to implement .readonly, 
as we already have metadata with each key.



> I don't really understand what this is referring to. Could you show an
> example of the transaction/callback system you refer to? The API is
> intended to be really simple, just specify the item name and there you
> go.
I'm referring to the "storage" event described in 5.9.6 which is fired in 
all active pages as data changes. This is an unusual procedure that 
needs a better justification than those given in the spec. If the event 
pulls me out of my current function then how am I going to do anything 
useful with the application state (without really knowing where 
execution was interrupted)?



> While I agree that there are valid concerns, I believe they are all
> addressed explicitly in the spec, with suggested solutions.
Your points are also quite valid, however they ignore the root of my 
concerns - which is that the spec leaves too much up to the UA to 
resolve. I don't see how you can explicitly define something with a 
suggestion! The whole spec kind of 'hopes' that many disparate 
companies/groups will cooperate to make persistent storage work 
consistently across browsers. They might, but given both Microsoft's and 
Netscape's track records I think things need to be more concrete in such 
an important spec.



> I would be interested in seeing a concrete proposal for a better
> solution; I don't really see what a better solution would be.


I'm not sure myself but I don't think it can stay the way it is. I would 
be happy to offer a better proposal or update the current one given 
enough time to consider it.


As a quick thought, the simplest approach might just be to require that the 
site send a secret hash or public key in order to prove it 'owns' the 
key. The secret could even be a timestamp of the exact time the key was 
set, or just a hash of the user's site login, e.g.:


DOMAIN   KEY  SECRET                    DATA
foo.bar  baz  kj43h545j34h6jk534dfytyf  A string.

Just one idea.
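
As an API sketch of that idea (the call forms are entirely invented):

// setting a key also registers the secret that proves ownership
storage.setItem('baz', 'A string.', 'kj43h545j34h6jk534dfytyf');

// a read from another (sub)domain must present the matching secret
storage.getItem('baz');                              // fails: no secret
storage.getItem('baz', 'kj43h545j34h6jk534dfytyf');  // returns 'A string.'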

Shannon
Web Developer


Re: [whatwg] Persistent storage is critically flawed.

2006-08-29 Thread Shannon Baker

Ian Hickson said (among other things):
It seems that what you are suggesting is that foo.example.com cannot trust 
example.com, because example.com could then steal data from 
foo.example.com. But there's a much simpler attack scenario for 
example.com: it can just take over foo.example.com directly. For example, 
it could insert new HTML code containing 

Re: [whatwg] Codecs for <video> and <audio>

2009-07-01 Thread Adam Shannon
That would have to be done by each browser, not the spec.  Some vendors would
include their own plugins that were considered safe, so they may not feel the
need to sandbox them (even though they should).

On Wed, Jul 1, 2009 at 8:12 AM, Kristof Zelechovski
wrote:

> Regarding the fear of Trojan codecs: it would help if third-party plug-ins
> for codecs could be sandboxed so that they cannot have access to anything
> they do not have to access in order to do their job, and only via an API
> provided by the host.
> IMHO,
> Chris
>
>
>


-- 
- Adam Shannon ( http://ashannon.us )


Re: [whatwg] Codecs for <video> and <audio>

2009-07-02 Thread Adam Shannon
Do you have an idea on how to introduce fallback support for browsers that
don't even support ? How will they be expected to implement a base64
string when they skip the element's attributes?
Might a  tag work with the src="" set to the same string as the base64?
 Or would that contradict the point of allowing , , and
 the extra abilities?  Because if people can just use
the  tag which is more comfortable to them, why would they feel
the urge to switch?

On Thu, Jul 2, 2009 at 8:51 PM, Charles Pritchard  wrote:

> I'd like to see some progress on these two tags.
>
> I'd like people to consider that Vorbis can be implemented
> in virtual machines (Java, Flash) which support raw PCM data.
> Theora is no different.
>
> I'd like to see  support added to the  tag (it's as natural
> as ).
> and enable the  tag to accept raw data lpcm),
> just as the  tag accepts raw data (bitmap).
>
> Then you can support any codec you create, as well as use system codecs.
>
> You can't make the impossible happen (no HD video on an old 300mhz
> machine),
> but you'd have the freedom to do the improbable.
>
> Add raw pcm and sound font support to <audio>,
> add raw pixel support to <canvas> (via CanvasRenderingContext2D).
>
> And add an event handler when subtitles are enabled / disabled.
>
> I have further, more specific comments, below.
> and at the end of the e-mail, two additions to the standard.
>
>  Ian Hickson wrote:
>> I understand that people are disappointed that we can't require Theora
>> support. I am disappointed in the lack of progress on this issue also.
>>
>>
>> On Tue, 30 Jun 2009, Dr. Markus Walther wrote:
>>
>>
>>> Having removed everything else in these sections, I figured there wasn't
>>>> that much value in requiring PCM-in-Wave support. However, I will continue
>>>> to work with browser vendors directly and try to get a common codec at 
>>>> least
>>>> for audio, even if that is just PCM-in-Wave.
>>>>
>>>>
>>>
> I'd think that FLAC would make more sense than PCM-in-Wave,
> as a PNG analog.
>
> Consider the <canvas> element. PNG implementations may be broken.
> Internally, <canvas> accepts a raw byte array, a 32 bit bitmap, and
> allows a string-based export of a compressed bitmap,
> as a base64 encoded 32 bit png.
>
> The <audio> element should accept a raw byte array, 32 bit per sample lpcm,
> and allow a similar export of a base64 encoded file, perhaps using FLAC.
>
> Canvas can currently be used to render unsupported image formats (and
> mediate unsupported image containers),
> it's been proven with ActionScript that a virtual machine can also support
> otherwise unsupported audio codecs.
>
> I'd like to see a font analog in audio as well. Canvas supports the font
> attribute,
> audio could certainly support sound fonts. Use a generated pitch if your
> platform can't or doesn't store sound fonts.
>
>
>  Please, please do so - I was shocked to read that PCM-in-Wave as the
>>> minimal 'consensus' container for audio is under threat of removal, too.
>>>
>>>
>> There seems to be some confusion between codecs and containers.
> WAV, OGG, AVI and MKV are containers, OSC is another.
>
> Codecs are a completely separate matter.
>
> It's very clear that Apple will not distribute the Vorbis and Theora codecs
> with their software packages.
>
> It's likely that Apple would like to use a library they don't have to
> document,
> as required by most open source licenses, and they see no current reason to
> invest
> money into writing a new one. Apple supports many chipsets, and many
> content
> agreements, it would be costly.
>
> I see no reason why Apple could not support the OGG container.
> That said, I see no reason why a list of containers needs to be in the HTML
> 5 spec.
>
>  On Thu, 2 Jul 2009, Charles Pritchard wrote:
>>
>>
>>> Can the standard simply address video containers (OGG, MKV, AVI) ?
>>> Each container is fairly easy to implement and codecs can be identified
>>> within
>>> the container.
>>> Vendors can decide on their own what to do with that information.
>>>
>>>
>>
>> The spec does document how to distinguish containers via MIME type. Beyond
>> that I'm not sure what we can do.
>>
> <video> does support fallback, so in practice you can just use Theora and
> H.264 and cover all bases.
>>
>>
>
> I'd like to see this added to <video> and <audio>:
>
> "User agents should provide controls to enable the manual selection of
> fallback content."
>
> "User agents should provide an activation behavior, when fallback content
> is required, detailing why the primary content could not be used."
>
> Many non-technical users will want to know why there is a black screen (or
> still image), even though they can hear the audio.
>
>
> -Charles
>
>
>


-- 
- Adam Shannon ( http://ashannon.us )


Re: [whatwg] autobuffer on "new Audio" objects

2009-07-05 Thread Adam Shannon
What about slower, public, or Wi-Fi connections that can't support 5 people
going to yahoo.com and having audio interviews load?  Yahoo would assume
that everyone would want to listen to at least the first ~15-30 seconds.

On Sun, Jul 5, 2009 at 7:27 PM, Robert O'Callahan wrote:

> When script creates an audio element using the "new Audio" constructor, the
> 'autobuffer' attribute should be automatically set on that element.
> Presumably scripts will only create audio elements that they actually intend
> to play.
>
> Rob
> --
> "He was pierced for our transgressions, he was crushed for our iniquities;
> the punishment that brought us peace was upon him, and by his wounds we are
> healed. We all, like sheep, have gone astray, each of us has turned to his
> own way; and the LORD has laid on him the iniquity of us all." [Isaiah
> 53:5-6]
>



-- 
- Adam Shannon ( http://ashannon.us )


Re: [whatwg] autobuffer on "new Audio" objects

2009-07-05 Thread Adam Shannon
On Sun, Jul 5, 2009 at 7:58 PM, Robert O'Callahan wrote:

> On Mon, Jul 6, 2009 at 12:36 PM, Adam Shannon wrote:
>
>> What about slower, public, or WIFI connections that can't support 5 people
>> going to yahoo.com and having audio of interviews load?  Yahoo would
>> think that everyone would want to listen to at least the first ~15-30
>> seconds.
>>
>
> What about them? I'm not sure what your point is.
>

Their already-low bandwidth would be crippled more than it already is (by
loading audio files).

>
>
> I think we expect "new Audio" to be used for scripted playing of sounds,
> not to create in-page audio elements.
>

If that is the purpose for the <audio> element then it may be missing out.
 I would love support for in-page audio; it could be used for podcasts,
radio, interviews, etc.
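
A sketch of the distinction being drawn (the file names are invented):

// scripted sound effect - the "new Audio" case, where autobuffer makes sense
var ping = new Audio('ping.wav');
ping.play();

// in-page listening - the markup case, where eager buffering of a long
// interview would waste bandwidth:
// <audio src="interview.ogg" controls></audio>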

>
>
> Rob
> --
> "He was pierced for our transgressions, he was crushed for our iniquities;
> the punishment that brought us peace was upon him, and by his wounds we are
> healed. We all, like sheep, have gone astray, each of us has turned to his
> own way; and the LORD has laid on him the iniquity of us all." [Isaiah
> 53:5-6]
>



-- 
- Adam Shannon ( http://ashannon.us )


Re: [whatwg] Codecs for <video> and <audio> -- informative note?

2009-07-05 Thread Adam Shannon
On Sun, Jul 5, 2009 at 8:02 PM, Jim Jewett  wrote:

> Ian Hickson wrote:
> |   does support fallback, so in practice you can just use Theora
> and
> |  H.264 and cover all bases.
>
> Could you replace the codec section with at least an informative note
> to this effect?  Something like,
>
> "As of 2009, there is no single efficient codec which works on all
> modern browsers.  Content producers are encouraged to supply the video
> in both Theora and H.264 formats, as per the following example"
>
> (If there is an older royalty-free format that is universally
> supported, then please mention that as well, as it will still be
> sufficient for some types of videos, such as crude animations.)


The browser vendors were not able to implement the same codec (because of
patents and copyrights), so no codec was able to be chosen. (
http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2009-July/ )

>
>
> -jJ
>



-- 
- Adam Shannon ( http://ashannon.us )


Re: [whatwg] Codecs for <video> and <audio> -- informative note?

2009-07-06 Thread Adam Shannon
The spec (at least from what I know) wants to create a unified experience;
we don't want users to have a different experience from browser to browser,
nor do developers want to implement hacks for every browser.  If no common
ground can be reached then perhaps no common ground is better than forced
common ground; who knows, it may spark new ideas that we haven't thought of yet.

On Mon, Jul 6, 2009 at 1:18 PM, Aryeh Gregor

> wrote:

> On Mon, Jul 6, 2009 at 4:01 AM, David Gerard wrote:
> > A spec that makes an encumbered format a "SHOULD" is unlikely to be
> > workable for those content providers, e.g. Wikimedia, who don't have
> > the money, and won't under principle, to put up stuff in a format
> > rendered radioactive by known enforced patents.
>
> That's why "should" is not the same as "must".  Those who have a good
> reason not to do it can decline to do it.
>



-- 
- Adam Shannon ( http://ashannon.us )


Re: [whatwg] Make Vorbis a baseline codec for <audio>

2009-07-15 Thread Adam Shannon
On Wed, Jul 15, 2009 at 7:14 PM, Remco  wrote:

> A few years ago, Vorbis as a baseline codec for <audio> was dismissed,
> because it was expected that the audio codec agreed upon to be used
> with <video> would also be used with <audio>. Now that agreement on a
> codec for <video> is out of the question, Vorbis can again be
> considered as a baseline codec for <audio>.
>
> To get the discussion started: a few reasons to require Vorbis for <audio>:
>
> * De facto baseline codec PCM WAV is ridiculous for music and spoken
> word - the major use cases of <audio>
> * Vorbis is the best lossy audio codec
> * Vorbis is widely adopted by major companies in portable media players
> * Vorbis is royalty-free
>

It has been tried but Apple will not implement it due to hardware
limitations.


>
> Remco
>



-- 
- Adam Shannon ( http://ashannon.us )


Re: [whatwg] Make Vorbis a baseline codec for <audio>

2009-07-15 Thread Adam Shannon
On Wed, Jul 15, 2009 at 7:24 PM, David Gerard  wrote:

> 2009/7/16 Adam Shannon :
>
> > It has been tried but Apple will not implement it due to hardware
> > limitations.
>
>
> Hardware limitations or patent limitations? Either seems ill-matched
> to evidence-based reasoning.
>
> What was Apple's issue with Vorbis audio? I'd like to hear from Apple on
> this.
>
> (Someone who is actually speaking for Apple, not someone who appears
> to be speaking for Apple then claims "oh I was just speaking as
> myself" when called on something unacceptable.)
>

This was from an email that Ian posted; I do not know if it is directly from
Apple.  I am just posting it as a reference; you will have to ask Ian about
the source/credibility of the statement.

"Apple refuses to implement Ogg Theora in Quicktime by default (as
used by Safari), citing lack of hardware support and an uncertain
patent landscape."

 ( http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2009-June/020620.html
 )


>
> - d.
>



-- 
- Adam Shannon ( http://ashannon.us )


Re: [whatwg] Removing versioning from HTML

2009-08-09 Thread Adam Shannon
On Sun, Aug 9, 2009 at 11:10 AM, Aaron Boodman  wrote:

> [If this has been discussed before, feel free to just point me there]
>
> I frequently see the comment on this list and in other forums that
> something is "too late" for HTML5, and therefore discussion should be
> deferred.
>
> I would like to propose that we get rid of the concepts of "versions"
> altogether from HTML. In reality, nobody supports all of HTML5. Each
> vendor supports a slightly different subset of the spec, along with
> some features that are outside the spec.
>
> This seems OK to me. Instead of insisting that a particular version of
> HTML is a monolithic unit that must be implemented in its entirety, we
> could have each feature (or logical group of features) spun off into
> its own small spec. We're already doing this a bit with things like
> Web Workers, but I don't see why we don't just do it for everything.
>
> Just as they do now, vendors would decide at the end of the day which
> features they would implement and which they would not. But we should
> never have to say that "the spec is too big". If somebody is
> interested in exploring an idea, they should be able to just start
> doing that.
>
> - a
>


If we never cut things off then the spec will really never be finished
before 2020.  I agree that some things can be reopened, but there are also
some which have been resolved, and any new discussion is coming a year or
more later.


-- 
- Adam Shannon ( http://ashannon.us )


Re: [whatwg] Removing versioning from HTML

2009-08-09 Thread Adam Shannon
>
>
>
> On Sun, Aug 9, 2009 at 9:29 AM, Adam Shannon
> wrote:
> > If we never cut things off then the spec will really never be finished
> > before 2020.
>
> Why does this matter? At the end of the day isn't the goal to have the
> largest number of interoperable features? Consider one reality where
> we try to limit what HTML5 is, and it has n features, and we get an
> average of n*.9 features interoperably implemented on browsers by
> 2020. Consider an alternate reality where we let things be a bit more
> open-ended and we get n*1.5*.9 features interoperably implemented by
> 2020.
>
> Isn't the second reality better?
>
> - a
>

If you are looking for quantity of features then yes, it is better; but if
you are looking for quality of implementations then the latter is not as
good.  I would highly prefer IE to have <video>, <audio>, <canvas> and
geolocation implemented in IE9 than "wasting" that update trying to decide
if we should reopen the codec issue (an example of a possible debate).
Sure, if we could wait 10 years for HTML5 then neither matters, but I don't
think that we have that option. A 20-year spec will not stand the test of
time nor keep up with the pace of development.

-- 
- Adam Shannon ( http://ashannon.us )


Re: [whatwg] Multipage spec

2009-09-30 Thread Adam Shannon
It looks to be back as of now. 16:18 CST

On Wed, Sep 30, 2009 at 3:03 PM, Remy Sharp  wrote:
> I'm sure someone is aware, but the multipage spec is broken, or even not
> there anymore.
>
> http://www.whatwg.org/specs/web-apps/current-work/multipage/
>
> Personally I find the multipage much faster to work with, and I'm sure I'm
> not alone.  So hopefully it can be brought back to life some time soon?
>
> Cheers,
>
> Remy Sharp
>



-- 
- Adam Shannon ( http://ashannon.us )