Re: [whatwg] [foaf-protocols] keygen substitute for Windows?

2010-01-19 Thread Henri Sivonen
On Jan 19, 2010, at 19:26, Bruno Harbulot wrote:

> In Firefox, this prints "SELECT": <keygen> is transformed on the fly into 
> <select>, which breaks DOM usage. This is something that Opera and Safari 
> don't do.

FWIW, this is considered a bug.
https://bugzilla.mozilla.org/show_bug.cgi?id=101019

-- 
Henri Sivonen
hsivo...@iki.fi
http://hsivonen.iki.fi/




Re: [whatwg] Proposal for related change to HTML5 section 4.8.3

2010-01-19 Thread Ian Hickson
On Tue, 8 Dec 2009, Chris Evans wrote:
>
> I propose changing this text:
> 
> "This flag also prevents script from reading the document.cookie IDL
> attribute."
> 
> to
> 
> "This flag also prevents script from reading or writing the document.cookie
> IDL attribute."

Done.

However, this is a purely non-normative change -- the text above has no 
conformance requirements, and has no effect on implementations. The spec 
already required that both reading and writing be blocked in the part 
that actually defines the document.cookie API.
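
For illustration, a minimal sketch of the behaviour that text describes -- 
hypothetical, not from the thread: it uses the later srcdoc attribute for 
brevity, and assumes a browser in which a sandboxed document has a unique 
origin, so that both the read and the write below throw a security exception:

<!-- Hypothetical demo: both cookie operations fail in a sandboxed document. -->
<iframe sandbox="allow-scripts" srcdoc="
  <script>
    try { var c = document.cookie; }           // reading is blocked
    catch (e) { parent.postMessage('read blocked', '*'); }
    try { document.cookie = 'k=v'; }           // writing is blocked too
    catch (e) { parent.postMessage('write blocked', '*'); }
  </script>"></iframe>
<script>
  window.onmessage = function (e) { console.log(e.data); };
</script>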

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'


Re: [whatwg] Web-sockets + Web-workers to produce a P2P website or application

2010-01-19 Thread Andrew de Andrade
On Tue, Jan 19, 2010 at 5:31 PM, Melvin Carvalho
 wrote:
>
> On Tue, Jan 19, 2010 at 5:59 PM, Andrew de Andrade 
> wrote:
>>
>> I have an idea for a possible use case that as far as I can tell from
>> previous discussions on this list has not been considered or at least
>> not in the form I present below.
>>
>> [...]
>
> Yes I sort of see this kind of thing as the future of the web. [...]

Re: [whatwg] Web-sockets + Web-workers to produce a P2P website or application

2010-01-19 Thread Melvin Carvalho
On Tue, Jan 19, 2010 at 5:59 PM, Andrew de Andrade
wrote:

> I have an idea for a possible use case that as far as I can tell from
> previous discussions on this list has not been considered or at least
> not in the form I present below.
>
> [...]
>

Yes I sort of see this kind of thing as the future of the web.  There's an
argument to say that it should have been done 10 or even 20 years ago, but
we're still not there.  I think websockets will be a huge step forward for
this kind of thing.  One issue that still remains is NAT traversal; perhaps
this is what has held developers back, though notable exceptions such as
Skype have provided a great UX here.

Gaming is one obvious application for this, which in many ways is the
pinnacle of software engineering.

I see this kind of technique really bringing linked data into its own
(including RDFa), where browsers become more data aware and more socially
aware and are able to interchange relevant information.  Something like FOAF
(as a means to mark up data) is well suited to provide a distributed network
of peers, can certainly handle global namespaced data naming, and is getting
quite close to solving privacy and security challenges.

I'm really looking forward to seeing what people start to build on top of
this technology, and your idea certainly sounds exciting.


> [...]

Re: [whatwg] [foaf-protocols] keygen substitute for Windows?

2010-01-19 Thread Story Henry

On 19 Jan 2010, at 17:26, Bruno Harbulot wrote:

> Hello Henry,
> 
> 
> Story Henry wrote:
>>> Whilst I'm very supportive of having a key-generation mechanism in the 
>>> browser, I'm now not entirely sure the <keygen> tag, at least as a legacy 
>>> of the Netscape <keygen> tag, is the correct approach.
>> I think that part of the html5 goals is to describe how browsers actually 
>> work, without going into endless debates about how they SHOULD work. Given 
>> that Netscape, Firefox, Opera and Safari implement the <keygen> tag - and 
>> have done so for a very long time - it seems quite reasonable to describe 
>> that behaviour in html5. 
> 
> As far as I understand, <keygen> was, if not officially deprecated, at least 
> not recommended in Firefox, since the introduction of generateCRMFRequest.
> 
> 
> I wouldn't say <keygen> is greatly implemented, even in Firefox.
> Consider the following HTML document:
> 
> <html>
> <head>
> <script type="text/javascript">
> function writeTagName() {
>  document.getElementById("title").appendChild(
>  document.createTextNode(document.getElementById("keygen").tagName));
> }
> </script>
> </head>
> <body onload="writeTagName()">
> <h1 id="title"></h1>
> <keygen id="keygen">
> </body>
> </html>
> 
> In Firefox, this prints "SELECT": <keygen> is transformed on the fly into 
> <select>, which breaks DOM usage. This is something that Opera and Safari 
> don't do.

(( should not <keygen> be inside a <form> ? ))

> Even across Firefox, Opera and Safari, the behaviour of keygen isn't uniform.

I think they rarely are. This is why the WhatWG is documenting these 
inconsistencies...
The trick is to find the overlaps in the behaviour and the differences, and 
then work out from there what options for development exist.

> The choice of "High grade" and "Low grade" is left to the appreciation of 
> Firefox, whereas a proper CA would certainly require a bit more precision. In 
> contrast, Opera offers a much longer list of key sizes, defaulting somewhere 
> around 1500 bits (I don't have Opera on this machine).

That is not a problem if the browser user interface is different. Perhaps the 
different browsers have different expectations of their users' abilities. 
Perhaps keygen can be later extended to make it possible to be more precise. It 
seems to me that keygen is there to produce a key in a form, and send it with 
the form. How it gets the key is a lot less important.


> One of the other points (which I think I've seen mentioned on this mailing 
> list) is that <keygen> doesn't really fit as a form element. There's a number 
> of parameters that can be set to generate a pair of keys. Why assume that the 
> keysize (and only the keysize) is to be chosen by the user while all the 
> others are set within the page? It might make sense, in some circumstances, to 
> have it all fixed on the page (by the service provider) or to let the user 
> also choose whether to use RSA or DSA, for example. (Along the same argument, 
> why assume 'md5WithRSAEncryption' and not SHA-1?)
> It just looks like it doesn't belong in a form this way.

But certainly that type of thing could be added to a <keygen> extension?


> I'd go even further than this in fact: why always *generate* a key-pair?
> Whether it's used for a PKIX CA or FOAF+SSL, why not leave the option to use 
> an existing pair of keys available in whatever key store the browser has 
> access to? (That would in fact be quite useful for FOAF+SSL applications.)
> If I send a CSR to a CA for signing, does it (even can it) know where those 
> keys have been generated? Perhaps it might make more sense in some cases to 
> re-use an existing pair of keys available in a smart-card or even some 
> software key store.

Good idea. Since the private keys don't leave the store, the browser could ask 
the user to re-use a key. In fact I would say that if a browser vendor allowed 
a user to do this, he would not necessarily be going against the spirit of 
keygen.

>> Once this is described it is then possible to find ways either to extend on 
>> the current behaviour or to find ways to improve it. Until now this topic 
>> was only something a few people could discuss.
>>> More specifically:
>>> 
>>> 1. The more modern APIs (generateCRMFRequest [1] on Firefox or 
>>> CertEnroll/XEnroll on Internet Explorer [2]) appear to offer more options 
>>> in general, for example, where to store the private key, is it exportable, 
>>> etc. (I haven't looked in detail, but I suspect it could be envisaged to 
>>> use some existing key material from a software store or smartcard too, for 
>>> example.)
>>> This raises the question as to whether a tag is sufficient or appropriate 
>>> to express what's required for a CA, or if an API (and more programming) is 
>>> required.
>> I think there should be a strong preference for declarative ways of doing 
>> things if possible, i.e. to use HTML tags. Moving over to javascript has 
>> always seemed to me to be breaking one foundational element of the web.
> 
> The problem is that there's only so much one can do declaratively in this 
> field, precisely because some of this involves the security architecture of 
> the overall system in which the browser runs, which by essence will have 
> parts that do not belong within the browser, or at least ought to be outside 
> the direct reach of what HTML can do (Windows certificate store, Apple 
> Keychain...).

Re: [whatwg] Web-sockets + Web-workers to produce a P2P website or application

2010-01-19 Thread Andrew de Andrade
I emailed this idea to my friend Patrick Chanezon (@chanezon) from Google a
few weeks ago to get his initial thoughts on the idea, as he has been posting
lots of links related to web sockets lately.

He suggested that the idea would be implemented at the browser
level in C++. He also said that there are many privacy issues,
especially for social apps, with implementing P2P at the application
layer. Finally he added that his colleague at Google, Brad Neuberg
(@bradneuberg), had a related project named Paper Airplane that he worked
on from 2001 to 2004.

Here's the link to Brad's paper on Paper Airplane from 2005.

http://codinginparadise.org/paperairplane/

-- Andrew J L de Andrade
@andrewdeandrade

On Tue, Jan 19, 2010 at 3:07 PM,   wrote:
> as someone who just listens in and is not technically savvy ...but is
> helping build interactive television and film production to be browser
> based... I really want to hear more about this.
>
> On Jan 19, 2010 11:59am, Andrew de Andrade  wrote:
>> I have an idea for a possible use case that as far as I can tell from
>> previous discussions on this list has not been considered or at least
>> not in the form I present below.
>>
>> [...]

Re: [whatwg] [foaf-protocols] keygen substitute for Windows?

2010-01-19 Thread Bruno Harbulot

Hello Henry,


Story Henry wrote:
>> Whilst I'm very supportive of having a key-generation mechanism in the 
>> browser, I'm now not entirely sure the <keygen> tag, at least as a 
>> legacy of the Netscape <keygen> tag, is the correct approach.
>
> I think that part of the html5 goals is to describe how browsers actually 
> work, without going into endless debates about how they SHOULD work. Given 
> that Netscape, Firefox, Opera and Safari implement the <keygen> tag - and 
> have done so for a very long time - it seems quite reasonable to describe 
> that behaviour in html5.


As far as I understand, <keygen> was, if not officially deprecated, at 
least not recommended in Firefox, since the introduction of 
generateCRMFRequest.



I wouldn't say <keygen> is greatly implemented, even in Firefox.
Consider the following HTML document:

<html>
<head>
<script type="text/javascript">
function writeTagName() {
  document.getElementById("title").appendChild(
  document.createTextNode(document.getElementById("keygen").tagName));
}
</script>
</head>
<body onload="writeTagName()">
<h1 id="title"></h1>
<keygen id="keygen">
</body>
</html>
In Firefox, this prints "SELECT": <keygen> is transformed on the fly 
into <select>, which breaks DOM usage. This is something that Opera and 
Safari don't do.



Even across Firefox, Opera and Safari, the behaviour of keygen isn't 
uniform.
The choice of "High grade" and "Low grade" is left to the appreciation 
of Firefox, whereas a proper CA would certainly require a bit more 
precision. In contrast, Opera offers a much longer list of key sizes, 
defaulting somewhere around 1500 bits (I don't have Opera on this machine).



One of the other points (which I think I've seen mentioned on this 
mailing list) is that <keygen> doesn't really fit as a form element. 
There's a number of parameters that can be set to generate a pair of 
keys. Why assume that the keysize (and only the keysize) is to be chosen 
by the user while all the others are set within the page? It might make 
sense, in some circumstances, to have it all fixed on the page (by the 
service provider) or to let the user also choose whether to use RSA or 
DSA, for example. (Along the same argument, why assume 
'md5WithRSAEncryption' and not SHA-1?)

It just looks like it doesn't belong in a form this way.


I'd go even further than this in fact: why always *generate* a key-pair?
Whether it's used for a PKIX CA or FOAF+SSL, why not leave the option to 
use an existing pair of keys available in whatever key store the browser 
has access to? (That would in fact be quite useful for FOAF+SSL 
applications.)
If I send a CSR to a CA for signing, does it (even can it) know where 
those keys have been generated? Perhaps it might make more sense in some 
cases to re-use an existing pair of keys available in a smart-card or 
even some software key store.




> Once this is described it is then possible to find ways either to extend on 
> the current behaviour or to find ways to improve it. Until now this topic 
> was only something a few people could discuss.

>> More specifically:
>>
>> 1. The more modern APIs (generateCRMFRequest [1] on Firefox or 
>> CertEnroll/XEnroll on Internet Explorer [2]) appear to offer more 
>> options in general, for example, where to store the private key, is it 
>> exportable, etc. (I haven't looked in detail, but I suspect it could be 
>> envisaged to use some existing key material from a software store or 
>> smartcard too, for example.)
>> This raises the question as to whether a tag is sufficient or 
>> appropriate to express what's required for a CA, or if an API (and more 
>> programming) is required.

> I think there should be a strong preference for declarative ways of doing 
> things if possible, i.e. to use HTML tags. Moving over to javascript has 
> always seemed to me to be breaking one foundational element of the web.


The problem is that there's only so much one can do declaratively in 
this field, precisely because some of this involves the security 
architecture of the overall system in which the browser runs, which by 
essence will have parts that do not belong within the browser, or at 
least ought to be outside the direct reach of what HTML can do (Windows 
certificate store, Apple Keychain...).




> As proof of the advantage of this way of working: the keygen tag has 
> functioned across browser generations without change (I think).


Well, I haven't followed the complete history of generateCRMFRequest, 
but there must be a reason why it was invented as a successor of 
<keygen>. I have no idea what the ratio of modern CAs that still use 
<keygen> vs. those that use generateCRMFRequest is. The one I use 
regularly seems to use generateCRMFRequest.




> Microsoft's ActiveX component on the other hand (as I understand) required 
> the calling of a Windows specific binary technology! The naming of a dll. 
> This meant that when they changed the dll, code that was written for 
> browsers also had to change!
>
> http://msdn.microsoft.com/en-us/library/bb931379%28VS.85%29.aspx
>
> [[
> Prior to Windows Vista, the Certificate Enrollment Control was implemented 
> in Xenroll.dll. The Xenroll.dll library has been removed from the operating 
> system and replaced by CertEnroll.dll.]]
>
> The web is described with no reference to CPU architecture. I am seriously 
> against bringing such low level aspects into day to day web programming.

Re: [whatwg] Web-sockets + Web-workers to produce a P2P website or application

2010-01-19 Thread dlwillson
as someone who just listens in and is not technically savvy ...but is  
helping build interactive television and film production to be browser  
based... I really want to hear more about this.


On Jan 19, 2010 11:59am, Andrew de Andrade  wrote:
> I have an idea for a possible use case that as far as I can tell from
> previous discussions on this list has not been considered or at least
> not in the form I present below.
>
> [...]

[whatwg] Web-sockets + Web-workers to produce a P2P website or application

2010-01-19 Thread Andrew de Andrade
I have an idea for a possible use case that as far as I can tell from
previous discussions on this list has not been considered or at least
not in the form I present below.

I have a friend whose company produces and licenses online games for
social networks such as Facebook, Orkut, etc.

One of the big problems with these games is the sheer amount of static
content that must be delivered via HTTP once the application becomes
popular. In fact, if a game becomes popular overnight, the scaling
problems with this static content quickly become a technical and
financial problem.

To give you an idea of the magnitude and scope, more than 4 TB of
static content is streamed on a given day for one of the applications.
It's very likely that others with similarly popular applications have
encountered the same challenge.

When thinking about how to resolve this, I took my usual approach of
thinking how do we decentralize the content delivery and move towards
an agent-based message passing model so that we do not have a single
bottleneck technically and so we can dissipate the cost of delivering
this content.

My idea is to use web-sockets to allow the browser to function more or
less like a bit-torrent client. Along with this, web-workers would
provide threads for handling the code that would function as a server,
serving the static content to peers also using the program.
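
A rough sketch of the shape this could take -- hypothetical throughout:
browser WebSockets only connect out to a server, never to another browser,
so "wss://relay.example/swarm" stands in for an assumed relay that forwards
chunk messages between peers in a swarm, and the JSON message format is
invented for illustration:

// peer-worker.js -- hypothetical sketch. A dedicated worker holds chunks
// and answers requests forwarded by the assumed relay server.
var chunks = {};                        // chunk id -> data
var ws = new WebSocket('wss://relay.example/swarm');

ws.onmessage = function (event) {
  var msg = JSON.parse(event.data);
  if (msg.type === 'request' && chunks[msg.id]) {
    // We hold the requested chunk: send it back via the relay.
    ws.send(JSON.stringify({ type: 'chunk', to: msg.from,
                             id: msg.id, data: chunks[msg.id] }));
  } else if (msg.type === 'chunk') {
    chunks[msg.id] = msg.data;          // cache what a peer served us
    postMessage(msg);                   // hand it to the page
  }
};

// The page seeds chunks it fetched over plain HTTP with worker.postMessage().
onmessage = function (event) {
  chunks[event.data.id] = event.data.data;
};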

If you have lots of users (thousands) accessing the same application,
you effectively have the equivalent of one torrent with a large swarm
of users, where the torrent is a package of the most frequently
requested static content. (I am assuming that the static content
requests follow a power law distribution, with only a few static files
being responsible for the overwhelming bulk of static data
transferred.)

As I have only superficial knowledge of the technologies involved and
the capabilities of HTML5, I passed this idea by a couple of
programmer friends to get their opinions. Generally they thought it
was a very interesting idea, but that as far as they know, the
specification as it stands now is incapable of accommodating such a
use case.

Together we arrived at a few criticisms of this idea that appear to be
resolvable:

-- Privacy issues
-- Security issues (man in the middle attack).
-- content labeling (i.e. how does the browser know what content is
truly static and therefore safe to share.)
-- content signing (i.e. is there some sort of hash that allows the
peers to confirm that the content has not been adulterated).

All in all, many of these issues have been solved by the many talented
programmers that have developed the current bit-torrent protocol,
algorithms and security features. The idea would simply be to design
HTML5 in such a way that it can permit the browser to function as a
full-fledged web-application bit-torrent client-server.

Privacy issues can be resolved by possibly defining something such as
"browser security zones" or "content label" whereby the content
provider (application developer) labels content (such as images and
CSS files) as safe to share (static content) and labels dynamic
content (such as personal photos, documents, etc.) as unsafe to share.

Also in discussing this, we came up with some potentially useful
extensions to this use case.

One would be the versioning of the "torrent file", such that the
torrent file could represent versions of the application. i.e. I
release an application that is version 1.02 and it becomes very
popular and there is a sizable swarm. At some point in the future I
release a new version with bug-fixes and additional features (such as
CSS sprites for the social network game). I should be able to
propagate this new version to all clients in the swarm so that over
some time window such as 2 to 4 hours all clients in the swarm
discover (via push or pull) the new version and end up downloading it
from the peers with the new version. The only security feature I could
see that would be required would be that once a client discovers that
there is a new version, it would hit up the original server to
download a signature/fingerprint file to verify that the new version
that it is downloading from its peers is legitimate.
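
That check might look like the sketch below -- hypothetical: the ".sig" URL
and fingerprint format are invented, and sha1Hex is a placeholder for a
JavaScript digest routine, since browsers offer no built-in hash function
for page script to call here:

// Hypothetical integrity check for a peer-supplied update.
function sha1Hex(bytes) {
  // Placeholder: a real page would ship a JS SHA-1 implementation.
  throw new Error('plug in a JS SHA-1 routine');
}

function verifyUpdate(version, bytesFromPeer, onVerified) {
  var xhr = new XMLHttpRequest();
  // Ask the original (trusted) server for the expected fingerprint.
  xhr.open('GET', '/releases/' + version + '.sig', true);
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      var expected = xhr.responseText.replace(/\s+/g, '');
      if (sha1Hex(bytesFromPeer) === expected) {
        onVerified(bytesFromPeer);  // fingerprints match: safe to apply
      }
      // Otherwise discard and re-fetch from another peer or the origin.
    }
  };
  xhr.send(null);
}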

The interesting thing about this idea is that it would permit large
portions of sites to exist in virtual form. Long-term I can imagine
large non-profit sites such as Wikipedia functioning on top of this
structure in such a way that it greatly reduces the amount of funding
necessary. It would be partially distributed with updates to Wikipedia
being distributed via lots of tiny versions from super-nodes, à la the
Skype P2P model.

This would also take a lot of power out of the hands of those telcos
that are anti-net neutrality. This feature would basically permit a
form of net neutrality by moving content to the fringes of the
network.

Let me know your thoughts and if you think this would be possible
using Web-sockets and web-workers, and if not, what ch

Re: [whatwg] [foaf-protocols] keygen substitute for Windows?

2010-01-19 Thread Story Henry
> Hello,
> 
> Apologies for the late participation on this topic. I've been working on 
> FOAF+SSL with Henry Story (who advocated a few months ago the 
> introduction of <keygen> in HTML 5 for this purpose).
> I've only just found the time to investigate the certificate generation 
> issue on Windows/Internet Explorer (using ActiveX, XEnroll and 
> CertEnroll). I've updated this wiki page accordingly: 
> 
> http://esw.w3.org/topic/foaf%2Bssl

Thanks for this great investigative work.

> 
> Whilst I'm very supportive of having a key-generation mechanism in the 
> browser, I'm now not entirely sure the <keygen> tag, at least as a 
> legacy of the Netscape <keygen> tag, is the correct approach.

I think that part of the html5 goals is to describe how browsers actually work, 
without going into endless debates about how they SHOULD work. Given that 
Netscape, Firefox, Opera and Safari implement the <keygen> tag - and have done 
so for a very long time - it seems quite reasonable to describe that behaviour 
in html5. 

Once this is described it is then possible to find ways either to extend on the 
current behaviour or to find ways to improve it. Until now this topic was only 
something a few people could discuss.

> More specifically:
> 
> 1. The more modern APIs (generateCRMFRequest [1] on Firefox or 
> CertEnroll/XEnroll on Internet Explorer [2]) appear to offer more 
> options in general, for example, where to store the private key, is it 
> exportable, etc. (I haven't looked in detail, but I suspect it could be 
> envisaged to use some existing key material from a software store or 
> smartcard too, for example.)
> This raises the question as to whether a tag is sufficient or 
> appropriate to express what's required for a CA, or if an API (and more 
> programming) is required.

I think there should be a strong preference for declarative ways of doing 
things if possible, i.e. to use HTML tags. Moving over to javascript has always 
seemed to me to be breaking one foundational element of the web.

As proof of the advantage of this way of working: the keygen tag has functioned 
across browser generations without change (I think).

Microsoft's ActiveX component on the other hand (as I understand) required the 
calling of a Windows specific binary technology! The naming of a dll. This 
meant that when they changed the dll, code that was written for browsers also 
had to change!

http://msdn.microsoft.com/en-us/library/bb931379%28VS.85%29.aspx

[[
Prior to Windows Vista, the Certificate Enrollment Control was implemented in 
Xenroll.dll. The Xenroll.dll library has been removed from the operating system 
and replaced by CertEnroll.dll.]]

The web is described with no reference to CPU architecture. I am seriously 
against bringing such low level aspects into day to day web programming. 

> 
> 2. The SPKAC format seems to be a legacy format. It doesn't really allow 
> one to convey much information that CAs would expect, unlike other formats 
> used by the more modern APIs [3][4]. Perhaps it would be better to use 
> one of the newer formats instead. This might break the compatibility 
> with the pre-HTML 5 use of <keygen> (maybe another name than <keygen> in 
> HTML5 would be better?).

Perhaps extensions to keygen would be an interesting idea. 
At least it is documented now.

> 
> Of course, the other big question is whether it's worth trying to 
> standardise this <keygen> tag if there's no intent of support from major 
> browser vendors (I have IE in mind here).

There are 3 browser vendors that have implemented it. That is enough of a 
precedent to standardise. If one browser vendor requires people to use binaries 
that tie people to their platform, it seems that it is quite clear what the 
reasons for that may be, and those reasons have in the past been deemed legally 
condemnable by both US and EU courts. Let us rather assume that that vendor 
decided to pursue that activity due to lack of standardisation in this space. 

Henry

> 
> Best wishes,
> 
> Bruno.
> 
> 
> [1] https://developer.mozilla.org/en/GenerateCRMFRequest
> [2] http://msdn.microsoft.com/en-us/library/aa374863%28VS.85%29.aspx
> [3] http://tools.ietf.org/html/rfc2986
> [4] http://tools.ietf.org/html/rfc4211



Re: [whatwg] Microdata feedback

2010-01-19 Thread Ian Hickson

On Mon, 18 Jan 2010, Aryeh Gregor wrote:
> On Mon, Jan 18, 2010 at 7:58 AM, Ian Hickson  wrote:
> > I've made it redirect to the spec.
> 
> Could you say that the URL *should* provide human-readable information 
> about the vocabulary?  We all know the problems with having 
> centrally-stored machine-readable data about your specs, but encouraging 
> the URL to provide human-readable info seems helpful.  (If they aren't 
> supposed to be dereferenced, why use HTTP?)

Why indeed. Is there something else we could use instead?
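
For concreteness, a hypothetical example of the kind of vocabulary URL under
discussion (the vocabulary and markup below are invented): an author who
pastes the itemtype URL into a browser would ideally land on a human-readable
description of the vocabulary.

<!-- http://example.org/vocab/cat is an assumed, dereferenceable vocabulary URL. -->
<section itemscope itemtype="http://example.org/vocab/cat">
  <h1 itemprop="name">Hedral</h1>
  <p itemprop="description">A male domestic shorthair.</p>
</section>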


> > Graphs are intended to be supported in v2, using a mechanism
> 
> You seem to have left this sentence unfinished.

...using a mechanism intended for that purpose. Nothing to see here. :-)


On Mon, 18 Jan 2010, Julian Reschke wrote:
> 
> SHOULD return human-readable information is good, if you also add SHOULD 
> NOT automatically dereference.

I've added something akin to that SHOULD NOT, but the spec doesn't have a 
"specification" conformance class, so there's nothing to apply the SHOULD 
to. So I haven't added it. (I don't generally think specifications being 
conformance classes really makes much sense.)

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'