Re: [WARP] Extending access to local network resources

2010-01-20 Thread Arve Bersvendsen
On Thu, 14 Jan 2010 19:04:18 +0100, Stephen Jolly wrote:



There are a very large number of such networks in use worldwide: I  
believe that the vast majority of home networks and many wireless  
networks fall into this category.  The BBC is specifically concerned  
that the lack of distinction between local network resources and  
Internet resources in the current WARP model could prevent widgets from  
being able to access network resources served from media devices on home  
networks.


Anyway, the specific proposal I would like to make is for another  
special value of the "origin" attribute of the "access" element in the  
widget configuration document called "link-local" or similar, an  
associated special value in the access-request list and an associated  
processing rule that says that when the "link-local" value is specified,  
the user agent should grant access to network resources on the same  
subnet(s) as the user agent.
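
As a concrete illustration, the proposal above might look like this in a widget configuration document (a hypothetical sketch: the "link-local" origin value is only the proposal in this thread, not part of the published WARP spec):

```xml
<!-- Hypothetical: "link-local" is the value proposed above, not part
     of the published WARP spec. It would grant the widget access to
     network resources on the same subnet(s) as the user agent. -->
<widget xmlns="http://www.w3.org/ns/widgets">
  <access origin="link-local"/>
</widget>
```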


Just so we are on the same page here: by link-local, do you mean exactly what  
(for IPv4) is defined in RFC 3927, which roughly translates to «two  
devices connected directly, without involvement of DHCP» - a.k.a.  
169.254.0.0/16?


2. Users are likely to want control over which specific networks a  
widget is granted access to, rather than just a blanket "yes" or "no"  
permission to access whatever local network(s) to which the host may be  
connected when the widget is running.  I don't think that this is  
something that can or should be dealt with in the configuration of  
widgets.  I believe that good user experiences can be constructed to  
give the user that control, but I won't go into detail unless somebody  
asks me to.


I don't think going into detail is necessary at this stage.

3. I would expect most *useful* widgets that can access local network  
resources to need some kind of ability to browse the local link for  
resources to access.  Again, I think that's out of scope for a WARP  
alteration/supplement; it's the sort of thing people use mDNS + DNS-SD  
or UPNP's SSDP for, but those aren't web protocols, and Robin's  
threatened to drag me into the DAP WG if I start talking about device  
APIs.


The mDNS-bit is about local service discovery, and likely belongs in the  
DAP, yes.



4. Clearly the "local network" and the "local link" are not necessarily  
the same thing.  I'm proposing a solution targeting the latter because  
(a) it actually has a useful definition and (b) I believe it to be  
sufficient for the use cases I care about.


Provided my understanding of link-local is in line with yours, I would  
prefer a mechanism for accessing the local network.


I look forward to your comments and criticisms - in particular I would  
like to understand the holes that Arve says are opened by making a  
distinction between "local" and "remote" IP addresses.


To moderate my statement a bit - it's more a concern than a risk: once you  
allow access to the "local network" at all, and cross-domain security has  
been relaxed, a maliciously crafted widget can potentially attack local  
networking equipment such as routers. (This risk also exists on the web,  
but is generally less practical there, since the same-origin policy leaves  
an attacker shooting blind.)


The other problem is that the definition of "local network" is not entirely  
clear - the archetypal definition is the IPv4 one, with four reserved IP  
ranges.  That definition breaks for IPv6, and it breaks for networks not  
using NAT.  To have a useful definition, the network would have to provide  
information about the locality of any given host a widget tries to access.
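
To make the archetypal definition concrete, here is a JavaScript sketch of that IPv4-only notion of "local" (the function name is invented for illustration; it covers the RFC 1918 private ranges plus RFC 3927 link-local, says nothing about IPv6, and misses routable addresses that happen to sit on the local network):

```javascript
// The archetypal IPv4 "local" check: three RFC 1918 private ranges
// plus the RFC 3927 link-local range. This is exactly the definition
// that breaks for IPv6 and for networks not using NAT.
function isArchetypallyLocal(ip) {
  const [a, b] = ip.split('.').map(Number);
  return a === 10 ||                          // 10.0.0.0/8
         (a === 172 && b >= 16 && b <= 31) || // 172.16.0.0/12
         (a === 192 && b === 168) ||          // 192.168.0.0/16
         (a === 169 && b === 254);            // 169.254.0.0/16 (link-local)
}
```

A widget runtime relying on a table like this would grant or deny access based purely on address shape, which is precisely why the network itself would need to supply locality information.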




--
Arve Bersvendsen

Opera Software ASA, http://www.opera.com/



Re: FYI: review of draft-abarth-mime-sniff-03

2010-01-20 Thread Adam Barth
Thanks Larry.  I'm not subscribed to apps-discuss, so I might not see
discussion that takes place there.  I'm not sure what the best venue
for discussing the draft might be.  I suspect that public-html,
whatwg, or ietf-http-wg might have the most knowledgeable folks.

Adam


On Wed, Jan 20, 2010 at 3:27 PM, Larry Masinter  wrote:
> Since raised on W3C TAG
> http://lists.w3.org/Archives/Public/www-tag/2010Jan/0076.html:
>
> I reviewed draft-abarth-mime-sniff. I’m not sure I found all of the past
> discussion on the document, and I probably got some things wrong, but it hasn’t
> been updated in quite a while.
>
> I sent the review to apps-discuss (since it deals with non-HTTP sniffing as
> well):
>
> http://www.ietf.org/mail-archive/web/apps-discuss/current/msg01250.html
>
> (discussion on apps-disc...@ietf.org)
>
> Since there are several W3C documents advancing that make normative
> reference to this, getting timely review should be a priority.
>
> Larry
> --
> http://larry.masinter.net



Re: File API: Blob and underlying file changes.

2010-01-20 Thread Eric Uhrhane
On Wed, Jan 20, 2010 at 3:23 PM, Dmitry Titov  wrote:
> On Wed, Jan 20, 2010 at 2:30 PM, Eric Uhrhane  wrote:
>>
>> I think it could.  Here's a third option:
>> Make all blobs, file-based or not, just as async as the blobs in
>> option 2.  They never do sync IO, but could potentially fail future
>> read operations if their metadata is out of date [e.g. reading beyond
>> EOF].  However, expose the modification time on File via an async
>> method and allow the user to pass it in to a read call to enforce
>> "fail if changed since this time".  This keeps all file accesses
>> async, but still allows for chunked uploads without mixing files
>> accidentally.  If we allow users to refresh the modification time
>> asynchronously, it also allows for adding a file to a form, changing
>> the file on disk, and then uploading the new file.  The user would
>> look up the mod time when starting the upload, rather than when the
>> file's selected.
>
> It would be great to avoid sync file I/O on calls like Blob.size. They would
> simply return a cached value. Any mismatch would be detected during the actual
> read operation.
> However, I'm then not sure how to keep File derived from Blob, since:
> 1) Currently, in FF and WebKit File.fileSize is a sync I/O that returns
> current file size. The current spec says File is derived from Blob and Blob
> has Blob.size property that is likely going to co-exist with File.fileSize
> for a while, for compat reasons. It's weird for file.size and file.fileSize
> to return different things.

True, but we'd probably want to deprecate file.fileSize anyway and
then get rid of it, since it's synchronous.

> 2) Currently, xhr.send(file) does not fail and sends the version of the file
> that exists around the time the xhr.send(file) call was issued. Since File is
> also a Blob, xhr.send(blob) would behave the same which means if we want to
> preserve this behavior the Blob can not fail async read operation if file
> has changed.
> There is a contradiction here. One way to resolve it would be to break "File
> is Blob" and to be able to "capture the File as Blob" by having
> file.getAsBlob(). The latter would make a snapshot of the state of the file,
> to be able to fail subsequent async read operations if the file has been
> changed.
> I've asked a few people around in a non-scientific poll and it seems
> developers expect Blob to be a 'snapshot', reflecting the state of the file
> (or Canvas if we get Canvas.getBlob(...)) at the moment of Blob creation.
> Since it's obviously bad to actually copy data, it seems acceptable to
> capture enough information (like mod time) so the read operations later can
> fail if underlying storage has been changed. It feels really strange if
> reading the Blob can yield some data from one version of a file (or Canvas)
> mixed with some data from newer version, without any indication that this is
> happening.
> All that means there is an option 3:
> 3. Treat all Blobs as 'snapshots' that refer to the range of underlying data
> at the moment of creation of the Blob. Blobs produced further by
> Blob.slice() operation inherit the captured state w/o actually verifying it
> against 'live' underlying objects like files. All Blobs can be 'read' (or
> 'sent') via operations that can fail if the underlying content has changed.
> Optionally, expose snapshotTime property and perhaps "read if not changed
> since" parameter to read operations. Do not derive File from Blob, rather
> have File.getAsBlob() that produces a Blob which is a snapshot of the file
> at the moment of call. The advantage here is that it removes the need for sync
> operations from Blob and provides a mechanism to ensure that changes to the
> underlying storage are detectable. The disadvantage is a bit more complexity
> and a bigger change to the File spec.

That sounds good to me.  If we're treating blobs as snapshots, I
retract my suggestion of the read-if-not-changed-since parameter.  All
reads after the data has changed should fail.  If you want to do a
chunked upload, don't snapshot your file into a blob until you're
ready to start.  Once you've done that, just slice off parts of the
blob, not the file.
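
The snapshot semantics can be illustrated with a toy in-memory model (FakeFile and FakeBlob are invented names, and getAsBlob() is only the proposal discussed above, not a shipping API):

```javascript
// Toy model of "Blob as snapshot": a blob captures the file's
// modification time at creation; slices inherit it; any read after
// the underlying data changes fails instead of mixing versions.
class FakeFile {
  constructor(data) { this.data = data; this.mtime = 0; }
  write(data) { this.data = data; this.mtime += 1; }
  getAsBlob() { return new FakeBlob(this, 0, this.data.length, this.mtime); }
}

class FakeBlob {
  constructor(file, start, end, mtime) {
    this.file = file; this.start = start; this.end = end; this.mtime = mtime;
  }
  // slice(start, length), as in the drafts of this era; inherits the
  // captured mtime without re-checking the live file.
  slice(start, length) {
    const from = this.start + start;
    return new FakeBlob(this.file, from,
                        Math.min(from + length, this.end), this.mtime);
  }
  read() {
    if (this.file.mtime !== this.mtime) {
      throw new Error('underlying file changed since snapshot');
    }
    return this.file.data.slice(this.start, this.end);
  }
}
```

With this model a chunked upload snapshots once, slices the blob repeatedly, and any chunk read after the file is overwritten fails rather than silently returning a mix of old and new contents.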



FYI: review of draft-abarth-mime-sniff-03

2010-01-20 Thread Larry Masinter
Since raised on W3C TAG 
http://lists.w3.org/Archives/Public/www-tag/2010Jan/0076.html:

I reviewed draft-abarth-mime-sniff. I'm not sure I found all of the past 
discussion on the document, and I probably got some things wrong, but it hasn't been 
updated in quite a while.

I sent the review to apps-discuss (since it deals with non-HTTP sniffing as 
well):

http://www.ietf.org/mail-archive/web/apps-discuss/current/msg01250.html

(discussion on apps-disc...@ietf.org)

Since there are several W3C documents advancing that make normative reference 
to this, getting timely review should be a priority.

Larry
--
http://larry.masinter.net




Re: File API: Blob and underlying file changes.

2010-01-20 Thread Dmitry Titov
On Wed, Jan 20, 2010 at 2:30 PM, Eric Uhrhane  wrote:

> I think it could.  Here's a third option:
>
> Make all blobs, file-based or not, just as async as the blobs in
> option 2.  They never do sync IO, but could potentially fail future
> read operations if their metadata is out of date [e.g. reading beyond
> EOF].  However, expose the modification time on File via an async
> method and allow the user to pass it in to a read call to enforce
> "fail if changed since this time".  This keeps all file accesses
> async, but still allows for chunked uploads without mixing files
> accidentally.  If we allow users to refresh the modification time
> asynchronously, it also allows for adding a file to a form, changing
> the file on disk, and then uploading the new file.  The user would
> look up the mod time when starting the upload, rather than when the
> file's selected.


It would be great to avoid sync file I/O on calls like Blob.size. They would
simply return a cached value. Any mismatch would be detected during the actual
read operation.

However, I'm then not sure how to keep File derived from Blob, since:

1) Currently, in FF and WebKit File.fileSize is a sync I/O that returns
current file size. The current spec says File is derived from Blob and Blob
has Blob.size property that is likely going to co-exist with File.fileSize
for a while, for compat reasons. It's weird for file.size and file.fileSize
to return different things.

2) Currently, xhr.send(file) does not fail and sends the version of the file
that exists around the time the xhr.send(file) call was issued. Since File is
also a Blob, xhr.send(blob) would behave the same which means if we want to
preserve this behavior the Blob can not fail async read operation if file
has changed.

There is a contradiction here. One way to resolve it would be to break "File
is Blob" and to be able to "capture the File as Blob" by having
file.getAsBlob(). The latter would make a snapshot of the state of the file,
to be able to fail subsequent async read operations if the file has been
changed.

I've asked a few people around in a non-scientific poll and it seems
developers expect Blob to be a 'snapshot', reflecting the state of the file
(or Canvas if we get Canvas.getBlob(...)) at the moment of Blob creation.
Since it's obviously bad to actually copy data, it seems acceptable to
capture enough information (like mod time) so the read operations later can
fail if underlying storage has been changed. It feels really strange if
reading the Blob can yield some data from one version of a file (or Canvas)
mixed with some data from newer version, without any indication that this is
happening.

All that means there is an option 3:

3. Treat all Blobs as 'snapshots' that refer to the range of underlying data
at the moment of creation of the Blob. Blobs produced further by
Blob.slice() operation inherit the captured state w/o actually verifying it
against 'live' underlying objects like files. All Blobs can be 'read' (or
'sent') via operations that can fail if the underlying content has changed.
Optionally, expose snapshotTime property and perhaps "read if not changed
since" parameter to read operations. Do not derive File from Blob, rather
have File.getAsBlob() that produces a Blob which is a snapshot of the file
at the moment of call. The advantage here is that it removes the need for sync
operations from Blob and provides a mechanism to ensure that changes to the
underlying storage are detectable. The disadvantage is a bit more complexity
and a bigger change to the File spec.


Re: File API: Blob and underlying file changes.

2010-01-20 Thread Eric Uhrhane
On Wed, Jan 20, 2010 at 1:45 PM, Dmitry Titov  wrote:
> So it seems there are two ideas on how to handle the underlying file changes in
> case of File and Blob objects, nicely captured by Arun above:
> 1. Keep all Blobs 'mutating', following the underlying file change. In
> particular, it means that Blob.size and similar properties may change from
> query to query, reflecting the current file state. In case the Blob was
> sliced and the corresponding portion of the file does not exist anymore, it
> would be clamped, potentially to 0, as currently specified. Read operations
> would simply read the clamped portion. That would provide similar behavior
> for all Blobs regardless of whether they are Files or obtained via slice(). It
> also has the slight disadvantage that every access to Blob.size or
> Blob.slice() will incur synchronous file I/O. Note that the current
> File.fileSize is already implemented like that in FF and WebKit and uses
> sync file I/O.
> 2. Treat Blobs that are Files and Blobs that are produced by slice() as
> semantically different blobs. While the former would 'mutate' with the
> file on disk (to keep compat with form submission), the latter would
> simply 'inherit' the file information and never do sync IO. Instead, they
> would fail later, during async read operations. This has the disadvantage of
> Blobs behaving differently in some cases, making it hard for web developers
> to produce correct code. The synchronous file IO would be reduced but not
> completely eliminated, because the Blobs that are Files would continue to
> 'sync' with the underlying file stats during sync JS calls. One benefit is
> that it allows detection of file content changes, via a modification time
> captured when the first slice() operation is performed and verified
> during async read operations, which provides a way to implement reliable
> file operations in the face of changing files, if the developer wants to
> spend the effort to do so.
>
> It seems folks on the thread do not like the duality of Blobs (hard to
> program and debug), and there is also a desire to avoid synchronous file IO.
> The spec today leans more toward #1. The only problem is that some scenarios
> are hard to implement, like a big file upload in chunks - if the file
> changes, the result of the upload may actually be a mix of new and old file
> contents, with no way to check... Perhaps we can expose
> File.modificationTime? It still does not get rid of sync I/O...

I think it could.  Here's a third option:

Make all blobs, file-based or not, just as async as the blobs in
option 2.  They never do sync IO, but could potentially fail future
read operations if their metadata is out of date [e.g. reading beyond
EOF].  However, expose the modification time on File via an async
method and allow the user to pass it in to a read call to enforce
"fail if changed since this time".  This keeps all file accesses
async, but still allows for chunked uploads without mixing files
accidentally.  If we allow users to refresh the modification time
asynchronously, it also allows for adding a file to a form, changing
the file on disk, and then uploading the new file.  The user would
look up the mod time when starting the upload, rather than when the
file's selected.
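
That read-call shape can be sketched with an invented in-memory store rather than any real File API (makeStore, getModificationTime, and the ifUnchangedSince parameter are all illustrative names):

```javascript
// Toy async model of option 3: the caller looks up the modification
// time once, then passes it to every read; reads reject if the data
// has changed since that time.
function makeStore(initialData) {
  let data = initialData;
  let mtime = 0;
  return {
    write(newData) { data = newData; mtime += 1; },
    async getModificationTime() { return mtime; },
    async read(ifUnchangedSince) {
      if (mtime !== ifUnchangedSince) {
        throw new Error('changed since ' + ifUnchangedSince);
      }
      return data;
    },
  };
}
```

An uploader would call getModificationTime() when the upload starts and pass that value to every chunk read, so a mid-upload change surfaces as a failed read rather than mixed content.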

Eric

> Dmitry
> On Fri, Jan 15, 2010 at 12:10 PM, Dmitry Titov  wrote:
>>
>> On Fri, Jan 15, 2010 at 11:50 AM, Jonas Sicking  wrote:
>>>
>>> This doesn't address the problem that authors are unlikely to even
>>> attempt to deal with this situation, given how rare it is. And even
>>> less likely to deal with it successfully given how hard the situation
>>> is to reproduce while testing.
>>
>> I don't know how rare the case is. It might become less rare if there is
>> an uploader of big movie files and it's easy to overwrite the big movie file
>> by hitting the 'save' button in a movie editor while it is still uploading...
>> Perhaps such an uploader can use other means to detect the file change
>> though...
>> It would be nice to spell out some behavior though, or we can end up with
>> some incompatible implementations. Speaking about Blob.slice(), what is
>> recommended behavior of resultant Blobs on the underlying file change?
>>
>>
>>>
>>> / Jonas
>>
>
>



Re: File API: Blob and underlying file changes.

2010-01-20 Thread Dmitry Titov
So it seems there are two ideas on how to handle the underlying file changes in
case of File and Blob objects, nicely captured by Arun above:

1. Keep all Blobs 'mutating', following the underlying file change. In
particular, it means that Blob.size and similar properties may change from
query to query, reflecting the current file state. In case the Blob was
sliced and the corresponding portion of the file does not exist anymore, it
would be clamped, potentially to 0, as currently specified. Read operations
would simply read the clamped portion. That would provide similar behavior
for all Blobs regardless of whether they are Files or obtained via slice(). It
also has the slight disadvantage that every access to Blob.size or
Blob.slice() will incur synchronous file I/O. Note that the current
File.fileSize is already implemented like that in FF and WebKit and uses
sync file I/O.

2. Treat Blobs that are Files and Blobs that are produced by slice() as
semantically different blobs. While the former would 'mutate' with the
file on disk (to keep compat with form submission), the latter would
simply 'inherit' the file information and never do sync IO. Instead, they
would fail later, during async read operations. This has the disadvantage of
Blobs behaving differently in some cases, making it hard for web developers
to produce correct code. The synchronous file IO would be reduced but not
completely eliminated, because the Blobs that are Files would continue to
'sync' with the underlying file stats during sync JS calls. One benefit is
that it allows detection of file content changes, via a modification time
captured when the first slice() operation is performed and verified
during async read operations, which provides a way to implement reliable
file operations in the face of changing files, if the developer wants to
spend the effort to do so.

It seems folks on the thread do not like the duality of Blobs (hard to
program and debug), and there is also a desire to avoid synchronous file IO.
The spec today leans more toward #1. The only problem is that some scenarios
are hard to implement, like a big file upload in chunks - if the file changes,
the result of the upload may actually be a mix of new and old file contents,
with no way to check... Perhaps we can expose File.modificationTime? It still
does not get rid of sync I/O...

Dmitry

On Fri, Jan 15, 2010 at 12:10 PM, Dmitry Titov  wrote:

> On Fri, Jan 15, 2010 at 11:50 AM, Jonas Sicking  wrote:
>
>>
>> This doesn't address the problem that authors are unlikely to even
>> attempt to deal with this situation, given how rare it is. And even
>> less likely to deal with it successfully given how hard the situation
>> is to reproduce while testing.
>
>
> I don't know how rare the case is. It might become less rare if there is an
> uploader of big movie files and it's easy to overwrite the big movie file by
> hitting the 'save' button in a movie editor while it is still uploading...
> Perhaps such an uploader can use other means to detect the file change
> though...
>
> It would be nice to spell out *some* behavior though, or we can end up
> with some incompatible implementations. Speaking about Blob.slice(), what is
> recommended behavior of resultant Blobs on the underlying file change?
>
>
>
>
>> / Jonas
>>
>
>


[widgets] TWI test case errors

2010-01-20 Thread Scott Wilson

Hi Marcos & Dominique (and other widsters),

I've been implementing our semi-automated testing of Apache Wookie[1]  
for TWI[2], and have spotted a few errors in the test cases[3] you may  
want to fix:


az: URL of widget is incorrect; it should be 
http://dev.w3.org/2006/waf/widgets-api/test-suite/test-cases/ta-az/aa/aa.wgt

aq: OpenURL is called with the same nonsense string used in test "an",  
rather than the URL being tested for


an: Probably should show the gray "tested elsewhere" box as per "aq"  
rather than green, as you still have to check that another browser  
window didn't open


ao: URL of widget is incorrect; it's actually at 
http://dev.w3.org/2006/waf/widgets-api/test-suite/test-cases/ta-pb/ao/ao.wgt 
(so is the test assertion in the wrong location?)


-S

[1] http://incubator.apache.org/wookie/
[2] http://www.w3.org/TR/widgets-apis/
[3] http://dev.w3.org/2006/waf/widgets-api/test-suite/



RE: Seeking pre-LCWD comments for Indexed Database API; deadline February 2

2010-01-20 Thread Adrian Bateman
At Microsoft, we don't believe the spec is quite ready for Last Call. Based on 
our prototyping work, we're preparing some additional feedback that we think is 
more substantive than would be appropriate for Last Call comments. I anticipate 
that we will be able to post this feedback to the working group next Monday 
(25th Jan).

From: public-webapps-requ...@w3.org [mailto:public-webapps-requ...@w3.org] On 
Behalf Of Jonas Sicking
Sent: Tuesday, January 19, 2010 9:48 PM
To: Maciej Stachowiak
Cc: Arthur Barstow; public-webapps; Jeremy Orlow
Subject: Re: Seeking pre-LCWD comments for Indexed Database API; deadline 
February 2


For what it's worth, we are in the same situation at Mozilla.
On Jan 19, 2010 3:40 PM, "Maciej Stachowiak" <m...@apple.com> wrote:

On Jan 19, 2010, at 3:05 PM, Jeremy Orlow wrote: > On Tue, Jan 19, 2010 at 4:50 
AM, Arthur Barstow...
We at Apple are also reviewing the spec and would like additional time 
to review. It doesn't matter that much to us if the review time is before or 
during Last Call, but we definitely can't do a meaningful review by February 2, 
and therefore cannot really sign off by that date on whether the document has 
satisfied relevant technical requirements, is feature-complete, and has all 
issues resolved.

(As far as I can tell the document is less than 4 months old as an Editor's 
Draft and is about 60 pages long, so I hope it is fair to ask for a 
reasonable amount of review time.)

Regards,
Maciej



Re: [selectors-api] comments on Selectors API Level 2

2010-01-20 Thread Andrew Fedoniouk

Daniel Glazman wrote:


   I would recommend dropping the pseudo-class :scope and make a simpler
   model where a fictional :scope pseudo-class and a descendant
   combinator are prepended to all selectors passed as the argument of
   the corresponding APIs.



There are cases where you will need to match only immediate children 
using such a queryScopedSelector() function.


Possible solutions:

element.$("> .child");
element.$(":root > .child");

:root here is the element itself - root of the lookup.
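
The "prepend a fictional :scope plus combinator" model under discussion can be sketched as a plain string rewrite (scopeSelectors is an invented helper; a real engine would do this at the parser level rather than on strings):

```javascript
// Rewrite a comma-separated selector list so every selector is
// implicitly scoped. A selector that already starts with a child or
// sibling combinator (">", "+", "~") attaches it directly to :scope;
// otherwise the implied combinator is the descendant one (a space).
function scopeSelectors(selectorList) {
  return selectorList
    .split(',')
    .map(s => ':scope ' + s.trim())
    .join(', ');
}
```

So element.$('> .child') would effectively run ':scope > .child', matching only immediate children, while element.$('p') becomes ':scope p'.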

BTW: the name of the function queryScopedSelectorAll() has at least one 
technical and one grammatical error. Can we rename it somehow?



--
Andrew Fedoniouk.

http://terrainformatica.com



Re: [selectors-api] comments on Selectors API Level 2

2010-01-20 Thread Doug Schepers

Hi, folks-

Since the Selectors API is so closely tied to CSS Selectors, which may 
affect implementations and the development of the CSS specs, I would 
suggest that there be a closer working relationship between the editors 
of Selectors API and the CSS WG.  It's a sign of poor coordination to see 
emails from people on the CSS WG who are unpleasantly surprised by 
developments in the Selectors API spec.  This requires more than the 
usual inter-group review.


Please let me know how I can help facilitate this.

Regards-
-Doug Schepers
W3C Team Contact, SVG and WebApps WGs


Daniel Glazman wrote (on 1/20/10 2:50 AM):

Hi there.

(this message contains personal comments and does not represent an
official response from the CSS WG)

I have read the recent Selectors API Level 2 draft [1] and have a few
important comments to make:

1. I don't like the idea of refNodes. I think having the APIs specified
at Element level makes it confusing. I would recommend applying the
NodeSelector interface to NodeList instead. If queryScopedSelector()
and queryScopedSelectorAll() are applied to an Element or a NodeList,
the corresponding element(s) are the refNodes of the query.
Same comment for matchesSelector().

2. I am extremely puzzled by the parsing model of scoped selectors. In
particular, I think that the :scope pseudo-class introduces things
that go far beyond scoping. Let's consider the selector ":scope+p".
Clearly, it's _not_ scoped since it queries elements that are outside
of the subtree the context element is the root of. Furthermore, these
elements can be queried without scopes, and I don't see why this is
needed at all!!!
I would recommend dropping the pseudo-class :scope and make a simpler
model where a fictional :scope pseudo-class and a descendant
combinator are prepended to all selectors passed as the argument of
the corresponding APIs.

I don't like the idea that implementors will have to check if the
first sequence of simple selectors in a selector contains or does
not contain a given pseudo-class to prepend something to the context.
This is clearly the kind of things I think we should avoid in
Selectors in general.

3. The section about :scope does not include error handling. What
happens if multiple :scope are present?

4. What's the specificity of that pseudo? Since it's proposed as a
regular and non-fictional pseudo, web authors _can_ use it in
regular stylesheets, even if it's meaningless outside of a scoped
stylesheet. What's the behaviour in that case? What's the
specificity?

[1] http://www.w3.org/TR/selectors-api2/


--
W3C CSS WG, Co-chair





Re: MPEG-U

2010-01-20 Thread Doug Schepers

Hi, Cyril-

Cyril Concolato wrote (on 1/20/10 12:24 AM):


Le 13/01/2010 19:59, Doug Schepers a écrit :


Cyril Concolato wrote (on 1/13/10 10:37 AM):


Yes, you're right, the problem is that liaisons usually are not
considered as public documents so the secretariat or MPEG members are
not allowed to make them public.
...
Anyway, MPEG is meeting next week, I'll
raise your questions and try to have MPEG make a formal answer.


Could you please make sure that the secretariat sends the email to
team-liais...@w3.org, CCing Steven, Mike, and me, as the Team Contacts
for the WebApps WG, and Philippe Le Hegaret as Interaction Domain Lead?
It's not appropriate to email Tim Berners-Lee for liaisons at this
level, though if they insist, I suppose they can include him. We need to
make sure that these liaisons are dealt with in a timely manner.

I would greatly appreciate if you could have the secretariat send an
immediate acknowledgment email to the above email addresses, just to
make sure that the process is understood and accepted, before sending
the liaison itself. Could you please request that right away?

I know you are doing what you can to make sure the communication
channels are clear, so I appreciate your help.


Just for clarification. I don't know yet if the liaison can be sent to a
public mailing list. But if it is possible, is it preferable to send it
directly to the public mailing list or to the list of persons you mentioned?


My personal preference is that the technical discussion happen on 
public-webapps, of course, but that is a dialog that should be carried 
on by people like you, who are in both organizations.


Liaisons from MPEG (and other organizations) tend to be formal documents 
(usually in Word or PDF format), and require formal documents in return, 
so we need to address that in a separate channel, after the technical 
discussion has taken place.  If it is possible to have these liaisons 
sent to the public list as well as the people and lists I mention above, 
please do so; failing that, please ask them to send the liaisons to 
member-webapps as well.


If we need to draw up a Memorandum of Understanding regarding this, 
please let us know.  Thanks again.


Regards-
-Doug Schepers
W3C Team Contact, SVG and WebApps WGs



[widgets] Draft agenda for 21 January 2010 voice conf

2010-01-20 Thread Arthur Barstow
Below is the draft agenda for the January 21 Widgets Voice Conference  
(VC).


Inputs and discussion before the VC on all of the agenda topics via  
public-webapps is encouraged (as it can result in a shortened meeting).


Please address Open/Raised Issues and Open Actions before the meeting:

 http://www.w3.org/2008/webapps/track/products/8

Minutes from the last VC:

 http://www.w3.org/2010/01/07-wam-minutes.html

-Regards, Art Barstow

Logistics:

 Time: 22:00 Tokyo; 16:00 Helsinki; 15:00 Paris; 14:00 London; 09:00  
Boston; 06:00 Seattle

 Duration: 60 minutes max
 Zakim Bridge:+1.617.761.6200, +33.4.89.06.34.99 or +44.117.370.6152
 PIN: 9231 ("WAF1");
 IRC: channel = #wam; irc://irc.w3.org:6665 ; http://cgi.w3.org/member-bin/irc/irc.cgi

 Confidentiality of minutes: Public

Agenda:

1. Review and tweak agenda

2. Announcements

3. Access Requests Policy (WARP) spec
 http://dev.w3.org/2006/waf/widgets-access/

a. Comments from Marcos; Dec 21:
 http://www.w3.org/mid/b21a10670912210706w1d04c972j1c40236c0a864...@mail.gmail.com


b. Comments from Dom; Dec 10:
 http://www.w3.org/mid/1260460310.3355.2561.ca...@localhost

c. Extending access to local network resources; Stephen Jolly (14 Jan):
 http://lists.w3.org/Archives/Public/public-webapps/2010JanMar/0173.html



4. URI Scheme spec - LC comments
 http://dev.w3.org/cvsweb/2006/waf/widgets-uri/
 http://www.w3.org/2006/02/lc-comments-tracker/42538/WD-widgets-uri-20091008/doc/


a. Comments on Widget URI (General) by Larry Masinter 18-Dec-2009:
 http://lists.w3.org/Archives/Public/public-webapps/2009OctDec/1455.html


b. Authorities will never have authority?; Larry Masinter, Jonathan  
Rees, Robin:


 Larry (21-Dec):
 http://lists.w3.org/Archives/Public/www-tag/2009Dec/0112.html
 Robin (11-Jan):
 http://lists.w3.org/Archives/Public/www-tag/2010Jan/0042.html

 Jonathan (23-Dec):
 http://lists.w3.org/Archives/Public/www-tag/2009Dec/0119.html
 Robin (11-Jan):
 http://lists.w3.org/Archives/Public/www-tag/2010Jan/0041.html


5. View Modes Media Features spec:
 http://dev.w3.org/2006/waf/widgets-vmmf/

a. VMMF clarifications by Marcin (14-Jan):
 http://lists.w3.org/Archives/Public/public-webapps/2010JanMar/0170.html



6. AOB

a. Next call: No call on January 27; next call is Feb 4.





Re: A Method for Writing Testable Conformance Clauses and its Applications (Was Re: Write up of test assertion extraction methodology)

2010-01-20 Thread Scott Wilson

Hi Marcos,

I think this is a really good piece of work - I'll be pointing a few  
people from other spec orgs at the draft, as it's addressing a common  
requirement.


(As an implementer I found the approach - especially the  
implementation reports - really useful and easy to follow in practice.)


S

On 19 Jan 2010, at 15:49, Marcos Caceres wrote:


Hi all,
A draft of "A Method for Writing Testable Conformance Clauses and  
its Applications" is now available for review online [1]. For those  
that have not seen it, it basically just documents how we are  
standardizing the Widget specs and some basic QA processes:


http://dev.w3.org/2008/dev-ind-testing/extracting-test-assertions-pub.html

Please consider this a working draft, as it likely contains typos,  
and a couple of half-baked ideas, etc. Comments are, of course,  
welcomed. It is expected that this document will be published as a  
working group note at some point in the future.


Kind regards,
Marcos

Marcos Caceres wrote:



Dominique Hazael-Massieux wrote:

Hi Marcos,

On Tuesday, 5 January 2010 at 17:45 +0100, Dominique Hazael-Massieux wrote:

On Tuesday, 5 January 2010 at 17:44 +0100, Marcos Caceres wrote:
I was literally doing an editorial pass right now. I would appreciate
another day or two to finish (and for you and the WG to have a chance to
review the changes). If I check-in a draft by Thursday, could we aim to
publish next week?

Sure, sounds good to me. Thanks for your help on this!


Any news on your editing pass :) ?


Sorry, I'm still working on it... it's taking a little longer than I
first anticipated :( I've rewritten most of it to describe a bit more
clearly how the method was applied.








Re: A Method for Writing Testable Conformance Clauses and its Applications (Was Re: Write up of test assertion extraction methodology)

2010-01-20 Thread Marcos Caceres



On Jan 20, 2010, at 11:31 AM, Dominique Hazael-Massieux wrote:



Hi Marcos,

On Tuesday, 19 January 2010 at 16:49 +0100, Marcos Caceres wrote:

A draft of "A Method for Writing Testable Conformance Clauses and its
Applications" is now available for review online [1]. For those that
have not seen it, it basically just documents how we are standardizing
the Widget specs and some basic QA processes:
http://dev.w3.org/2008/dev-ind-testing/extracting-test-assertions-pub.html


Thanks for your thorough edits to the original document!

I’ve brought my own set of corrections [1] to the draft (including the
errors and possible improvements pointed out by Doug); the most visible
change I made was to replace “conformance clause” with “conformance
requirement” to be consistent with the usual vocabulary in that space.

Once Wilhelm and Dmitri have sent their reviews (and assuming they don’t
show any blockers), I’ll move ahead with requesting publication as a First
Public Working Group Note.

Dom (for MWI test suites ACTION-114
http://www.w3.org/2005/MWI/Tests/track/actions/114)

1.
http://dev.w3.org/cvsweb/2008/dev-ind-testing/extracting-test-assertions-pub.html.diff?r1=1.27&r2=1.51&f=h



Great! The changes look good. Thanks for integrating Doug's suggestions.


Re: A Method for Writing Testable Conformance Clauses and its Applications (Was Re: Write up of test assertion extraction methodology)

2010-01-20 Thread Dominique Hazael-Massieux
Hi Marcos,

On Tuesday, 19 January 2010 at 16:49 +0100, Marcos Caceres wrote:
> A draft of "A Method for Writing Testable Conformance Clauses and its 
> Applications" is now available for review online [1]. For those that 
> have not seen it, it basically just documents how we are standardizing 
> the Widget specs and some basic QA processes:
> http://dev.w3.org/2008/dev-ind-testing/extracting-test-assertions-pub.html

Thanks for your thorough edits to the original document!

I’ve brought my own set of corrections [1] to the draft (including the
errors and possible improvements pointed out by Doug); the most visible
change I made was to replace “conformance clause” with “conformance
requirement” to be consistent with the usual vocabulary in that space.

Once Wilhelm and Dmitri have sent their reviews (and assuming they don’t
show any blockers), I’ll move ahead with requesting publication as a First
Public Working Group Note.

Dom (for MWI test suites ACTION-114
http://www.w3.org/2005/MWI/Tests/track/actions/114)

1.
http://dev.w3.org/cvsweb/2008/dev-ind-testing/extracting-test-assertions-pub.html.diff?r1=1.27&r2=1.51&f=h