Re: [whatwg] RDFa is to structured data, like canvas is to bitmap and SVG is to vector

2009-02-03 Thread Giovanni Gentili
Calogero Alex Baldacchino wrote:
> It seems that you'd expect RDFa to be specced out before solving related
> problems (so as to push their solution). I don't think that's the right path
> to follow; instead, known issues must be solved before making a decision, so
> that the specification can tell exactly what developers must implement

I think that some help in defining the requirements around
structured data, RDFa, metadata copy&paste, semantic links [1], etc.
could come from the W3C document "Use Cases and Requirements
for Ontology and API for Media Object 1.0" [2].

Take the requirements listed from "r01" to "r13" and replace
the term "media objects" with "structured/linked data".

[1] http://lists.w3.org/Archives/Public/public-html/2009Jan/0082.html
[2] http://www.w3.org/TR/2009/WD-media-annot-reqs-20090119/#req-r01
-- 
Giovanni Gentili


Re: [whatwg] RDFa is to structured data, like canvas is to bitmap and SVG is to vector

2009-02-03 Thread Robert Sayre
RDFa should sink or swim on its own merits, and if RDFa requires
drastic changes to HTML, it is probably broken. Let the compelling
benefits of RDFa pave the way to implementations, and then standardize
based on experience with those.

RDFa should not be blessed by HTML, and the HTML spec should adopt a
similar stance to all new features. For example, I would be very
surprised to see Web Sockets fail on its own, since the benefits seem
clear. But I could be wrong, and it should face a survival test.

-- 

Robert Sayre

"I would have written a shorter letter, but I did not have the time."


Re: [whatwg] Adding resourceless media to document causes error event

2009-02-03 Thread Calogero Alex Baldacchino

Chris Pearce wrote:
I need to clarify something about the media load() algorithm [ 
http://www.whatwg.org/specs/web-apps/current-work/multipage/video.html#dom-media-load 
]


My reading of the spec is that if you have a media element with no src 
attribute or source element children (e.g. a bare <video> element) and you 
insert it into a document, then the media load() algorithm will be 
implicitly invoked, and because the list of potential media resources 
is empty, that algorithm will immediately fall through to the "failure 
step" (step 12), causing an error progress event to be dispatched to 
the media element.


My question is:

Is it really necessary to invoke the load algorithm when adding a 
media element with no src/sources to a document? Doing so just causes 
an error progress event dispatch; we've not exactly failed to load 
anything - indeed, we've not even tried to load anything in this case.



Thanks,
Chris Pearce.


Maybe an attribute such as "enabled" (or "disabled") or "defer" or the 
like could be helpful, so that any operations can be frozen/deferred 
whenever a DOM manipulation is needed, without worrying about possible 
consequences. (Unless the video is being played, in which case I think 
that either manipulations should be avoided, and/or their effects 
ignored/deferred, or the playback should be explicitly stopped, e.g. by 
calling a stop() method, to force the immediate execution of a scheduled 
evaluation of new sources, possibly after the UA has asked the user 
for permission -- I'd prefer this approach to any explicit 
invocation of the load() method, which could instead be a routine 
invoked by a task engine under certain conditions.)


WBR, Alex




[whatwg] Adding resourceless media to document causes error event

2009-02-03 Thread Chris Pearce
I need to clarify something about the media load() algorithm [ 
http://www.whatwg.org/specs/web-apps/current-work/multipage/video.html#dom-media-load 
]


My reading of the spec is that if you have a media element with no src 
attribute or source element children (e.g. a bare <video> element) and you 
insert it into a document, then the media load() algorithm will be 
implicitly invoked, and because the list of potential media resources is 
empty, that algorithm will immediately fall through to the "failure 
step" (step 12), causing an error progress event to be dispatched to the 
media element.


My question is:

Is it really necessary to invoke the load algorithm when adding a media 
element with no src/sources to a document? Doing so just causes an error 
progress event dispatch; we've not exactly failed to load anything - 
indeed, we've not even tried to load anything in this case.
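For what it's worth, the scenario can be sketched with plain objects standing in for media elements (this is only an illustration of the reading above, not the spec's actual algorithm; the property names are invented):

```javascript
// Hedged sketch of the resource-selection logic under discussion,
// using a plain object in place of a real media element.
function selectMediaResource(mediaEl) {
  // Build the list of potential media resources: the src attribute,
  // if present, plus any <source> children.
  const candidates = [];
  if (mediaEl.src) candidates.push(mediaEl.src);
  for (const child of mediaEl.sources || []) candidates.push(child);
  // Per the reading above, an empty list falls straight through to
  // the failure step, which dispatches an "error" progress event.
  if (candidates.length === 0) return 'error';
  return 'begin-fetch';
}

console.log(selectMediaResource({}));                      // no src, no sources
console.log(selectMediaResource({ src: 'movie.ogv' }));
```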



Thanks,
Chris Pearce.


Re: [whatwg] RDFa is to structured data, like canvas is to bitmap and SVG is to vector

2009-02-03 Thread Calogero Alex Baldacchino

Shelley Powers wrote:



The point I'm making is that you set a precedent, and a good one I 
think: giving precedence to "not invented here". In other words, to 
not re-invent new ways of doing something, but to look for established 
processes, models, et al already in place, implemented, vetted, etc, 
that solve specific problems. Now that you have accepted a use case, 
Martin's, and we've established that RDFa solves the problem 
associated with the use case, the issue then becomes *is there another 
data model already as vetted, documented, implemented that would 
better solve the problem*.




RDF in a separate XML-syntax file, perhaps. If only because that use case 
raised a privacy concern about information that should be kept private 
anyway, and that's not a problem solvable at the document level with 
metadata; instead, keeping the relevant metadata in a separate file would 
allow better access control. Also, a separate file would have the relevant 
information ready for use, while embedding it with other content 
would force loading and parsing that content in search of the 
relevant metadata (possible, of course, and not much of a problem, but 
not as clean or efficient).


Moreover, it should be verified whether social-network service providers 
would agree to such a requirement: I might use a compliant 
implementation to easily migrate from one service to another and leave 
the former, in which case why should a company open its internal 
infrastructure and database, and invest resources, for the benefit of a 
competitor accessing its data and consuming its bandwidth to poach its 
customers? (This is not the same interoperability issue as mail clients 
supporting different address book formats; minor vendors had to do that 
to improve their business - and they didn't need to access a 
competitor's infrastructure.)


Perhaps that might work if personal info and relationships were 
handled by an external service, along the same lines as an OpenID service 
allowing automated identification by other services; but this would 
reduce social networks to a kind of front-end for such centralized 
management (and service providers might not like that). Also, in this 
case anonymity should be ensured (for instance, I might have met you in 
two different networks, but known your identity in only one of them, and 
you might wish that no one knew you're the person behind the other 
nickname; this is possible by keeping different information in different 
databases with different access rights, and should be replicable 
when merging such info -- on the other hand, if you knew my identity, 
you should be allowed to "fill in the blanks" somehow).


Shelley Powers wrote:

Anne van Kesteren wrote:
On Sun, 18 Jan 2009 17:15:34 +0100, Shelley Powers 
 wrote:
And regardless of the fact that I jumped to conclusions about WhatWG 
membership, I do not believe I was inaccurate with the earlier part 
of this email. Sam started a new thread in the discussion about the 
issues of namespace and how, perhaps we could find a way to work the 
issues through with RDFa. My god, I use RDFa in my pages, and they 
load fine with any browser, including IE. I have to believe its 
incorporation into HTML5 is not the daunting effort that others make 
it seem to be.


You ask us to take you seriously and consider your feedback; it would 
be nice if you took what e.g. Henri wrote seriously as well. 
Integrating a new feature in HTML is not a simple task, even if the 
new feature loads and renders fine in Internet Explorer.



Take you guys seriously...OK, yeah.

I don't doubt that the work will be challenging, or problematical. I'm 
not denying Henri's claim. And I didn't claim to be the one who would 
necessarily come up with the solutions, either, but that I would help 
in those instances that I could. 


It seems that you'd expect RDFa to be specced out before solving related 
problems (so as to push their solution). I don't think that's the right 
path to follow; instead, known issues must be solved before making a 
decision, so that the specification can tell exactly what developers 
must implement, possibly pointing out (after/while implementing) newer 
(hopefully minor) issues to be solved by refining the spec (which is a 
different task than specifying something known to be, let's say, "buggy" 
or uncertain).



Everything, as always, IMHO

WBR, Alex






Re: [whatwg] embedding meta data for copy/paste usages - possible use case for RDF-in-HTML?

2009-02-03 Thread Calogero Alex Baldacchino

Hallvord R M Steen wrote:
  

HTML5 already contains elements that can be used to help obtain this
information, such as the ,  and its associated heading 
to  and .  Obtaining author names might be a little more
difficult, though perhaps hCard might help.



Indeed. And it's not an either-or counter-suggestion to my proposal,
UAs could fall back to extracting such data if more structured meta
data is not available.

  


I think that's a counter-suggestion, actually. If UAs can gather enough 
information from existing markup, they don't need to support further 
metadata processing; if authors can put enough information in a page 
using existing markup (or markup being introduced in the current 
specification), they don't need to learn and use additional metadata to 
repeat the same information. It seems that any additional 
metadata-related markup would add complexity to UAs (requiring support) 
but no advantages (over existing solutions) in this case.


Therefore, the question moves to the format used to carry such info to 
the clipboard, which is a different concern than embedding metadata in a 
page. Also, different use cases should lead to different formats (with 
different kinds of information kept apart in different clipboard 
entries, or bound in a sort of multipart envelope to be serialized in 
just one entry), because a generic format addressing a lot of use 
cases could seem overengineered to developers dealing with a specific 
use case; thus a specific format could gain support in other 
applications more easily --- third-party developers could find it easier 
and more consistent to access the right info in the right 
format, either by looking for a specific entry (if supported by the OS), 
or by parsing a few headers in a multipart entry looking for an offset 
associated with a MIME type (which would work without requiring support 
from the OS, though an OS could provide facilities for direct access to the 
proper section anyway; however, any support for multiple kinds of info 
should be in scope for the OS clipboard API and/or the UA, not for a 
specific application requiring specific data - and, given the above, 
that should not be in scope for HTML5).
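As a hedged illustration of the multipart-envelope idea (the boundary and header syntax below are invented for the sketch, not any real OS clipboard format):

```javascript
// Hypothetical multipart clipboard envelope: one entry holding several
// representations, each introduced by a Content-Type header, plus a
// helper that finds the part for a given MIME type.
const BOUNDARY = '--clip-part';

function buildEnvelope(parts) {
  // parts: array of { type, body }
  return parts.map(p => `${BOUNDARY}\nContent-Type: ${p.type}\n\n${p.body}`)
              .join('\n') + `\n${BOUNDARY}--`;
}

function findPart(envelope, mimeType) {
  // Scan each chunk's header line for the requested MIME type.
  for (const chunk of envelope.split(BOUNDARY)) {
    const m = chunk.match(/^\nContent-Type: ([^\n]+)\n\n([\s\S]*?)\n?$/);
    if (m && m[1] === mimeType) return m[2];
  }
  return null;
}

const envelope = buildEnvelope([
  { type: 'text/plain', body: 'Alice Example' },
  { type: 'text/vcard', body: 'BEGIN:VCARD\nFN:Alice Example\nEND:VCARD' },
]);
console.log(findPart(envelope, 'text/vcard'));
```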



If I copy the name of one of my Facebook "friends" and paste it into
my OS address book, it would be cool if the contact information was
imported automatically. Or maybe I pasted it in my webmail's address
book feature, and the same import operation happened..
  

I believe this problem is adequately addressed by the hCard microformat and
various browser extensions that are available for some browsers, like
Firefox.  The solution doesn't need to involve a copy and paste operation.
 It just needs a way to select contact info on the page and export it to an
address book.



This is way more complicated for most users. Your last sentence IMO is
not an appropriate way to use the word "just", seeing that you need to
find and invoke an "export" command, handle files, find and invoke an
"import" command and clear out the duplicated entries.. This is
impossible for several users I can think of, and even for techies like
us doing so repeatedly will eventually be a chore (even if we CAN, it
doesn't mean that's the way we SHOULD be working).
  


It can be improved, but it's the _best_ way to do that, and should be 
replicated in the "copy-and-paste" architecture you're proposing. 
Please consider that a basic usability principle says users should be able 
to understand what's going on based on previous experience (that is, an 
interface has to be predictable); but users aren't, in general, confident 
copying and pasting anything other than text, thus a 
UA should distinguish between a bare "copy" option and more specific 
actions (such as "copy as quotation", "copy contact info", and so on), 
and related paste options (as needed), so that users can understand and 
choose what they want to do.


On the other hand, the same should happen in a recipient application, 
especially one supporting different kinds of info; if either the 
UA or the recipient application (or both) provided only a simple copy and a 
simple paste option (or fewer options than supported, based on metadata 
or common markup), it could be confusing for users; nor should 
applications use metadata to choose what to do, because the user might 
just want to copy and paste some text (or do something else - but he 
knows what, so he must be free to choose it).


That is, what you're proposing is mainly addressed by moving 
import/export features into a context menu and making them 
work on a selection of text (not eliminating them and substituting 
a "simpler" copy-paste architecture), then requiring support from other 
applications and ultimately from the operating system, which is definitely 
out of scope for any web-related standard (we can constrain web-related 
applications to improve their interoperability with respect to 



Re: [whatwg] What RDF does Re: Trying to work out...

2009-02-03 Thread Calogero Alex Baldacchino

Charles McCathieNevile wrote:
On Fri, 09 Jan 2009 12:54:08 +1100, Calogero Alex Baldacchino 
 wrote:


I admit I'm not very expert in RDF use, thus I have a few questions. 
Specifically, maybe I can guess the advantages when using the same 
(carefully modelled, and well-known) vocabulary/ies; but when two 
organizations develop their own vocabularies, similar yet different, 
to model the same kind of information, is merging of data enough? 
Can a processor give more than a collection of triples, to be then 
interpreted based on knowledge of the vocabulary/ies used?


RDF consists of several parts. One of the key parts explains how to 
make an RDF vocabulary self-describing in terms of other vocabularies.


 I mean, I assume my tools can extract RDF(a) data from whatever 
document, but my query interface is based on my own vocabulary: when 
I merge information from an external vocabulary, do I need to 
translate one vocabulary to the other (or at least to modify the 
query backend, so that certain CURIEs are recognized as representing 
the same concepts - e.g. to tell my software that 'foaf:name' and 
'ex:someone' are equivalent, for my purposes)? If so, merging data 
might be the minor part of the work I need to do, compared with 
non-RDF(a) metadata (that is, I'd have tools to extract and merge 
data anyway, and once I had translated the external metadata to my 
format, I could use my own tools to merge data), especially if the 
same model is used both by my organization and an external one 
(therefore requiring an easier translation).


If a vocabulary is described, then you can do an automated translation 
from one RDF vocabulary to another by using your original query based 
in your original vocabulary. This is one of the strengths of RDF.
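A minimal sketch of that translate-then-merge step, with triples as plain [subject, predicate, object] arrays and an equivalence map standing in for a real vocabulary description (the foaf:name/ex:someone pairing is just the example above; everything else is invented for illustration):

```javascript
// Map foreign predicates onto the local vocabulary.
const equivalences = { 'ex:someone': 'foaf:name' };  // ex:someone ≡ foaf:name

function normalize(triples) {
  // Rewrite foreign predicates into local ones before merging.
  return triples.map(([s, p, o]) => [s, equivalences[p] || p, o]);
}

function merge(localTriples, foreignTriples) {
  // Deduplicate: a triple already present locally is not added twice.
  const seen = new Set(localTriples.map(t => t.join('\u0000')));
  const merged = localTriples.slice();
  for (const t of normalize(foreignTriples)) {
    const key = t.join('\u0000');
    if (!seen.has(key)) { seen.add(key); merged.push(t); }
  }
  return merged;
}

const mine = [['#alice', 'foaf:name', 'Alice']];
const theirs = [['#bob', 'ex:someone', 'Bob'],
                ['#alice', 'ex:someone', 'Alice']];  // duplicate after translation
console.log(merge(mine, theirs));
```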




Certainly, this is a strong benefit. However, when comparing different 
vocabularies against their basic descriptions (if any), I guess there 
may be a chance of finding vocabularies which are not described in terms of 
each other, or of a third common vocabulary, so a translation might be 
needed anyway. This might be true for small-time users developing a 
vocabulary for internal use before starting an external partnership, or 
regardless of the partnership (sometimes small-time users may find it 
easier/faster to "reinvent the wheel" and modify it to address evolving 
problems; someone might be unable to afford an extensive 
investigation to find an existing vocabulary fulfilling his requirements, 
or to develop a new one in conjunction with a partner having similar 
but slightly different needs, potentially leading to a longer 
process of mediating respective needs. In such a case, I wouldn't expect 
such a person to look for existing, more generic vocabularies 
which can describe the new one in order to ensure the widest possible 
interchange of data - that is, until a requirement for interchange 
arises, designing a vocabulary for that might be overengineering, 
and once the requirement is met, addressing it with a translation or 
with a description in terms of a vocabulary known to be involved (each 
time the problem recurs) might be easier/faster than engineering a 
good description once and for all).


Anyway, let's assume we're going to deal with well-described 
vocabularies. Is the automated translation a task of a parser/processor 
creating a graph of triples, or a task of a query backend? And what are 
the requirements for a UA, from this perspective? Must it just parse the 
triples and create a graph or also take care of a vocabulary 
description? Must it be a complete query backend? Must it also provide a 
query interface? How much basic or advanced must the interface be? I 
think we should answer questions like this, and try and figure out 
possible problems arising with each answer and possible related 
solutions, because the concern here should be what UAs must do with RDF 
embedded in a non-RDF (and non-XML) document.


 Thus, I'm thinking the most valuable benefit of using RDF/RDFa is 
the certainty that both parties are using the very same data model, 
despite the possible use of different vocabularies -- it seems to me 
that the concept of triples consisting of a subject, a predicate and 
an object is somehow similar to a many-to-many association in a 
database, whereas one might prefer a one-to-many approach - though 
the former might be a natural choice to model data which are usually 
sparse, as in document prose.


I don't see the analogy, but yes, I think the big benefit is being 
able to ensure that you know the data model without knowing the 
vocabulary a priori - since this is sufficient to automate the process 
of merging data into your model.




I understand the benefit with respect to well-known and/or 
well-described vocabularies, but I wonder whether an average small-time 
user would produce a well-described or a very custom vocabulary. In the 
latter case, a good knowledge of a foreign vocab

Re: [whatwg] Trying to work out the problems solved by RDFa

2009-02-03 Thread Calogero Alex Baldacchino

Toby A Inkster wrote:


Another reason the Microformat experience suggests new attributes are 
needed for semantics is the overloading of an attribute (class) 
previously mainly used for private convention so that it is now used 
for public consumption.


Maybe this is true, but, personally, I prefer this approach to the 
addition of new features/attributes/elements to an official 
specification without a clear support requirement for UAs beyond just 
parsing. A similar (if not stronger) argument may be raised against the 
reuse of the content attribute in the context of RDFa, which I think has 
caused a significant change with respect to its original semantics (now 
it should be shared by every element; originally it was a <meta>-specific 
attribute. Now it should be part of an RDF _triple_; originally 
it was - and still is - part of a _pair_ when used in conjunction with 
the "name" attribute, and constitutes a pragma directive in conjunction 
with the "http-equiv" attribute, which is somehow closer to an XML 
processing instruction than to an RDF triple - the same applies to a 
<link> with rel="stylesheet", for instance).


Yes, in real life, there are pages that use class="vcard" for things 
other than encoding hCard. (They mostly use it for linking to VCF 
files.) Incredibly, I've even come across pages that use class="vcard" 
for non-hCard uses, *and* hCard - yes, on the same page! As the 
Microformat/POSHformat space becomes more crowded, accidental 
collisions in class names become ever more likely.




Indeed, that's a possible source of trouble. I think it's the same if 
people misuse prefixes, e.g. if after merging some content from 
different documents they got a different namespace bound to a 
previously declared prefix, in a scope where both namespaces are involved 
(in an XHTML document). Also, a custom script may distinguish between 
different uses of "vcard" by means of a further, private class name, 
or by enveloping elements in containers (divs) with proper ids, which 
may be a good solution in some cases and not in others; a more 
generic parser, being specialized by design, has a chance to recognize 
the correct structure for a given format and to discard wrong information, 
which may work fine in some cases, but not in others. As always, each 
choice has its own downsides, and what counts is the cost/benefit 
ratio; it seems that any solution not requiring support has the 
lowest costs for UA implementors.
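A sketch of such a structure-aware check (elements are modelled here as plain objects, purely as an assumption for illustration; a real hCard parser would of course work on the DOM and check more required properties):

```javascript
// Distinguish a plausible hCard root (class="vcard" containing an "fn"
// property) from an accidental reuse of the "vcard" class name.
function hasClass(el, name) {
  return (el.classes || []).includes(name);
}

function containsClass(el, name) {
  // Recursive search through the element's descendants.
  return (el.children || []).some(c => hasClass(c, name) || containsClass(c, name));
}

function looksLikeHCard(el) {
  return hasClass(el, 'vcard') && containsClass(el, 'fn');
}

const realCard = { classes: ['vcard'],
                   children: [{ classes: ['fn'], text: 'Alice' }] };
const collision = { classes: ['vcard'],  // e.g. a styled link to a .vcf file
                    children: [] };
console.log(looksLikeHCard(realCard), looksLikeHCard(collision));
```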


I do not doubt that XML extensibility (which is effectively the basis of 
CURIEs) has its own benefits; it's flexible and suitable for quick 
development of custom solutions. But it's also got its own downsides, 
such as leading to possibly heavy fragmentation, and being difficult to 
understand and use for many people (who are usually confused by the 
concept of namespaces), thus potentially causing misuse and errors. 
It doesn't seem that XML extensibility has brought more benefits than 
costs, and proof can be seen in the majority of the web not having 
followed the envisioned XML-like evolution.


Anyway, I'm not strongly against RDFa in HTML; rather, I can be quite 
neutral (I'd live with it). I'm not convinced it is worth adding it to 
the spec at this stage, until it is possible to establish what 
UAs must do with it besides parsing (and how to deal with namespaces 
while parsing). Also, I'm not fully convinced by the need to embed 
metadata in a page and keep it in sync with that page. For instance, 
it requires that every page reporting the same information 
duplicate the same metadata structure, and this doesn't guarantee that the 
information, in the first place, is in sync with the real world (some pages 
might be out of date, others might be up to date). Instead, a separate 
file containing metadata, to be linked when appropriate, might solve both 
problems: it doesn't require duplicates, and it can have some sort of 
versioning to keep track of changes and to present updated 
machine-friendly information to help users visiting an outdated page 
(assuming users can trust those metadata). Of course, this solution has 
its own downsides too.


WBR, Alex





Re: [whatwg] Trying to work out the problems solved by RDFa

2009-02-03 Thread Calogero Alex Baldacchino

Benjamin Hawkes-Lewis wrote:

On 12/1/09 20:26, Calogero Alex Baldacchino wrote:

I just mean that, as far as I know, there is no official standard
requiring UAs to support (parse and expose through the DOM) attributes
and elements which are not part of the HTML language but are found in
text/html documents.


Perhaps, but then prior to HTML5, much of what practical user agents 
must do with HTML has not been required by any official standard. ;)


RFC 2854 does say that "Due to the long and distributed development of 
HTML, current practice on the Internet includes a wide variety of HTML 
variants. Implementors of text/html interpreters must be prepared to 
be 'bug-compatible' with popular browsers in order to work with many 
HTML documents available on the Internet."


http://tools.ietf.org/html/rfc2854

HTML 4.01 does recommend that "[i]f a user agent encounters an element 
it does not recognize, it should try to render the element's content" 
and "[i]f a user agent encounters an attribute it does not recognize, 
it should ignore the entire attribute specification (i.e., the 
attribute and its value)".


http://www.w3.org/TR/html401/appendix/notes.html#h-B.3.2

Clearly these suggestions are incompatible with respect to attributes; 
AFAIK all popular UAs insert unrecognized attributes into the DOM and 
plenty of web content depends on that behaviour.




Very, very true. HTML 4.01 also says the recommended behaviours are meant 
"to facilitate experimentation and interoperability between 
implementations of various versions of HTML", whereas the "specification 
does not define how conforming user agents handle general error 
conditions, including how user agents behave when they encounter 
elements, attributes, attribute values, or entities not specified in 
this document", and since "user agents may vary in how they handle error 
conditions, authors and users must not rely on specific error recovery 
behavior". I just think the last sentence defines a best practice 
everyone should follow instead of relying on a common quirk supporting 
invalid markup. However, good or bad practice aside, there will always 
be authors doing whatever they please, so it is quite safe to assume 
UAs will always expose invalid/unrecognized attributes (that's 
unavoidable, given the need for backward compatibility).




Just like proprietary elements/attributes introduced with user agent 
behaviours (marquee, autocomplete, canvas), scripted uses of "data-*" 
might suggest new features to be added to HTML, which would then 
become requirements for UAs.


But unlike proprietary elements/attributes introduced with user agent 
behaviors, scripted uses of "data-*" do not impose new processing 
requirements on UAs.


Therefore, unlike proprietary elements/attributes introduced with user 
agent behaviors, scripted uses of "data-*" impose _no_ design 
constraints on new features.


Establishing user agent behaviours with "data-*" attributes, on the 
other hand, imposes almost as many design constraints as establishing 
them with proprietary elements and attributes. (There's just less 
pollution of the primary HTML "namespace".)


If no RDFa was in deployment, you could argue it would be less wrong 
(from this perspective) to abuse "data-*" than introduce new attributes.


Oh, well, I don't want to argue about that. For me the idea of using 
"data-rdfa-*" can rest in peace, since in practice it's no different 
from using RDFa attributes as they are, at least as far as they're 
handled by scripts, either client- or server-side. However I think that,


* it actually seems unclear what UAs not involved in a particular 
project should do with RDFa attributes, besides exposing their content 
for elaboration by scripts; a precise behaviour should be defined, any 
class of UAs not required to support it should be clearly identified, 
and any caveats on possible problems and their solutions spelled out, 
before introducing new elements/attributes in a formal specification;


* actual deployment might be harmed by the use of XML namespaces in the 
HTML serialization.


Also, I see design suggestions more than impositions. If a new (and 
proprietary/private) attribute/element/convention is convincingly 
useful/needed, it gets supported by other UAs and introduced in a 
specification; otherwise, if the number of pages that would break is 
not significant, it might even be redefined with different semantics. 
A possible process involving data-* attributes could be: experiment 
privately => extend the scale to other people finding it useful for 
their needs => get it into the primary namespace of an official 
specification (discarding the "data-" prefix and any other useless 
parts of the experimental name), so that existing 
pages may still work with their custom scripts or easily migrate to the 
new standard (and benefit from the new default support) by run
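
The data-*/dataset name mapping that makes this kind of private 
experimentation easy to script can be sketched outside the browser as a 
plain function (extractCustomData is an illustrative name, not a DOM 
API; the camel-casing rule is the one HTML 5 defines for element.dataset):

```javascript
// Sketch of the HTML5 data-* -> dataset name conversion:
// strip the leading "data-", then camel-case each "-x" pair.
// extractCustomData is an illustrative helper, not a standard API.
function extractCustomData(attributes) {
  const dataset = {};
  for (const [name, value] of Object.entries(attributes)) {
    if (!name.startsWith('data-')) continue;            // only custom data attributes
    const key = name
      .slice('data-'.length)
      .replace(/-([a-z])/g, (_, ch) => ch.toUpperCase());
    dataset[key] = value;
  }
  return dataset;
}

// A page experimenting privately with an RDFa-like vocabulary:
const attrs = { 'data-rdfa-about': '#book', 'data-rdfa-property': 'dc:title', class: 'x' };
console.log(extractCustomData(attrs));
// -> { rdfaAbout: '#book', rdfaProperty: 'dc:title' }
```

If such an experiment later graduated into the spec, pages could keep 
their custom scripts while the "data-rdfa-" prefix was dropped.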

Re: [whatwg] Adding and removing media source elements

2009-02-03 Thread Calogero Alex Baldacchino

Philip Jägenstedt wrote:

On Tue, 03 Feb 2009 05:44:07 +0100, Ian Hickson  wrote:


On Tue, 3 Feb 2009, Chris Pearce wrote:


(2) Why don't we invoke load() whenever a media element's src attribute
or <source> children are changed, regardless of networkState? That way
changes to the media's src/source other than the first change would have
the same effect as the first change, i.e. they'd have an immediate effect,
causing load() to be invoked.


Doing this would cause the first file to be downloaded multiple times in a
row, leading to excessive network usage.


Surely this can't be the only reason? User agents are free to 
speculatively keep the current source loading when src/source changes 
and to stop loading it only if the "current media resource" does 
change. That, and caching, should be enough.


I have always imagined that the reason for the conditioned load() is 
to not interrupt playback by fiddling with the DOM or doing something 
like v.src=v.src (although I'm quite sure that doesn't count as 
changing the attribute). However, now I can't convince myself that 
this makes any sense, since surely if you change src/source you 
actually do want to change the effective source (and load() is 
scheduled to run after the current script, so there's no risk of it 
being run too early).


Doing the same with a script element can cause the script to be 
re-fetched and re-executed in some browsers, so I think there is a 
concrete chance of finding the same behaviour for videos, and the spec has 
to say when the load is allowed (or, at least, when it should not 
happen). I'm not sure that every change to the effective source should 
take place; for instance, changing it (through the DOM) after playback 
has already started might not be very usable and should be avoided, 
therefore any such attempt should be ignored/aborted (possibly with 
an exception) after playback starts and until it ends or is explicitly 
stopped (by the user or by a script, so as to encourage programmers to 
check the state of the playback before taking any action).


Also, scheduling the load "after the current script" might not solve the 
whole problem: changes to the video may happen through an event 
handler, and therefore through different scripts, so I think it could be 
helpful to allow a script to freeze (or revert) ongoing operations (as 
well as the video interface) except playback (if already started), so 
as to (try to) ensure (somehow) that any dynamic changes can be performed 
without bothering the user, or are disallowed otherwise.


(what for? I'm considering the (maybe edge) case of a dynamic update of 
a video source, for instance when a different/better source (higher 
quality, or with a more appropriate translation) becomes available, or 
for any other reason (e.g. the complete list of available sources might 
be streamed as a sequence of remote events for an immediate update and 
a deferred/repeated playback); but if the current source is being played 
it might not make sense to stop it and replace it with a different one, 
possibly restarting from the beginning, because that may be annoying for 
users).
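
That conservative policy is easy to enforce from script; a sketch, using 
only the standard paused/ended media-element properties (the helper name 
and the deferral strategy are illustrative, not anything from the spec):

```javascript
// Swap in a better source only when nothing would be interrupted:
// either playback never started (paused) or it has finished (ended).
// Written against plain properties so it also runs on stub objects.
function maybeUpgradeSource(video, betterUrl) {
  if (video.paused || video.ended) {
    video.src = betterUrl;
    video.load();           // explicitly re-run resource selection
    return true;
  }
  return false;             // defer: leave the current playback alone
}

// Works with any object exposing the relevant media-element fields:
const demo = { paused: true, ended: false, src: 'low.webm', load() {} };
maybeUpgradeSource(demo, 'high.webm');
console.log(demo.src); // -> 'high.webm'
```

A caller getting false back could retry from an ended/pause event 
handler instead of interrupting the user.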




Related, since load() is async it depends on timing whether or not

   <video id="v"></video>
   <script>
v = document.getElementById('v');
v.src = 'test';
   </script>

causes the source 'test' to be loaded, as the network state may not be 
NETWORK_EMPTY when the src attribute is set. The same goes for adding 
source child elements of course. Yes, this is the same issue as 
http://lists.w3.org/Archives/Public/public-html/2009Jan/0103.html and 
would be resolved by calling load() unconditionally.


Or checking the network state to choose whether to call load() 
explicitly; however, due to its asynchronous nature, that might cause a 
double invocation (depending on implementations), or similar problems. 
Perhaps the load() method should leave the network state unchanged 
(NETWORK_EMPTY in this case), or revert it to a previous value, whenever 
the method fails to choose a candidate (e.g. because there is no 
valid/new source, or an already chosen source is being played and cannot 
be changed before it's stopped, and so on), and successive changes could 
be scheduled for evaluation as soon as possible (e.g. as soon as the 
network state returns to NETWORK_EMPTY, or becomes NETWORK_LOADED 
and/or the playback has ended or been stopped, if appropriate in this 
case), possibly being collapsed into a single task.
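
The "collapsed into a single task" idea can be sketched as a tiny 
scheduler that keeps at most one pending load() evaluation no matter how 
many changes request one (LoadScheduler, and the synchronous flush() 
standing in for the event loop running the queued task, are illustrative, 
not part of any media element API):

```javascript
// Illustrative sketch: many requestLoad() calls while a task is
// pending collapse into one load() evaluation, mirroring the idea
// of scheduling conditioned load() invocations as a single entry.
class LoadScheduler {
  constructor(loadFn) {
    this.loadFn = loadFn;
    this.pending = false;
  }
  requestLoad() {
    if (this.pending) return;   // already queued: collapse
    this.pending = true;
  }
  flush() {                     // stands in for the event loop running the task
    if (!this.pending) return;
    this.pending = false;
    this.loadFn();
  }
}

let loads = 0;
const s = new LoadScheduler(() => loads++);
s.requestLoad();  // src attribute set
s.requestLoad();  // source child inserted in the same script
s.requestLoad();  // another change, still the same pending task
s.flush();
console.log(loads); // -> 1
```

With this shape, any number of src/source mutations inside one script 
would still cost a single resource-selection pass.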


This way, a load evaluation preceding the script execution, in your 
example, would fail and revert the network state to empty, triggering 
a new invocation after the script has been executed; an evaluation 
following the script would work as expected; an evaluation invoked while 
the script is executing would cause the new v.src value to be scheduled 
for a later check (the overall mechanism would result in an 
unconditional scheduling of conditioned load() invocations, collapsed 
into one single entry until a call to .load() is made, which I t

Re: [whatwg] 5.12.3.7 Link type "help"

2009-02-03 Thread Giovanni Campagna
<a> elements are always hyperlinks; <link> can be hyperlinks or
resources. So it only makes sense for the <link> element to be
explicitly marked like this.
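
In markup terms the distinction reads roughly like this (a hand-written 
illustration, not an example taken from the spec):

```html
<!-- A link element is not rendered; rel="help" turns it into a
     hyperlink pointing at help for the whole page. -->
<link rel="help" href="/help/form-page.html">

<!-- An a element is a hyperlink already; rel="help" only adds the
     meaning that its target is context-sensitive help. -->
<a rel="help" href="/help/username.html">What is this field?</a>
```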

Giovanni

2009/2/3 Philipp Kempgen :
> ---cut---
> 5.12.3.7 Link type "help"
> The help keyword may be used with link, a, and area elements.
> For link elements, it creates a hyperlink.
> ---cut---
>
> Shouldn't that read "For *a* elements, it creates a hyperlink."?
>
>
>   Philipp Kempgen
>
> --
> AMOOCON 2009, May 4-5, Rostock / Germany   ->  http://www.amoocon.de
> Asterisk: http://the-asterisk-book.com - http://das-asterisk-buch.de
> AMOOMA GmbH - Bachstr. 126 - 56566 Neuwied  ->  http://www.amooma.de
> Geschäftsführer: Stefan Wintermeyer, Handelsregister: Neuwied B14998
> --
>


Re: [whatwg] Adding and removing media source elements

2009-02-03 Thread Philip Jägenstedt

On Tue, 03 Feb 2009 05:44:07 +0100, Ian Hickson  wrote:


On Tue, 3 Feb 2009, Chris Pearce wrote:


(2) Why don't we invoke load() whenever a media element's src attribute
or <source> children are changed, regardless of networkState? That way
changes to the media's src/source other than the first change would have
the same effect as the first change, i.e. they'd have an immediate effect,
causing load() to be invoked.


Doing this would cause the first file to be downloaded multiple times in a
row, leading to excessive network usage.



Surely this can't be the only reason? User agents are free to  
speculatively keep the current source loading when src/source changes and  
to stop loading it only if the "current media resource" does change. That,  
and caching, should be enough.


I have always imagined that the reason for the conditioned load() is to  
not interrupt playback by fiddling with the DOM or doing something like  
v.src=v.src (although I'm quite sure that doesn't count as changing the  
attribute). However, now I can't convince myself that this makes any  
sense, since surely if you change src/source you actually do want to  
change the effective source (and load() is scheduled to run after the  
current script, so there's no risk of it being run too early).


Related, since load() is async it depends on timing whether or not

   <video id="v"></video>
   <script>
v = document.getElementById('v');
v.src = 'test';
   </script>

causes the source 'test' to be loaded, as the network state may not be 
NETWORK_EMPTY when the src attribute is set. The same goes for adding 
source child elements of course. Yes, this is the same issue as 
http://lists.w3.org/Archives/Public/public-html/2009Jan/0103.html and 
would be resolved by calling load() unconditionally.


--
Philip Jägenstedt
Opera Software