Re: [whatwg] IRC and WWW integration proposal

2013-02-01 Thread Odin Hørthe Omdal
On Fri, 01 Feb 2013 07:41:59 +0100, Nils Dagsson Moskopp  
 wrote:



Shane Allen  schrieb am Thu, 31 Jan 2013 23:40:11
-0600:


> A protocol attribute for <a> elements would be totally hilarious.
Not if the device is a tablet, or a phone running a browser that
supports it. Need support from a page/article or even a project? Hit
a button, and if the protocol is implemented, you're in the IRC
channel able to garnish that support instantly.

We probably misunderstood each other. Protocols are mentioned at the
beginning of a URL; having a protocol attribute on an <a> element
would therefore be redundant.


To illustrate:

  <a href="example.net/whatwg" protocol="irc">WHATWG</a>

Instead of:

  <a href="irc://example.net/whatwg">WHATWG</a>

Or a more fitting example of how it could be used:

  <a href="http://whatwg.org" protocol="http">WHATWG</a>

But, what happens now?

  <a href="http://whatwg.org" protocol="mailto">WHATWG</a>



Hilarious :-)

--
Odin Hørthe Omdal (Velmont/odinho) · Core, Opera Software, http://opera.com


Re: [whatwg] Features for responsive Web design

2012-10-15 Thread Odin Hørthe Omdal

On Thu, 11 Oct 2012 20:07:04 +0200, Markus Ernst  wrote:

This is why I'd humbly suggest to put information on the image in  
@srcset rather than info on the device and media. Such as:

  <img srcset="low.jpg 200w, hi.jpg 400w, huge.jpg 800w">


What about an image gallery, when you have 25 thumbnails on one page?  I'm  
not sure how this will work in cases where you don't want the image to be  
the "max size" your screen can handle.


Even the common case of having an article picture that is not 100% of the  
screen width will be hard to do in a responsive non-fluid way with  
predefined breakpoints.


--
Odin Hørthe Omdal (Velmont/odinho) · Core, Opera Software, http://opera.com


Re: [whatwg] input type=barcode?

2012-08-31 Thread Odin Hørthe Omdal
On Fri, 31 Aug 2012 12:26:23 +0200, Alexandre Morgaut  
 wrote:


Working in Web applications solutions, I would really love such input  
type or mode


Thing is, we don't need this on the web platform until some user agent  
actually supports giving input by a "reader" (don't even have to restrict  
it to barcode, it can be whatever).


To the website it will in fact just look like a very fast user typed in  
the data (or pasted it in). So this is possible to do already.


<input type=barcode> makes very little sense imho, because what you want  
to input is always a number or some text. *How* you input that data  
shouldn't be any of the web site's business.


If I want to type in the number on the barcode manually instead of  
scanning it, I should be allowed to do that. If the royal mail service in  
Norway hasn't set inputmode to barcode/reader on their <input type=text>  
box for inputting tracking code, the user agent is still *totally* able  
(and allowed) to push in data from a barcode reader there.



Quite frankly, there is no need to spec stuff like this now, as long as  
it's not being used at this time. Why hint about something that doesn't  
exist in user agents?


Right now we don't have _any_ inputmode hints at all. It's still possible  
to provide inputs to <input> fields from voice-to-text, from keyboard,  
from touchscreens with on-screen keyboards, pre-filling etc.





Basically, there is no need to say anything about a barcode reader, if you  
have one in your user agent, just use it... It will work already.


--
Odin Hørthe Omdal (Velmont/odinho) · Core, Opera Software, http://opera.com


Re: [whatwg] Proposal for HTML5: Virtual Tours

2012-08-31 Thread Odin Hørthe Omdal

On Fri, 31 Aug 2012 11:55:31 +0200, Leo Willner 
wrote:

To throw in my 5 cents:
If a tour is just needed for panos we could do a dedicated tag; for that we  
need the distorted 360°-pano-image and do a rendering of it in the  
browser into a 360°-pano.


But it's *really* not needed. This is such a small use case, and is doable
already with canvas or other web features.

Advanced tours? WebGL.
Less advanced ones? Canvas, - or just images, css, transitions and
animations.

This is really something you can easily build with existing technologies.

--
Odin Hørthe Omdal (Velmont/odinho) · Core, Opera Software, http://opera.com


Re: [whatwg] Features for responsive Web design

2012-08-10 Thread Odin Hørthe Omdal
On Thu, 09 Aug 2012 18:54:10 +0200, Kornel Lesiński   
wrote:


One stylesheet can be easily reused for a pixel-perfect 1x/2x layout,  
but pixel-perfect 1.5x requires its own sizes incompatible with 1x/2x.



Apart from it possibly being a self-fulfilling prophecy – isn't this
too much premature “optimization” ?


I think we can safely assume that authors will always want to prepare as  
few assets and stylesheets as they can, and will prefer integer units to  
fractional ones (1px line vs 1.5px line).


I don't see the big problem; I think the spec is fine here. Yes, it allows  
for putting a float there, but authors won't use it, so what's the  
problem? The spec already says you should use the number to calculate the  
correct intrinsic size, and the implementation will know what to do with a  
float number there if someone finds an actual use for it.


This isn't limiting it for the sake of making anything easier, it's not  
like "the x is an integer" is any easier than "the x is a float". And if  
you *do* somehow find a good use for it down the line (and I believe there  
might be, maybe 0.5x) it'll be there and work. No harm. :)
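The intrinsic-size calculation works identically for integer and fractional densities; a minimal sketch (the helper name is hypothetical, not spec text):

```javascript
// Sketch: deriving an image's intrinsic CSS size from its srcset
// density descriptor. Fractional densities like 1.5x need no special
// casing; the resource's pixel dimensions are simply divided by x.
function intrinsicSize(resourceWidth, resourceHeight, density) {
  if (!(density > 0)) throw new RangeError("density must be positive");
  return { width: resourceWidth / density, height: resourceHeight / density };
}

console.log(intrinsicSize(600, 400, 2));   // { width: 300, height: 200 }
console.log(intrinsicSize(600, 400, 1.5)); // width 400, height 266.66...
```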


--
Odin Hørthe Omdal (Velmont/odinho) · Core, Opera Software, http://opera.com


Re: [whatwg] Missing alt attribute name bikeshedding (was Re: alt="" and the exception)

2012-08-06 Thread Odin Hørthe Omdal
On Mon, 06 Aug 2012 02:31:00 +0200, Maciej Stachowiak   
wrote:
I don't have a strong opinion, but I think  
generator-unable-to-provide-required-alt might be long to the point of  
silliness.


IMHO generator-unable-to-provide-required-alt in all its ugliness is a  
really nice feature, because who in their right mind would write  
that? It's really made for a corner case, and if you really, really want  
it, you should be prepared to deal with the ugliness, because what you  
are doing is ugly in the first place...



It clearly describes what's going on, in clear text that even those whose  
mother tongue is not English can easily understand. It discourages usage  
by being ugly and long. The negative reaction you had here is more or less  
what I believe the name is designed to provoke.


You don't use it unless you have to.


It feels humiliating to bikeshed a name, but this is also about what  
effect you want to put onto the ones using it.


--
Odin Hørthe Omdal (Velmont/odinho) · Core, Opera Software, http://opera.com


Re: [whatwg] frame accuracy breaking case for 25fps / status of 29.97fps

2012-07-09 Thread Odin Hørthe Omdal

On Mon, 09 Jul 2012 18:46:20 +0200, adam k  wrote:

i have a 25fps video, h264, with a burned in timecode.  it seems to be  
off by 1 frame when i compare the burned in timecode to the calculated  
timecode.  i'm using rob coenen's test app at  
http://www.massive-interactive.nl/html5_video/smpte_test_universal.html  
to load my own video.


what's the process here to report issues?  please let me know whatever  
formal or informal steps are required and i'll gladly follow them.


Well, it works beautifully on that web site you reference. What do you  
think is actually wrong? I'm not so sure the spec is the first and best  
place to go to find the error(?)


--
Odin Hørthe Omdal (Velmont/odinho) · Core, Opera Software, http://opera.com


Re: [whatwg] [JavaScript / Web ECMAScript] Dropping the “escaped reserved words” compatibility requirement

2012-07-09 Thread Odin Hørthe Omdal

On Thu, 05 Jul 2012 10:47:46 +0200, Mathias Bynens  wrote:

Has the time come to drop this compatibility requirement?


Looks like a good time, if there truly have been no compat problems with  
the change Mozilla made. We'll align and try it out.


--
Odin Hørthe Omdal (Velmont/odinho) · Core, Opera Software, http://opera.com


Re: [whatwg] Problems with width/height descriptors in srcset

2012-05-16 Thread Odin Hørthe Omdal

Jeremy Keith  wrote:
If I'm taking a "Mobile First" approach to development, then srcset will  
meet my needs *if* Nw and Nh refer to min-width and min-height.


In this example, I'll just use Nw to keep things simple:

  <img src="small.png" srcset="medium.png 600w, large.png 800w">

(Expected behaviour: use small.png unless the viewport is wider than 600  
pixels, in which case use medium.png unless the viewport is wider than  
800 pixels, in which case use large.png).


Or you can do it "Desktop First" with the same behaviour:

  <img src="large.png" srcset="small.png 0w, medium.png 600w, large.png 800w">

The 0 in srcset would have to override the implicit 0 that large got in
src. But that should be easy.

If, on the other hand, Nw and Nh refer to max-width and max-height, I  
*have to* take a "Desktop First" approach:

  <img src="large.png" srcset="small.png 600w, narrow.png 800w">

(Expected behaviour: use large.png unless the viewport is narrower than  
800 pixels, in which case use narrow.png unless the viewport is narrower  
than 600 pixels, in which case use small.png).


Likewise:

  <img src="small.png" srcset="small.png 600w, narrow.png 800w, large.png 92000w">

I readily admit that the 92000w looks really ugly. And if you have a
viewport that is wider than 92,000 px it will choose the small.png.

Maybe it should have an Infinite marker for that case. Can't think of a
beautiful solution there.
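The two readings can be compared in a small self-contained sketch (the candidate lists and selection functions below are illustrative, not the spec algorithm):

```javascript
// Sketch of the two competing readings of the "Nw" descriptor.
// Candidates are sorted ascending by their w value.

// min-width reading ("Mobile First"): use the last candidate whose
// threshold the viewport has passed, else fall back to @src.
function pickMinWidth(candidates, fallback, viewport) {
  let choice = fallback;
  for (const c of candidates) if (viewport > c.w) choice = c.url;
  return choice;
}

// max-width reading ("Desktop First"): use the first candidate that is
// still wide enough for the viewport, else fall back to @src.
function pickMaxWidth(candidates, fallback, viewport) {
  for (const c of candidates) if (viewport <= c.w) return c.url;
  return fallback;
}

// Mobile First under the min-width reading:
const minC = [{ url: "medium.png", w: 600 }, { url: "large.png", w: 800 }];
console.log(pickMinWidth(minC, "small.png", 500)); // small.png
console.log(pickMinWidth(minC, "small.png", 700)); // medium.png
console.log(pickMinWidth(minC, "small.png", 900)); // large.png

// Mobile First forced into the max-width reading needs an ugly sentinel:
const maxC = [{ url: "small.png", w: 600 }, { url: "narrow.png", w: 800 },
              { url: "large.png", w: 92000 }];
console.log(pickMaxWidth(maxC, "small.png", 900)); // large.png
```

Both functions reproduce the expected behaviours described in the examples above; the asymmetry is that only one of them can be the spec's reading.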

One of the advantages of media queries is that, because they support  
both min- and max- width, they can be used in either use-case: "Mobile  
First" or "Desktop First".


Because the srcset syntax will support *either* min- or max- width (but  
not both), it will therefore favour one case at the expense of the  
other.


But making the descriptors bigger and more verbose makes them that for
every single entry you're adding in @srcset. Hardly something to sing about.

  <img srcset="small.png max-width: 600px, narrow.png max-width: 800px">

vs

  <img srcset="small.png 600w, narrow.png 800w">

The other problem with my straw man proposal is that it kinda looks like
CSS, but it isn't ("max-width: 200" wouldn't work e.g.) so authors would
get that wrong much more often.

And... okay, my straw man is just horrible. It'd be better if someone
could come up with a real alternate proposal, though. I kinda like the
syntax in the spec draft; it's short and sweet. And obvious once you
know it.

People will probably (heh) learn it from good documentation, or hopefully
quickly understand it once they have browsers to test in. It might be hard
learning all this from a spec and scattered emails - that's what spec
people are used to, but not so webdevs. Dev Opera and MDN would
probably have good texts on it where people actually look stuff up.

Both use-cases are valid. Personally, I happen to use the "Mobile First"  
approach, but that doesn't mean that other developers shouldn't be able  
to take a "Desktop First" approach if they want. By the same logic, I  
don't much like the idea of srcset forcing me to take a "Desktop First"  
approach.


I am sympathetic to the idea, but right now I don't know how.

If we can't find something that's preferable (concise, easy to read, etc.)
and can do both, I would prefer that we put the most weight on how
the image tag is mostly used on the web today - allowing sites to
mobile-enable their images by adding smaller images to srcset.

--
Odin Hørthe Omdal (Velmont/odinho) · Core, Opera Software, http://opera.com


Re: [whatwg] Responsive images and printing/zooming

2012-05-16 Thread Odin Hørthe Omdal

Markus Ernst  wrote:
I read the current spec and huge parts of today's discussions to find  
out how images with multiple sources are intended to behave when  
printed, or when the page is zoomed, but I found no hints. I think some  
words on this might be useful in the spec, regardless of what the final  
syntax will be.


Both issues you are highlighting are in the domain of competing browsers
to implement in the best way for their users/device/intended use case.
They should not, IMHO, be part of the spec. I'll tell you how I see it:


1. Print
When a page is printed (or also converted to PDF or whatever), both  
"viewport" width and pixel ratio change. Are UAs expected to load the  
appropriate sources then? This could result in increased bandwidth,  
delayed printing, and IMHO a disturbed user experience, as the image may  
differ from the one seen on screen. Thus, I suggest always using the  
resource actually shown on screen for printing.


They are not _required_ to do anything when those change.

The spec draft does have an algorithm for updating img elements though:


The user agent may at any time run the following algorithm to update
an img element's image in order to react to changes in the
environment. (User agents are not required to ever run this algorithm.)

<http://www.whatwg.org/specs/web-apps/current-work/multipage/embedded-content-1.html#processing-the-image-candidates>
(Scroll way past the first algorithm, and you'll get to it.)

So it's up to the user agent to decide what to do in that case. I think
it would be very nice to substitute the picture if you can get one with
a higher resolution. But it might even use the one it has in the
preview, and download the big image while the user is spending time
pressing options etc.

Then, when the user agent has gotten the new image, it can swap out the
preview for the new higher-res one - and also decide to use that one
for the print.

If it can't make the deadline (the user hitting print), it can just use
the one it has. Or a user agent that likes to nag its users might ask
them what to do instead.

This is a place for quality of implementation (QoI), where browsers can
compete on providing the best experience. I like it.


2. Zoom
On mobile devices, web pages are often zoomed out to fit the viewport  
width by default, the user is supposed to manually zoom in and scroll in  
order to read parts of pages. I understand that the whole thing about  
responsive design is to make this kind of zooming unnecessary, but in  
practice there will be all kinds of partly responsive designs using  
responsive images.
Specially in cases where separate sources are given to match device  
pixel densities, zooming might matter, as for a zoomed-out page the low  
res image might be more than sufficient, but after zooming in the higher  
resolution might be appropriate. Which OTOH can disturb the user  
experience, when the images differ.


Yes, but you might get a quicker page load if you do the smallest one
first. You can load that one and then when you're done with all the
network activity, you can start a new job to download a bigger version
in the background.

You can substitute that one when you have it (or when the user zooms).


Or the browser may choose to just load the one it wants for zoom
straight away.

It's decidable! If a browser does something that's ugly, the users of
that browser will just have to bear with it, or switch to another one
(hehe ;-) ).


These are user-experience-level things; they do not need to be
interoperable.
--
Odin Hørthe Omdal (Velmont/odinho) · Core, Opera Software, http://opera.com


Re: [whatwg] So if media-queries aren't for determining the media to be used what are they for?

2012-05-16 Thread Odin Hørthe Omdal

Oh, please do quote what you are answering. It's very hard to follow
such a conversation like this.

Matthew Wilcox  wrote:

If there was a way to do this in JS, we'd have found it. Every time we
run up against the pre-fetch problem. In fact, it is only the
pre-fetch problem that causes responsive images to be an issue at all.
It'd be trivial to fix with JS otherwise.


I could have been clearer. I believe this is what you are talking about:

I said:

media queries is doing model 2. I suggest we find a way to do that with
javascript. Maybe some form of deferring image loading at all, saying
that "I will fetch image on my own". Then you can do the delayed image
loading that would need to happen in a media query world.


When I say find a way to defer it, I mean spec a way to do it, and
implement that. Something like:

  <img defer src="big.jpg">

I understand the problem :-)


Also, i don't think non-pixel based layouts can be easily dismissed.
It's where the whole movement is going and already pixel based MQ are
described as limited and not a best practice.


... But it doesn't work. Please read my emails, and come back with
constructive technical feedback on why you think it *can* in fact work.
I cannot see a method where that would work in a non-broken way.

Technical problems won't just magically go away by not acknowledging
them.


And I did find a way forward for model 2: make a way to defer the
image load and a way to trigger it. Maybe the <picture> element should
always defer? It actually *has to*, because it uses media queries, so in
fact <picture> might be a solution for model 2 in the future.

But @srcset is solving the other part of the equation (model 1).
--
Odin Hørthe Omdal (Velmont/odinho) · Core, Opera Software, http://opera.com


Re: [whatwg] Features for responsive Web design

2012-05-16 Thread Odin Hørthe Omdal

Henri Sivonen  wrote:

What's currently in the spec is terribly counter-intuitive in this
regard.


The spec has a bug where it is contradicting itself in some steps. That
makes it very hard to read and confusing for those who read those steps.

I can see now how it does handle the art-direction case as well. I  
think it's a shame that it's a different syntax to media queries but on  
the plus side, if it maps directly to imgset in CSS, that's good.


It seems to me that Media Queries are appropriate for the
art-direction case and factors of the pixel dimensions of the image
referred to by src="" are appropriate for the pixel density case.


Yes, the late load (or extra load). And the early load.


I'm not convinced that it's a good idea to solve these two axes in the
same syntax or solution. It seems to me that srcset="" is bad for the
art-direction case and  is bad for the pixel density case.


Well. Not too sure I agree with /all/ of that. I agree in general,
but I also think that the early load model should be allowed to fetch
based on viewport size straight away.

If I have to choose between "loading image late, when mq etc engine has
started, being very flexible" and "loading image fast, not flexible at
all, just browser magic" - I'd go for the second one.


Even though we want to serve small images to my mobile phone, I'd still
like for it to be as fast as the browser is able to handle.

But then it is merely meant for different sizes of the same content image.
Only doing pixel densities feels very limiting - a bit too limiting to be
useful for the non-art-directed "I just want it to go fast" case.

--
Odin Hørthe Omdal (Velmont/odinho) · Core, Opera Software, http://opera.com


Re: [whatwg] So if media-queries aren't for determining the media to be used what are they for?

2012-05-16 Thread Odin Hørthe Omdal

Tim Kadlec  wrote:
The lack of em support is a concern though I understand the  
complications you have brought up.


Using ems for media queries (which in turn dictate layout which in turn  
dictates the image I want to load) is increasingly looking like a much  
wiser decision than using pixels. A perfect example are devices such as  
the Kindle Touch which have a much higher default font size. A real  
world example, and case study, can be found here:  
http://blog.cloudfour.com/the-ems-have-it-proportional-media-queries-ftw/


I don't think it is fit for this round of spec. It is in direct conflict
with preloading/prefetching. It's a different model and requires a
different fix.

Model 1, before load: do image decision, fetch image while loading the page
Model 2, after load: load page, do image decision after layout

srcset is using model 1, which is faster and in the same way images are
done today. I don't think you'll be able to convince vendors to ditch
that optimization.

media queries is doing model 2. I suggest we find a way to do that with
javascript. Maybe some form of deferring image loading at all, saying
that "I will fetch image on my own". Then you can do the delayed image
loading that would need to happen in a media query world.

Having a fix for model 1, doesn't hinder something for model 2 to come
at a later point.

Now suppose that for that layout displayed in their screenshot, the  
header image was changed to be a vertical oriented phone and the size  
was reduced. In that case, I would want srcset to kick in with a  
different image. It sounds like it would not be able to accomplish this  
right now?


No, you're right about that. Or it could work in the current proposal,
but I don't really think it's worth it.

The spec does have an algorithm for updating the image that does a new
fetch and show, but user agents are not required to run it. So you can't
really depend on it. But it can work. If the browser has already fetched a
bigger image, and has that in cache, it might not want to fetch a smaller
one when you rotate, though. Why show something of worse quality than what
you already have cached?

If the intrinsic sizes are different, well, the user agent doesn't
know that until it has downloaded the image anyway.


IMHO that should rather be done with model 2. That means, in the
short term: finding a way to solve it using client-side javascript.


So in clear text: I don't think that should be supported by img srcset.
That's a job for a media query. Model 2.


Forgive me if I'm just missing something. It's early and my coffee  
hasn't kicked in quite yet. :)


PS: I would be very happy if you didn't top-post, and also trimmed your
quotes so that it's easy to follow and read (I read email on my phone
when I'm out, and I love when people write emails that work nicely on
the phone).

--
Odin Hørthe Omdal (Velmont/odinho) · Core, Opera Software, http://opera.com


Re: [whatwg] / not needed

2012-05-16 Thread Odin Hørthe Omdal
On Wed, 16 May 2012 11:50:51 +0200, Matthew Wilcox  
 wrote:

So wrap an image in SVG? I don't see this as being very clean.


The problems Tab pointed out are enough for it to not meet the use cases  
anyway. So it doesn't matter one way or the other. :-) Using SVG like this  
is not a suitable solution.



--
Odin Hørthe Omdal (Velmont/odinho) · Core, Opera Software, http://opera.com


Re: [whatwg] <picture> for responsive bitmapped content images

2012-05-16 Thread Odin Hørthe Omdal
On Wed, 16 May 2012 11:22:07 +0200, Julian Reschke   
wrote:



Inventing a new microsyntax is tricky.
 - "comma separated" implies you'll need to escape a comma when it  
appears in a URI; this may be a problem when the URI scheme assigns a  
special meaning to the comma (so it doesn't affect HTTP but still...)


Indeed.

Edward did not write it all out as a spec, though, so cases like that might  
be a bit detailed for a first proposal. Hixie's extension of srcset does  
however have some spec text, and that does in fact handle your first case:


<http://www.whatwg.org/specs/web-apps/current-work/multipage/embedded-content-1.html#processing-the-image-candidates>

 - separating URIs from parameters with whitespace implies that the URIs  
are valid (in that they do not contain whitespace themselves); I  
personally have no problem with that, but it should be kept in mind


Me neither. I'm no big fan of non-valid URIs. :-)

--
Odin Hørthe Omdal (Velmont/odinho) · Core, Opera Software, http://opera.com


Re: [whatwg] So if media-queries aren't for determining the media to be used what are they for?

2012-05-16 Thread Odin Hørthe Omdal
On Wed, 16 May 2012 10:33:05 +0200, Matthew Wilcox  
 wrote:

Am I right in believing that the srcset attribute is limited to
pixels? A unit that's dying out in all responsive designs? Is it
extensible to em, %, etc.? Because that's what's used.


I'm afraid you are confusing a lot of stuff together here.

You can use em and % freely in your stylesheets/CSS. The values from  
srcset are used to fetch the right resource during early prefetch, checked  
against the width and height of the viewport (and only the viewport).


Having ems or % would make no sense whatsoever there, because you don't  
know what they mean. 50% of what? The viewport size? You would basically  
make it always match, because 50% will always be half of whatever your  
viewport size is...


Say you write:

  <img srcset="mycoolpic.jpg 50%, myotherpic.jpg 75%">

If viewport is  50px wide, mycoolpic will match   25px, myotherpic will  
match 37.5px, so it'll pick myotherpic.jpg
If viewport is  200px wide, mycoolpic will match 100px, myotherpic will  
match 150px, so it'll pick myotherpic.jpg
If viewport is  500px wide, mycoolpic will match 250px, myotherpic will  
match 375px, so it'll pick myotherpic.jpg
If viewport is 1000px wide, mycoolpic will match 500px, myotherpic will  
match 750px, so it'll pick myotherpic.jpg


See the pattern? It'll always pick the one closest to 100%, no matter  
what. It doesn't make sense.
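The pattern can be checked mechanically; a sketch (with an illustrative chooser that picks the candidate whose computed width lands closest to the viewport):

```javascript
// Sketch: why %-based srcset descriptors are meaningless at prefetch
// time. Each candidate's width is a fraction of the viewport, so the
// ranking between candidates never changes, whatever the viewport is.
function pickByPercent(candidates, viewport) {
  let best = null;
  let bestDist = Infinity;
  for (const c of candidates) {
    const width = (c.pct / 100) * viewport;  // e.g. 50% of 200px -> 100px
    const dist = Math.abs(viewport - width); // distance from "100%"
    if (dist < bestDist) { best = c.url; bestDist = dist; }
  }
  return best;
}

const cands = [{ url: "mycoolpic.jpg", pct: 50 },
               { url: "myotherpic.jpg", pct: 75 }];
for (const vw of [50, 200, 500, 1000]) {
  console.log(vw, pickByPercent(cands, vw)); // always myotherpic.jpg
}
```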


Also note: the width of the actual image and the value you write in srcset  
don't need to have any connection at all. The value is only used to match  
against the viewport to choose which picture the user agent will fetch.



This example will make your logo a smaller mobile version when your  
viewport width is 600 or smaller (you should also have some media queries  
in your stylesheet that change the layout at that stage):

  <img src="logo-150px.jpg" srcset="logo-50px.jpg 600w">

Here, the logo-50px.jpg will only be loaded if your viewport width is less  
than 600. It'll choose logo-150px.jpg for everything else.
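Under the viewport-matching semantics described here (where the 600w descriptor acts as an upper bound on viewport width), the choice between logo-50px.jpg and logo-150px.jpg reduces to a trivial sketch:

```javascript
// Sketch of the selection for srcset="logo-50px.jpg 600w" with
// src="logo-150px.jpg": the 600w candidate only matches viewports up
// to 600 CSS pixels wide; anything wider falls back to @src.
function pickLogo(viewportWidth) {
  return viewportWidth <= 600 ? "logo-50px.jpg" : "logo-150px.jpg";
}

console.log(pickLogo(480));  // logo-50px.jpg
console.log(pickLogo(1024)); // logo-150px.jpg
```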



CSS (the domain media queries work in) and resource fetching that  
works with prefetching/-loading are two totally separate things/problems.


If you make a solution that supports em/% in a meaningful way, you  
would have to wait for layout in order to know what size they mean. So  
you will have slower-loading images, and ignore the "we want pictures  
fast" requirement.


--
Odin Hørthe Omdal (Velmont/odinho) · Core, Opera Software, http://opera.com


Re: [whatwg] Correcting some misconceptions about Responsive Images

2012-05-16 Thread Odin Hørthe Omdal

Thank you for the well written email.

On Wed, 16 May 2012 09:13:01 +0200, Tab Atkins Jr.   
wrote:

3. "@srcset doesn't have good fallback behavior". Yup, it does. The
simplest way is to just do the simplest thing: provide both a @src and
a @srcset, and that's it.  This has good behavior in legacy UAs, and
in new ones.  The only downside of this is that if you're polyfilling
the feature in legacy UAs, you get two requests (first the @src, then
whatever the JS changes it to).

If this is problematic, there's a more verbose but still simple way to
handle this (afaik, first suggested by Odin):

  <img src="data:," srcset="small.jpg 600w, big.jpg 2x" style="display: none">
  <noscript><img src="small.jpg"></noscript>

It was not first suggested by me; I shopped around in the RespImg CG and
on different blogs, comments and articles, and picked it up
somewhere along the path.

I think Scott Jehl's "Some early thoughts on img@srcset in the real
world" might be the first place I saw it:

   <https://gist.github.com/2701939>

Although he said something to the effect of "plausible, but may have
some issues".

Hence my proof of concept *demo* of a srcset polyfill that optimizes for
few requests:

  <http://odin.s0.no/web/srcset/polyfill.htm>

To show that it can work. The example I'm using is:

  <img srcset="big.jpg 2x" style="display: none">
  <noscript><img src="small.jpg" alt="..."></noscript>

It'll work without javascript, only showing one alt text.

With javascript on, it'll copy the alt text to the real <img> and take
away the display:none (which is only there to hinder IE from showing a
broken image icon when you have no Javascript).


In modern UAs, JS just has to remove the @style.  In legacy UAs, JS
removes the @style and sets the @src appropriately (the data: url
ensures that there's not an extra request before JS activates).  If JS
isn't turned on, the first image just never displays, but the second
one does.  This is more complicated and a bit more fragile than the
previous solution, but it only ever sends a single request.


Yes. I have a live test for something like that. It works in all the
devices and browsers I've got access to. Which is not really very many,
but should cover quite a large part of the internet. (:P)

It's at the same page ( http://odin.s0.no/web/srcset/polyfill.htm ).
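The core of such a polyfill is parsing the srcset microsyntax and picking a candidate. A minimal sketch (not the actual demo code, and deliberately ignoring the spec's error handling):

```javascript
// Minimal srcset parser: "a.jpg 600w 2x, b.jpg" ->
// [{ url: "a.jpg", w: 600, x: 2 }, { url: "b.jpg", w: Infinity, x: 1 }]
function parseSrcset(srcset) {
  return srcset.split(",").map(entry => {
    const parts = entry.trim().split(/\s+/);
    const cand = { url: parts[0], w: Infinity, x: 1 };
    for (const d of parts.slice(1)) {
      if (d.endsWith("w")) cand.w = parseInt(d, 10);
      else if (d.endsWith("x")) cand.x = parseFloat(d);
    }
    return cand;
  });
}

// Pick the first candidate whose viewport bound and density both fit,
// else the last one as a catch-all.
function pickCandidate(cands, viewportWidth, density) {
  return cands.find(c => viewportWidth <= c.w && density <= c.x) ||
         cands[cands.length - 1];
}

const cands = parseSrcset("small.jpg 600w, big.jpg 2x, huge.jpg");
console.log(pickCandidate(cands, 480, 1).url);  // small.jpg
console.log(pickCandidate(cands, 1200, 2).url); // big.jpg
```

A real polyfill would then assign the chosen URL to the element's src and un-hide it, which is all DOM plumbing around this selection step.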

--
Odin Hørthe Omdal (Velmont/odinho) · Core, Opera Software, http://opera.com


Re: [whatwg] Correcting some misconceptions about Responsive Images

2012-05-16 Thread Odin Hørthe Omdal
On Wed, 16 May 2012 09:42:46 +0200, Chris Heilmann   
wrote:
  <img srcset="..." style="display:none;">


So we praise the terse syntax of it and then offer a NOSCRIPT for  
backwards compatibility? Now that is a real step back in my opinion.


Please, read Tab's full email. No need to willfully mislead people just to  
create a flame war like this.


You know as well as we do that the backwards compat story is:

  <img src="small.jpg" srcset="big.jpg 2x">

The extra <noscript> is only for a *Javascript* polyfill that will give you  
the behavior in current browsers. That means only those who absolutely  
want switching to work in browsers that haven't implemented it should use  
something like this.


In fact, polyfilling other solutions will require exactly the same,  
because you'd have to cater for people with Javascript turned off.


However, you can also polyfill the simple version, but you would get two  
requests in some browsers if you do that. So you can optimize what you  
want. The only thing we're talking about


--
Odin Hørthe Omdal (Velmont/odinho) · Core, Opera Software, http://opera.com


Re: [whatwg] So if media-queries aren't for determining the media to be used what are they for?

2012-05-15 Thread Odin Hørthe Omdal
Silvia Pfeiffer wrote on Wed, 16 May 2012  
00:57:48 +0200:



Media queries come from the client side. They allow the author of a web
page to tell exactly how she wants to lay out her design based on the
different queries. The browser *HAS* to follow these queries. And also,
I don't think (please correct me if wrong) the media query can be subset
to only the stuff that's really meaningful to do at prefetch-time.

The srcset proposal, on the other hand, is purely HINTS to the browser
engine about the resources. They are only declarative hints that can be
leveraged in a secret sauce way (like Bruce said in another mail) to
always optimize image fetching and other features. If you make a new
kind of browser (like e.g. Opera mini) it can have its own heuristics
that make sense *for that single browser* without asking _anyone_.
Without relying on web authors doing the correct thing, or changing
anything, or even announcing to anyone what they are doing. It's opening
up for innovation, good algorithms and smart uses in the future.

That's the basic difference, totally different.


If that's the case, would it make sense to get rid of the @media
attribute on <source> elements in <video> and replace it with @srcset?


Video is at least a bit different in that you don't expect it to be fully  
loaded and prefetched at such an early stage as img. But I've been thinking  
about that since I read something like "we already have media queries in  
source for video, but it's not really implemented and used yet".


I'm not sure. What do you think? As far as I've seen, you're highly  
knowledgeable about <video>. Why do we have media queries on the video  
element? Do we have a use case page? Doing the same as whatever <img> ends  
up doing might be a good fit if the use cases are similar enough. Would be  
nice to be consistent if that makes sense.


--
Odin Hørthe Omdal · odinho / Velmont · Opera Software


Re: [whatwg] So if media-queries aren't for determining the media to be used what are they for?

2012-05-15 Thread Odin Hørthe Omdal

Andy Davies  wrote:

Looking at the srcset proposal it appears to be recreating aspects of
media-queries in a terse less obvious form...

We've already got media queries, so surely we should be using them to
determine which image should be used, and if media queries don't have
the features we need then we should be extending them...


Ah! What a truly great question, so simple.

The answer is: no, it is not media queries, although it looks like them. A
big problem is that it's so easy to explain it by saying "it's just like
media-query max-width", rather than finding the words to illustrate that
they are totally different.

The *limited effect* also feels similar which doesn't help the case at
all.

So, even though I have a rather bad track record of explaining
anything, I'll try again:

Media queries come from the client side. They allow the author of a web
page to tell exactly how she want to lay out her design based on the
different queries. The browser *HAS* to follow these queries. And also,
I don't think (please correct me if wrong) media queries can be subset
to only the features that are really meaningful to evaluate at prefetch time.

The srcset proposal, on the other hand, are purely HINTS to the browser
engine about the resources. They are only declarative hints that can be
leveraged in a secret sauce way (like Bruce said in another mail) to
always optimize image fetching and other features. If you make a new
kind of browser (like e.g. Opera mini) it can have its own heuristics
that make sense *for that single browser* without asking _anyone_.
Without relying on web authors doing the correct thing, or changing
anything or even announcing to anyone what they are doing. It's opening up
for innovation, good algorithms and smart uses in the future.


That's the basic difference, totally different. :-)


With mediaqueries, you don't know at the time when you're prefetching an
image, what box it is in. So many media queries will either not make
sense (so they won't work like authors intend them to), OR the browser
would have to wait until it has layout for it to start fetching images.
Neither of these two would actually be good, so they are in conflict.

I'd also like to give an example of the "smart uses in the future" for
img srcset: right-click save could fetch the highest-quality image and save
that one instead of the one it has currently fetched.
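To make that "smart save" idea concrete, here is a minimal sketch. It is purely illustrative — the function and file names are made up, and it only handles "2x"-style density descriptors, not the full srcset grammar:

```javascript
// Hypothetical sketch, not any browser's actual code: given a srcset
// string, pick the candidate with the highest density descriptor, so a
// "save image" action could grab the best quality instead of whatever
// was fetched for display. Only "2x"-style descriptors are handled.
function bestCandidate(srcset) {
  return srcset.split(',')
    .map(function (part) {
      var tokens = part.trim().split(/\s+/);
      // A trailing "2x"-style descriptor is density; default is 1x.
      var m = /^([\d.]+)x$/.exec(tokens[1] || '1x');
      return { url: tokens[0], density: m ? parseFloat(m[1]) : 1 };
    })
    .sort(function (a, b) { return b.density - a.density; })[0].url;
}

console.log(bestCandidate('face.jpg 1x, face@2x.jpg 2x')); // "face@2x.jpg"
```

The point is that the hint format leaves room for this: the markup never said anything about right-click save, yet a browser can use the same candidate list for it.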


Bruce Lawson  wrote Tue, 15 May 2012 23:46:44 +0200

Just so I understand

1) the 600w 200h bit replicates the functionality of the familiar Media  
Queries syntax but in a new unfamiliar microsyntax which many have  
argued is ugly, unintuitive and prone to error  
(http://www.w3.org/community/respimg/2012/05/11/respimg-proposal)


No. It only works on device width, and it's only a hint, so it's
actually part of your part 2:

2) The new bit is the descriptors of pixel density (1x 2x etc). This  
isn't "media queried" because the precise mechanism by which one image  
is chosen over the other is left to the UA to decide based upon  
heuristics. Those heuristics may be secret sauces that give a browser a  
competitive advantage over another; they may be based on data the  
browser has accumulated over time (On my current "Bruce's bedroom WiFi"   
I know I have medium network speed but very low latency so I will tend  
to favour images with characteristic X) and so which aren't available to  
query with MQs because MQs are stateless; they may be based upon certain  
characteristics that could conceivably be available to MQs in the future  
(Do I support JS? Am I touch enabled?) but aren't yet.


Is that accurate?


Yeah, sounds more like it. But it applies to the whole thing.

I'm sympathetic to (2); why require a developer to think of and describe  
every permutation of the environment, when she could instead describe  
that which she knows - the images - and then allow the UA to take the  
decision. As time goes on, UAs get cleverer, so behaviour improves  
without the markup needing changing.


Exactly.

But it doesn't seem necessary to saddle authors with (1) to acheive (2),  
as far as I can see.


It's heavily optimized for the use case that will happen most often: for
"retina" type displays:

  <img src="pic.jpg" srcset="pic@2x.jpg 2x">

bruce-speaking-for-myself-not-Opera


I'm not speaking for Opera either, but we do work for Opera, and it's
hard to disclaim everything always.


I hope it made sense.

--
Odin Hørthe Omdal (odinho/Velmont) · Opera Software


Re: [whatwg] Implementation complexity with elements vs an attribute (responsive images)

2012-05-13 Thread Odin Hørthe Omdal
Kornel Lesiński said:
> Odin said:
>> Actually, for this to work, the user agent needs to know the size of the
>> standard image. So:
>>
>>   <img src="dog.jpg" width="960" srcset="dog@2.jpg 2x, dog-lo.jpg 500w">
>>
>> So if you've got the smartphone held in portrait, it's 250 css pixels
>> wide, and so 500 real pixels, it could opt to show dog-lo.jpg rather
>> than dog.jpg.
>
>But still displayed at 960 CSS pixels of course? That'd be fine (and the  
>UA could even download dog@2x when user zooms in).

Yes, that's a good thing to pinpoint. The picture will be in a 960 CSS
pixels box, but depending on the stylesheet - maybe

img { max-width: 100%; height: auto }

It will of course resize the backing picture down to fit that rule when it
comes to the layout part. But yes, the picture will behave as if it is
960 px wide, only with lower dpi (resolution). Just the opposite of 2x
in fact.


All optional replacements of the src will have to be fitted in the same
box as the original src. That might actually require you to specify both
width and height upfront. Of course, people won't really do that, so I
guess we're bound to get differing behaviour... Hm.

What do people think about that? What happens here? You have no info on
the real size of the picture. I guess maybe the browser should never
load any srcset alternatives then? If you have no information at all
it's rather hard to make a judgement.

A photo gallery wants to show you a fullscreen picture, and give you:

   <img src="photo.jpg" srcset="photo@2x.jpg 2x">

In this example, we (humans :P) can easily see that one is 2048 px and the
other 4096 px. If I'm viewing this on my highres Nokia N9, a naïve
implementation could pick the 2x, because it knows that's nicely highres
just like its own screen.

But it would actually be wrong! It would never need anything more than
the 2048 px version for normal viewing, because that already exceeds the
real pixels on its screen.
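The correction to that naïve heuristic can be sketched as follows. This is illustrative only — function names, file names, and the 854 px screen width are made up, not any proposed API:

```javascript
// Sketch: for a fullscreen view, a UA should not fetch more pixels
// than the screen actually has, regardless of density labels.
function pickForFullscreen(candidates, screenWidthPx) {
  // candidates: [{ url, widthPx }] in any order
  var sorted = candidates.slice().sort(function (a, b) {
    return a.widthPx - b.widthPx;
  });
  for (var i = 0; i < sorted.length; i++) {
    // Smallest candidate that still covers the screen wins.
    if (sorted[i].widthPx >= screenWidthPx) return sorted[i].url;
  }
  return sorted[sorted.length - 1].url; // nothing covers it; take the biggest
}

var candidates = [
  { url: 'photo.jpg', widthPx: 2048 },
  { url: 'photo@2x.jpg', widthPx: 4096 }
];
// An 854 px wide phone screen never needs the 4096 px version:
console.log(pickForFullscreen(candidates, 854)); // "photo.jpg"
```

Note that this only works if the UA knows the candidates' intrinsic sizes, which is exactly the gap discussed above.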

-- 
Odin Hørthe Omdal (odinho, Velmont), Core, Opera




Re: [whatwg] Implementation complexity with elements vs an attribute (responsive images)

2012-05-13 Thread Odin Hørthe Omdal
Kornel Lesiński wrote:
> Selection of 1x/2x images is relevant only as long as we have 100dpi
> screens and slow connections, and both will disappear over time.

Well, the web is a huge place. I'm quite sure that'll take ages and
ages if it ever happens at all (I don't think it'll ever disappear).
Might get irrelevant for some small and specific markets/segments though.

> How about that:
>
>  <picture>
>    <source srcset="small.jpg 1x, small@2x.jpg 2x" media="max-width:4in">
>    <source srcset="large.jpg 1x, large@2x.jpg 2x">
>  </picture>
>
> Instead of srcset it could be src2x or another attribute that specifies  
> image for higher screen density and/or bandwidth. The point is that  
> media="" would allow author to choose image version adapted to page  
> layout, and another mechanism connected to <source> would allow UA to  
> choose image resolution.

Seeing it here in code, it's actually not such a monster as I said
it'd be. So I like it even more, and it's the obvious way for these to
interact.

I think it'd be a mistake to call it src2x though -- it feels very
specific. You can scale up to double then, but you can't necessarily go
beyond that: going down for e.g. mobile.

OTOH, 2x will be the most common usage at least as far as I can tell.

  <img src="dog.jpg" src2x="dog@2.jpg">

  vs.

  <img src="dog.jpg" srcset="dog@2.jpg 2x">

is not really all that different, but the second should be more
flexible. Also downscaling:

  <img src="dog.jpg" srcset="dog-lo.jpg 500w">

Actually, for this to work, the user agent needs to know the size of the
standard image. So:

  <img src="dog.jpg" width="960" srcset="dog@2.jpg 2x, dog-lo.jpg 500w">

So if you've got the smartphone held in portrait, it's 250 css pixels
wide, and so 500 real pixels, it could opt to show dog-lo.jpg rather
than dog.jpg.
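That arithmetic can be sketched like this (illustrative only; the helper names are made up and the descriptor handling is deliberately minimal):

```javascript
// Sketch of the reasoning above: a 250 CSS px box at 2 device pixels
// per CSS px needs 500 device pixels, so a 500w candidate is enough.
function neededDevicePixels(cssWidth, devicePixelRatio) {
  return cssWidth * devicePixelRatio;
}

function pickByWidth(candidates, needPx) {
  // candidates: [{ url, w }] where w is the "500w"-style width descriptor
  var fits = candidates.filter(function (c) { return c.w >= needPx; })
                       .sort(function (a, b) { return a.w - b.w; });
  return fits.length ? fits[0].url : null; // caller falls back to src
}

var need = neededDevicePixels(250, 2); // 500 device pixels in portrait
console.log(pickByWidth([{ url: 'dog-lo.jpg', w: 500 }], need)); // "dog-lo.jpg"
```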

-- 
Odin Hørthe Omdal, Core, Opera Software




Re: [whatwg] runat (or "server") attribute

2012-05-13 Thread Odin Hørthe Omdal
>> Just use type="text/server-js"...
> Is that really a good idea? It seems odd to use a mime type for such a
> reason.

I thought it was quite a nice idea.

Why would it not be? It's describing what's in the script tag, just like it's
supposed to do. It's even quite a common pattern for doing client-side
templates.
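For reference, a minimal sketch of that pattern (illustrative only: the type and id values are made up, and the template is pulled out of a plain HTML string with a regex here, rather than from a live DOM via textContent):

```javascript
// A browser ignores <script> blocks whose type it doesn't recognise,
// so pages stash templates (or server-only code) inside them.
var html =
  '<script type="text/x-template" id="row">' +
  '<li>{{name}}</li>' +
  '<\/script>';

function templateContent(source) {
  // Grab whatever sits between the script tags (naive, sketch-grade).
  var m = /<script[^>]*>([\s\S]*?)<\/script>/.exec(source);
  return m ? m[1] : null;
}

console.log(templateContent(html)); // "<li>{{name}}</li>"
```

In a real page you'd read the element's textContent instead; the point is only that an unrecognised type makes the content inert, which is why reusing the type attribute is not odd.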

Re: [whatwg] Implementation complexity with elements vs an attribute (responsive images)

2012-05-13 Thread Odin Hørthe Omdal
Jason Grigsby wrote:
> David Goss wrote:
>> A common sentiment here seems to be that the two proposed responsive
>> image solutions solve two different use cases:
> 
> After skyping with Mat (@wilto) last night, I think I may be the only
> one who didn’t fully grok that the mediaqueries in <picture> could be
> used to address both use cases. 
>
> It is still unclear to me if srcset would address both.

Oh but they *do* solve two different use cases. Mediaqueries in
<picture> _may_ be able to address both, but srcset is not, and
never will. It's simply not designed for it.

So, why then do I prefer srcset when it comes to solving the «save
bandwidth»/use correct "weight" of resource use case?


Because its design is "browser choose" instead of "web author choose".
It puts these decisions into the browser's control. The easiest path for
the developers, is also what will make the browser be able to be a good
agent to its user, and decide what it shall download.

The srcset attribute is also much simpler, which makes me think it'll
be used more and also in the correct way. There's a strong correlation
between simple and much used.

<picture> for solving this use case is/will be over-engineered; just
because it can be done doesn't mean it should. For the other use case,
adapting the image for different content, it might be a good candidate
though. <source> inside <picture> should also get a srcset attribute
then, so that it would be possible to choose different qualities if
they exist. For relatively advanced sites, it will look like a monster
though, so that's something to be looked into more.


David Goss wrote:
> Connection speed
> As an extension of the iPad example above, it would also be
> irresponsible to serve the high res image to users that do have a high
> pixel density display but are not on a fast internet connection for
> whatever reason. So you might write:
> 
> 
> 
> 
> 

As I said, this is one of my big gripes with this proposal. I don't
think this'll work satisfactorily: it puts the burden of figuring out
what is correct for that user on the page author. That model skews the
responsibility so that instead of only a few browser engines having to
get it right, millions of web page authors have to get it right.

AND they have to update their sites and mediaqueries when we get
something new to optimize for. I don't think they will do that, based on
how extremely big the problem with -webkit-prefixes is.

I've seen enough of the web to be sceptical.


What if the author doesn't write that min-connection-speed query
there? And who is the author of a page to decide such things anyway?
What about latency? Should there be a max-latency: 10ms there as well?
What about cost? I have a fast line, but it costs money to download
stuff, so I'd like the smaller pictures. What about if I have slow
internet connection, but I'd want to print the page, thus getting the
best version of the image? Or if I'm sitting with my new fancy hires KDE
Tablet and loading your page in a background-tab, letting it render and
be finished while I continue to read the page I'm on (the browser might
very well want to load the highres picture then, even though the
connection is slow, but with MQ, not that lucky).

> (... containing element width/height)
> As I understand it, the <picture> syntax would have to keep getting
> extended every time we wanted to test a different property.

No. It wouldn't, because it only describes the images, nothing more.

Given:

  <img src="hero.jpg" srcset="hero-lo.jpg 400w, hero.jpg 800w">

Say if you're in a browser optimizing for low bandwidth usage, and some
quality at the cost of speed.  The viewport is 800x600.  In the normal
case, the browser would choose hero.jpg because it fits well with its
resource algorithm. However, since being in the special mode, it defers
the prefetch of the image and waits for layout, where it can see that
this picture lies inside a 150px wide box - so it fetches hero-lo.jpg
because it doesn't need more.

With the MediaQueries proposal, you'd need loads of advanced information
to do the same. The browser could not just infer this on its own and
change its algorithms to do it.

Bandwidth might be expensive even though it's fast, so although they
have over 1mbit speed, they want the low-res pictures. With media queries
there's just so many variables, and so much to choose from.


srcset only chooses between different *qualities* of the same image,
whereas who knows what mediaqueries do? With media queries it's not
possible to do anything the web page author hasn't told you about. That's
why srcset is so much more powerful for its use case.


The browser is in a better position to decide what quality of image it'd
like to fetch.

The content author, however, is in a better position to lay out the
different content images based on their set of mediaqueries. I can see
a few use cases there, but they are orthogonal to what we're talking
about here. I think it's something worth solving, but inside srcset is
not the place to do it.

-- 
Odin Hørthe Omdal (odinho/Velmont), Core, Opera Software





Re: [whatwg] [Server-Sent Events] Infinite reconnection clarification

2012-04-27 Thread Odin Hørthe Omdal

I think I should do a TLDR since I didn't really get any answers:

1. Should EventSource *ever* time out once it has been connected?
2. What do browsers do today? What do you think is a good thing to do?

I tried Opera, Firefox and Chromium for question 2.

Opera: Gives up after 2-3 minutes.
Firefox: Gives up after ~15 minutes.
Chromium: Doesn't ever give up. Longer and longer retry intervals, upper  
limit (I think) of 1 minute between each retry.
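The Chromium-style behaviour can be sketched like this. The doubling rule, the cap, and the starting value are illustrative assumptions on my part, not Chromium's actual algorithm:

```javascript
// Sketch: exponential backoff on the reconnect interval, capped at
// one minute, never giving up (the "never time out" option above).
function retryDelays(initialMs, attempts, capMs) {
  var delays = [];
  var d = initialMs;
  for (var i = 0; i < attempts; i++) {
    delays.push(Math.min(d, capMs));
    d *= 2; // double the wait after every failed reconnect
  }
  return delays;
}

console.log(retryDelays(500, 9, 60000));
// [500, 1000, 2000, 4000, 8000, 16000, 32000, 60000, 60000]
```

The first tries still respect a small *reconnection time*, while a long outage settles into one attempt per minute instead of hammering the server.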




And the TL version follows:

On Tue, 17 Apr 2012 16:44:56 +0200, Odin Hørthe Omdal   
wrote:



If I understand correctly, the spec expects the implementation to keep
reconnecting indefinitely when the network cable is yanked. That is my
strong impression, but it'd be nice to get it clarified in plain text in the
spec.

Clients will reconnect if the connection is closed; a client can be  
told to stop reconnecting using the HTTP 204 No Content response code.



CLOSED (numeric value 2)
The connection is not open, and the user agent is not trying to  
reconnect. Either there was a fatal error or the close() method was  
invoked.


The task that the networking task source places on the task queue once  
the fetching algorithm for such a resource (with the correct MIME type)  
has completed must cause the user agent to asynchronously reestablish  
the connection. This applies whether the connection is closed  
gracefully or unexpectedly. It doesn't apply for the error conditions  
listed below.



And this is the place a small clarification could come in handy:

Any other HTTP response code not listed here, and any network error  
that prevents the HTTP connection from being established in the first  
place (e.g. DNS errors), must cause the user agent to fail the  
connection.


Maybe "Network errors after a successfully established connection must
cause the user agent to try reestablishing the connection indefinitely."

Or something better. At least, make it clear what is going to happen. :-)


On that note, it'd also be nice to hear what the other vendors do with
the connection. It seems like both Firefox and Chromium have an exponential
fallback with a max-value between the reconnection tries. The first tries
will probably respect the given *reconnection time*, but after a while
that'll be too often.

I tried yanking the network for 10+ minutes, and when I put the cable in
again, both Firefox and Chromium used 25 seconds to reconnect. When only
yanking it for one minute, the reconnection was much faster (2-5  
seconds).

This with *reconnection time* set to 500ms.




--
Odin Hørthe Omdal (Velmont/odinho) · Core, Opera Software, http://opera.com


Re: [whatwg] [media] startOffsetTime, also add startTime?

2012-03-08 Thread Odin Hørthe Omdal
On Thu, 08 Mar 2012 12:50:41 +0100, Ingar Mæhlum Arntzen  
 wrote:

Here's my reasoning. The progress value that is visualized in the video
element (i.e. currentTime) is part of the end-user experience. For this
reason it is important that it communicates the appropriate abstraction
consistently to all end-users.



Ah, but that is up to the user agent to decide how to show the time code.  
The currentTime should be normalized from 0 until duration. That makes the  
API behave in a common way for all easy tasks. If you write a video player  
for your small cat clip, that video player will also work with streaming  
video without any problem. That is a good thing.


However, the user agent is free to show you (the user) your "real"  
position. And I agree that doing that makes sense. They don't exclude  
eachother.



Maybe "joinTime" or some other property could be added to hold that
information (which Chromium appears to lack - according to Sean O'Halpins
comments).

> Alternatively, to match your suggestion, if it is the sum ("startTime" +
> "currentTime") that is visualized in the video element, that might be OK
> too, but possibly more prone to confusion?


Only video player authors will actually see and use those attributes. They  
should be built for being robust and working nicely for different usages.  
Like I said, making the "dumb video player" also work for live streamed  
video without any changes.


If you want to do a more advanced media player that is live video  
streaming aware, you will have to opt-in to that instead. All the same is  
possible, only one way is more backward-proof than the other.


Philip Jägenstedt proposed "offsetTime" for what we've called "startTime",  
which IMHO is a clearer name.




In addition, I wonder if negative values for currentTime are legal. For
instance, when streaming a Formula 1 race that starts at 17.00, I would  
not

be surprised to see negative currentTime if I join the stream before the
race starts.


They are not, and shouldn't be. currentTime is always normalized to 0 ->  
duration.


However, you would be perfectly able to write a video player that does  
that by using offsetTime and currentTime together. Even better, the  
proposed "currentDate" exposes the underlying "date of recording" (or  
similar date) of the media, where you can then just look for 2012-03-08  
17:00. Actually, you could also build your video player to show that date  
on-screen, because 17:00 on the screen might be 18:13 at my place, because  
a) I'm in a different time zone, and b) there's 13 minutes worth of  
buffering between the Formula 1 production cameras and my computer.
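A sketch of the player logic suggested above. Note that offsetTime and currentDate were proposals at the time, not shipped API, and the helper names and numbers here are made up:

```javascript
// Map the normalized currentTime back onto the stream's own timeline,
// and onto wall-clock time via a currentDate-style start date.
function streamPosition(offsetTime, currentTime) {
  return offsetTime + currentTime; // position on the non-normalized timeline
}

function wallClock(startDateMs, currentTimeSec) {
  // "date of recording" plus playback position gives on-screen clock time
  return new Date(startDateMs + currentTimeSec * 1000);
}

// Joined a stream whose timeline started at 195 s; we're 10 s in:
console.log(streamPosition(195, 10)); // 205
```

A player built this way can show 17:00 on screen even when the local wall clock says 18:13, exactly as described for the buffered Formula 1 stream.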


--
Odin Hørthe Omdal · Core QA, Opera Software · http://opera.com /


[whatwg] [media] startOffsetTime, also add startTime?

2012-03-07 Thread Odin Hørthe Omdal
startOffsetTime seems to leave people confused, I often have to explain it,  
and yesterday I read the spec[1] and old emails and got confused myself.  
It hasn't been implemented after almost 2 years.



Having the UTC time of the clip you're getting would be very useful. But  
it'd be really nice to get the start of the non-normalized timestamp  
exposed to javascript for synchronizing out-of-band metadata with the live  
streamed media.


Browsers are currently supposed to take the timestamp and normalize it to  
0 for currentTime. Chromium currently does not do that; it starts at 3:15,  
if I join a streamed video that I started streaming 3 minutes, 15 seconds  
ago.


I don't think using the UTC time as the sync point is very stable at the  
moment. It'd be a much quicker, more stable, and easier win to get a startTime,  
timelineStartTime or timeSinceStart or similar that exposes the  
NON-normalized timestamp value at the start of the stream. So that, if you  
do


   startTime + currentTime

you're able to get the actual timestamp that the stream is at, at that  
point. And in contrast with startOffsetTime this one won't ever change, so  
startTime + currentTime will always be continuously increasing.




The UTC date which startOffsetTime would use seems to vary quite a  
bit. You need to know your streaming server and what it does in order to  
understand the result. Even different media from the same server might  
give different results if the streaming server implementation just reads  
the UTC time directly from the file. The information could be useful, but  
for more advanced uses.



startOffsetTime and initialTime came out of this conversation in 2010:
  
<http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2010-May/thread.html#26342>

And introduced here:
  <http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2010-August/028004.html>


Sean O'Halpin of BBC recently mentioned[2] some of the confusion:

There seems to be some confusion here in how the HTML5 media elements  
specification is dealing with logical stream addressing versus physical  
stream addressing. The excerpt above talks about a user agent being able  
to "seek to an earlier point than the first frame originally provided by  
the server" but does not explain how this could possibly happen without  
communication back to the server, in which case we are effectively  
dealing with a request for a different physical resource. At the very  
least, the fact that the Firefox and Chrome teams came up with different  
interpretations shows that this part of the specification would benefit  
from clarification.



And an earlier blog post about startOffsetTime specifically[3]:

The reason for setting this out is that we'd like to see consistent  
support for startOffsetTime across all commonly used codecs and for  
browser vendors to bring their implementations into line with the  
published HTML5 media elements specification. There are ambiguities in  
the specification itself, such as the interpretation of 'earliest  
seekable position', which could be clarified, especially with respect to  
continuous live streaming media. Browser vendors need to agree on a  
common interpretation of attributes such as currentTime so others can  
experiment with the exciting possibilities this new technology is  
opening up.




Sooo... It would be nice to get some real cleanups to the whole media +  
time thing. :D




1.  
<http://www.whatwg.org/specs/web-apps/current-work/multipage/the-video-element.html#offsets-into-the-media-resource>
2.  
<http://www.bbc.co.uk/blogs/researchanddevelopment/2012/02/what-does-currenttime-mean-in.shtml>
3.  
<http://www.bbc.co.uk/blogs/researchanddevelopment/2012/01/implementing-startoffsettime-f.shtml>

--
Odin Hørthe Omdal · Core QA, Opera Software · http://opera.com /


Re: [whatwg] [CORS] WebKit tainting image instead of throwing error

2011-10-06 Thread Odin Hørthe Omdal

On Thu, 06 Oct 2011 18:11:54 +0200, Adam Barth  wrote:

If they actually want a fallback, they can easily just reload the picture
without crossorigin, and they will probably get the cached image directly
from the browser (because it already has it, only won't show it).

Obviously, if there hadn't been a crossOrigin-attribute, this would be the
nice way to handle all image fetching.

It sounds like you're arguing that it's better for developers if we
fail fast and hard, which is the opposite of how most of the web
platform is design (vis HTML versus XML).
The arguments revolving around wishful thinking about how the world
should have been don't carry much weight for me.



Well, you're violating the specification. And this is something quite  
different from XML versus HTML.



And also, we're doing the same on XHR. If you set xhr.withCredentials and
the server does allow your origin but doesn't allow credentials, you don't
just send a request without credentials and hope the author doesn't notice.
That will throw an error instead.


For new stuff like this, there's no reason to be loose. If something
doesn't work in any browser at all, authors will fix it; if it works in one
but not any other, they will think all the other browsers are doing
something wrong.


In the spec, you'll get "notified" that your picture won't be tainted --
in WebKit's implementation it will just crash when you really try.




Anyway, for my part we could've just not had the "crossorigin" attribute  
at all, and just send Origin-header to all cross-origin images. But then  
everyone needs to do the same thing, and it would apparently also break  
some sites (  
http://lists.whatwg.org/pipermail/whatwg-whatwg.org/2011-June/032212.html  
).


--
Odin


Re: [whatwg] [CORS] WebKit tainting image instead of throwing error

2011-10-06 Thread Odin Hørthe Omdal

On Thu, 06 Oct 2011 17:05:29 +0200, Adam Barth  wrote:


The reason it's implemented like that is because I didn't add any new
security checks.  I just expanded the canvas taint-checking code to
understand that a CORS-approved image could pass.


Ok, so not really intended then. Good :-)


w.r.t. to blocking the whole image, there isn't any security benefit
for doing so (if we did so, attackers would just omit the crossorigin
attribute).  If you want to prevent folks from embedding the image,
you need something that works regardless of how the image was
requested (like From-Origin).


Well. I could, as server operator, block everything that didn't have an
Origin header. It wouldn't work then for browsing normal pages on your
own, but maybe for a special API or some such.



Anyway, that was really never my concern; the whole reason for actually  
having a crossorigin-attribute on the image would be because you want to  
get that extra check so you can use it like you want.


With WebKit now, if I built such an application, I wouldn't have any nice  
and obvious method of knowing whether I really could use the picture or  
not. Throwing an error on the image on the other hand makes it fail early,  
before I do all the canvas-processing.



Since the crossOrigin attribute exists, it'd make sense to have it behave  
as a real way for you to say «I'm going to use this and I need an explicit  
allowance».


Right now, the attribute doesn't really do anything from the author's  
point of view. It /may/ make an untainted image, or it might not. It's  
obviously different from always making tainted images (like not using the  
attribute would), but I think the whole reason why someone would go to the  
extra trouble of adding «crossOrigin» is if they really want an untainted  
image.


If they don't care about tainting, and just really want the picture, they  
can refrain from setting the crossOrigin attribute.



If they actually want a fallback, they can easily just reload the picture  
without crossorigin, and they will probably get the cached image directly  
from the browser (because it already has it, only won't show it).




Obviously, if there hadn't been a crossOrigin-attribute, this would be the  
nice way to handle all image fetching.


--
Odin Hørthe Omdal,
Opera Software


[whatwg] [CORS] WebKit tainting image instead of throwing error

2011-10-04 Thread Odin Hørthe Omdal

If the CORS check did not succeed on <img
src="http://crossorigin.example.net" crossorigin>, this should happen
according to the spec:

Discard all fetched data and prevent any tasks from the fetch algorithm  
from being queued. For the purposes of the calling algorithm, the user  
agent must act as if there was a fatal network error and no resource was  
obtained. If a CORS resource sharing check failed, the user agent may  
report a cross-origin resource access failure to the user (e.g. in a  
debugging console).


<http://www.whatwg.org/specs/web-apps/current-work/multipage/urls.html#potentially-cors-enabled-fetch>


In this scenario an author wanting to do some canvas processing with the
image will be able to check img.onerror to see whether she can use that
image. The image won't load on a failed check. Gecko does this.

WebKit, on the other hand, only taints the image and loads it anyway,
breaking the spec. The error will instead crop up in a way that is more
verbose and harder to miss when she tries to read the image data out.


Is WebKit's method a lesser surprise than the image just not showing up
(if they don't check for thrown error)? It'd be nice to hear why it's
implemented like that, if there are any good reasons. WebKit's method
seemed most obvious to me at first, but after investigating a bit I'm not
sure anymore...
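The difference between the two behaviours can be summarised as a table in code. This is a simplified model of the outcomes discussed here, not real browser code, and the outcome names are made up:

```javascript
// What happens to a cross-origin <img>, per the spec vs the WebKit
// behaviour described above, when the CORS check fails.
function corsImageOutcome(hasCrossoriginAttr, corsCheckPassed, webkitBehaviour) {
  if (!hasCrossoriginAttr) return 'load-tainted';   // plain cross-origin <img>
  if (corsCheckPassed) return 'load-untainted';
  // Check failed: the spec discards the data and fires an error;
  // the WebKit behaviour loads the image tainted anyway.
  return webkitBehaviour ? 'load-tainted' : 'error';
}

console.log(corsImageOutcome(true, false, false)); // "error" (per spec)
console.log(corsImageOutcome(true, false, true));  // "load-tainted" (WebKit)
```

The author-visible difference is the middle row: with the spec behaviour the failure surfaces in img.onerror, before any canvas processing starts.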

--
Odin Hørthe Omdal
Core QA, Opera Software


Re: [whatwg] and elements

2011-09-05 Thread Odin
On Sun, Sep 4, 2011 at 10:43 PM, Shaun Moss  wrote:
> Yes, but this is not semantic!!! Comments are not articles. They are
> completely different. Comments can appear in reference to things that are
> not articles (such as status updates), and therefore would not appear inside
> an <article> tag - so how would the browser recognise them as comments?

It is semantic.

Comments *are* in fact articles. You're thinking of it in the wrong
way. Article is not a newspaper article, but something that would make
sense to stand on its own.

So, a *nested* article is defined to be dependent on the outer
article, but it is still its own content and can be syndicated as an
individual content piece that's related to the parent article.

It makes perfect sense and is quite beautiful and doesn't require a
whole slew of tags. It's very nicely done.

And comments /are/ syndicated. Just look at WordPress. When I read
blogs in Liferea, I get the blog posts, as well as each individual
comment loaded from the syndicated comment-stream from that particular
blog post.



<article>
  <h1>HTML5 is great</h1>
  <p>Yup. It is.</p>
  <footer>By Me</footer>

  <article>
    <p>You're so correct!</p>
    <footer>By Ben</footer>
  </article>

  <article>
    <p>Better than butter, I say</p>
    <footer>By Adam</footer>
  </article>
</article>





Perfect is the enemy of good. Cue in xhtml2. :-)

-- 
Beste helsing,
Odin Hørthe Omdal 
English, technical: http://tech.velmont.net
Norsk, personleg: http://velmont.no


Re: [whatwg] and elements

2011-09-04 Thread Odin
On Sun, Sep 4, 2011 at 8:14 AM, Shaun Moss  wrote:
> I've joined this list to put forward the argument that there should be
> elements for  and  included in the HTML5 spec.

We already have a comment tag. It's listed in the article-element
section of the spec. Article within article is suggested to be a
comment:

http://www.whatwg.org/specs/web-apps/current-work/multipage/sections.html#the-article-element

> When article elements are nested, the inner article elements represent
> articles that are in principle related to the contents of the outer article.
> For instance, a blog entry on a site that accepts user-submitted
> comments could represent the comments as article elements nested
> within the article element for the blog entry.


<comment> is no good idea, for exactly the reasons Rand McRanderson
explained. You can read it on the wiki as well:
http://wiki.whatwg.org/wiki/Rationale#Failed_proposals

-- 
Beste helsing,
Odin Hørthe Omdal 
English, technical: http://tech.velmont.net
Norsk, personleg: http://velmont.no


Re: [whatwg] HTML5 based Eduvid Slidecasting Demo

2011-05-22 Thread Odin
On Sun, May 22, 2011 at 3:03 PM, Narendra Sisodiya
 wrote:
> infact I too want to do same..

Cool!

> Basically you want to send slides (via websocket or comet) and sync with
> video..

Yes. My old system (early Errantia) did that, using comet.

> Here is the mistake. Sending Slides in real time has no use. Please
> do not take me wrong. Basically in almost any conference or workshop,
> presenter already has slides ready. So if something is ready then you can
> directly push it to user. send video/audio in real time.

You misunderstand :-) The slides are no problem at all: I upload them
to the server, and when done it sends a Comet PUSH to the clients
which tells them a new slide resides at
"http://example.com/images/test-conference/14.jpg". So that part is
working swell.

Also, I send the time when the slide appeared. However, the syncing to
the video/audio is the impossible part, because there's no way for the
browser to know where the video is in time/place **in a live
setting**. You can very easily do this with archived video: you just
read currentTime.

We had a discussion about this, and that was why startOffsetTime made
it into the spec:

http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2010-May/thread.html#26342



As you see, there is a lot of video buffering, and so different
people will be at different places in the real-time video.

So I'm watching "live" and that's only one minute "off", whilst a
friend watching via wireless internet is a full 10 minutes off
because of his buffering, etc.

In order to sync slides to the video/audio in such cases (when people
connect 10 minutes into the video and get a fresh currentTime = 0 at
that point), you need a static reference point, but as of now all time
is still relative. Getting startOffsetTime will give us a static time
for knowing when to show the new slides (which will also have
a datetime field for syncing).

> So , No need to send slides in real time.. Send all slides at once.. and
> then keep on sending the configurations
> I will also try to work on this...

Well, for some conferences this won't work, and I already have code to
do it live, so I don't need to send slides afterwards. Anyway, both
ways work. But you need to know where people are in the video in order
to sync it to the slides, and that's where startOffsetTime comes in.

Alternatively you might try to control the buffers/caches, but that's
not always possible. I've tried before, and can't really get it tight
enough; there are too many variables, and Icecast might not be able to
tweak itself into a really good low-latency, low-buffering live
stream.
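To sketch the client-side lookup I have in mind: each slide announcement would carry the absolute stream time (in seconds) at which it should appear, and the client picks the right slide from that. The data and function names here are made up for illustration, not from any spec:

```javascript
// Each entry: [absolute stream time in seconds, slide image URL].
// The times would come from the Comet/WebSocket announcements.
var slides = [
  [0,   'slides/1.jpg'],
  [95,  'slides/2.jpg'],
  [210, 'slides/3.jpg']
];

// Return the URL of the slide that should be visible at the given
// absolute stream time: the last slide whose start time has passed.
function slideAt(streamTime, slides) {
  var current = null;
  for (var i = 0; i < slides.length; i++) {
    if (slides[i][0] <= streamTime) {
      current = slides[i][1];
    }
  }
  return current;
}
```

With startOffsetTime available, the absolute stream time in the browser would be something like video.startOffsetTime plus video.currentTime; without it, there is no reliable value to feed into a lookup like this on a live stream.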

-- 
Beste helsing,
Odin Hørthe Omdal 
http://velmont.no


Re: [whatwg] HTML5 based Eduvid Slidecasting Demo

2011-05-22 Thread Odin
On Sun, May 22, 2011 at 6:48 AM, Narendra Sisodiya
 wrote:
> On Sun, May 22, 2011 at 8:22 AM, Silvia Pfeiffer
> wrote:
>> This is a pretty awesome way to publish a record of your slide
>> presentation.
>
> I am working to create software to automate the Timing process..
> I did automated timing long back when I started Eduvid -
> http://wiki.techfandu.org/eduvid/Eduvid-0.011/wiki-eduvid-slideshow.html
> Hopefully I will be able to create better software where when you click
> next/forward, your timings will be saved alongwidth your slidenumber.

Hey Narendra!

When I found out about your effort I wanted to tell you about mine.
I'm building something similar, but instead of only doing it
"offline", I want the same, only live.

So: video (or just audio, as you say) and the slides sent in real time.
My big problem is obviously buffering (and thus latency) of the video.
Because of this extremely unpredictable delivery, it's not very easy to
sync the slides to the video. I'm pushing out new pictures in real time,
but they come out way too early for most viewers.

I'll continue to follow your project. My initial demo isn't up any
more, so I've got nothing interesting at all, but it is called
Errantia and lives here http://gitorious.org/errantia



Anyway, I'm interested in knowing whether you need live syncing
later. If so, we'd need to get startOffsetTime implemented in the
browsers. Mozilla will wait «until things like multitrack, media
controller, streams, etc are less of a moving target», but WebKit may
implement it; I guess they need a test case, though.

So I need a test case. I've also heard that Microsoft is very interested
in implementing features that have test cases, and maybe even in being
first (!) because of that.

-- 
Beste helsing,
Odin Hørthe Omdal 
http://velmont.no


Re: [whatwg] FYI: HTML usage data from Disqus websites

2011-01-25 Thread Odin
On Tue, Jan 25, 2011 at 11:43 PM, Anton Kovalyov  wrote:
> Anything in particular or just general overview of HTML5 tags usage?

Well, I find HTML5 video tags particularly interesting, as well as how
people use the fallback (what types of video they put inside it, etc.).

Of course, it's still very early, so I guess you'll find very few, but
it would still be interesting to see.

-- 
Beste helsing,
Odin Hørthe Omdal 
http://velmont.no


Re: [whatwg] FYI: HTML usage data from Disqus websites

2011-01-22 Thread Odin
On Wed, Jan 19, 2011 at 12:05 PM, Anne van Kesteren  wrote:
> On Wed, 19 Jan 2011 11:59:12 +0100, Anton Kovalyov  wrote:
>> make it happen. Of course, the data will be completely anonymous and it has
>> to be related to WHATWG's main focus. Results will be published probably by
>> me on behalf of the company.
>
> If you could give the same kind of data "webstats" gathered but for your
> dataset that would be great. But anything really might give some insight
> into what is going on on the Web. :-)
>

What about testing usage of HTML5 features?

Tag usage etc.? I guess it will be very small, but it'd be interesting
to see. And maybe someone could do trending on it after a few years.

-- 
Beste helsing,
Odin Hørthe Omdal 
http://velmont.no


Re: [whatwg] Can we deprecate alert(), confirm(), prompt() ?

2010-11-25 Thread Odin
On Thu, Nov 25, 2010 at 1:45 PM, Markus Ernst  wrote:
> Maybe, instead of your original suggestion, it might be worth thinking about
> making alert()/confirm()/prompt() dialogs styleable via CSS? Then those
> fancy JS lib dialogs would get obsolete, and we had the favour of both nice
> look and support for the special purposes of those dialogs.

Hear, hear.

I really like those dialogs; they are very easy and nice to use.

However, Internet Explorer doesn't support prompt(), which is
incredibly irritating and infuriating. So I had to use one of
those JS libs instead, and you know what? That was NOT a nice
experience. I tried many different ones, and none were as easy and
to-the-point as the plain prompt().

-- 
Beste helsing,
Odin Hørthe Omdal 
http://velmont.no


Re: [whatwg] Timed tracks: feedback compendium

2010-10-22 Thread Odin Omdal Hørthe
On Fri, Oct 22, 2010 at 1:09 PM, Philip Jägenstedt  wrote:
> Right. If we must have comments I think I'd prefer /* ... */ since both CSS
> and JavaScript have it, and I can't see that single-line comments will be
> easier from a parser perspective.

Just a small, personal subjective opinion.

I'm quite frustrated that CSS doesn't support //, because it makes
quickly disabling portions of the CSS to check how that looks much,
much harder and more time-consuming. Also, typing // is much faster
than typing /* and then hunting for the closing */.

Strange how such small, unimportant things end up being very
frustrating because they're used so much.

So although it might not be easier from a parser perspective, it's at
least easier for authors. (depending on what you use comments for, but
I'd take a wild guess that most comments in WebSRT would be
single-line)

-- 
Beste helsing,
Odin Hørthe Omdal 
http://velmont.no


Re: [whatwg] Timed tracks: feedback compendium

2010-10-19 Thread Odin Omdal Hørthe
On Wed, Sep 8, 2010 at 1:19 AM, Ian Hickson  wrote:
>> [...] You're also excluding roll-on captions then which is a feature of
>> live broadcasting.
>
> It isn't clear to me that an external file would be a good solution for
> live broadcasting, so I'm not sure this really matters.

The standards-loving Agency for Public Management and eGovernment here
in Norway is opening its eyes to HTML5 video (like the rest of
the world) and kicking the tires. I've been streaming many
conferences with Ogg Theora, using Cortado as a fallback for legacy
browsers (+Safari).

Now it has come to the point where we are required to follow the WAI
WCAG requirements, so we have to caption the live video streams/broadcasts.

Given the (not surprising) lack of browser support for Timed Tracks on
live streams, I'm at this point going to burn the text into the
video. However, that is no good long-term solution. When
browsers implement the new startOffsetTime I will be able to send the
text via a WebSocket to JavaScript and have it synced to the video
(along with the slide images).

However, it would be very nice to be able to send this to the
caption track, and not have to reimplement a user interface for
choosing whether to see captions etc. (I guess user agents will have
that). I also guess there will be other benefits to streaming directly
as a timed track, such as the user agent knowing what it is (so that
it can do smart things with it).

Accessibility is a quite universal requirement, and it would be very
nice if live streaming could be part of the same framework.


Or what other way is there to caption such live conferences, or even
to bring real-time metadata into a live video?

Maybe I could even send JSON about the new slides appearing in the
metadata track? Or even send the slides (images) themselves as
data URLs in the track?
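To sketch what I mean by sending cue-like JSON over a WebSocket: each message would mirror a timed-track cue, with start/end times in stream seconds plus an arbitrary payload (caption text, or a slide URL). The message shape and field names here are my own invention, not anything from the spec:

```javascript
// Encode a cue-like event as a JSON message to push over a WebSocket.
function encodeCue(start, end, payload) {
  return JSON.stringify({ start: start, end: end, payload: payload });
}

// Decode a received message back into a cue object.
function decodeCue(message) {
  return JSON.parse(message);
}

// On the receiving side one would do roughly:
//   socket.onmessage = function (e) { handle(decodeCue(e.data)); };
var msg = encodeCue(1890, 1895, { kind: 'slide', url: 'slides/14.jpg' });
var cue = decodeCue(msg);
```

The receiver would then schedule the cue against the video's timeline, which again needs an absolute reference like startOffsetTime to line up with the live stream.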

-- 
Beste helsing,
Odin Hørthe Omdal 
http://velmont.no


Re: [whatwg] Live video streaming with html5.

2010-09-22 Thread Odin Omdal Hørthe
On Wed, Sep 22, 2010 at 12:36 PM, Henri Sivonen  wrote:
> In practice, live streaming works with HTTP and either Ogg or WebM in at 
> least Firefox
> and Opera (maybe Chromium, too), since Ogg and WebM don't require the length 
> of the
> video to be known in advance.

I run an HTML5 streaming business. I use Icecast to send Ogg with
Theora+Vorbis. It works splendidly in Opera and Firefox. Chromium has
some problems because it uses FFmpeg, which is not always that good
at decoding Theora, but if I encode with the old versions of Theora it
also works in Chromium (and thus Chrome).

Then I use Cortado for showing the video in Internet Explorer.
Right now I just use a normal embed, although it would be nice to have
Cortado work as an HTML5 drop-in so that I could use the same
JavaScript interface. But Cortado actually works exceptionally well;
it works better than Firefox (which has some problems: it drops the
connection sometimes, and isn't as fast as Opera and
Cortado).

So live video streaming is very possible with HTML5; however, it will
be greatly helped by the new startOffsetTime attribute, which will
make it easier to sync things to the live stream in the browser. (Think
of saying "NOW!" on the live video and having the background change from
blue to red at just the same time, or, more realistically, syncing
presentation slides shown as img tags to the video.)

WebM streaming is done with the Fluendo streaming server, AFAIK. However,
I will wait, because much of my tool chain is Ogg tools, and they work
remarkably well and are stable.

And Theora is just getting better and better ;-) Also, it doesn't need
a new CPU to encode.

-- 
Beste helsing,
Odin Hørthe Omdal 
http://velmont.no


Re: [whatwg] The youtube html5 streaming issue reprise

2010-07-16 Thread Odin Omdal Hørthe
On Fri, Jul 16, 2010 at 6:34 AM, Shane Fagan
 wrote:
> Hey all,
>
> I just wanted to address the video streaming issue. I went looking today
> and I found that icecast streaming server[0] supports streaming of
> theora and I was thinking that it could be used in conjunction with the
> video tag to address youtube's comment about a lack of streaming issue.
> It may actually already work in the browsers already but I havent
> checked it, ill try it later in the theora enabled browsers and report
> back to anyone who is interested.

I've done that for a long time. I'm sending live webcasts out on the
web using Icecast2 and HTML5. I'm actually also building a small
HTML5 webcasting program for the web so that I can simultaneously show
images of the slides people are presenting. That's
what my first and only email thread to this group was about: getting a
timestamp inside HTML5 (or fetching it from the Ogg container, or WebM
in the other case) so that I could synchronize slides and video.

The video streaming kind of works OK, but it sometimes drops out. I've
streamed hour-long conferences with no problem, but then again have
also had people with a lot of problems.

Actually, using the video tag for Firefox and embedding Cortado for IE
works well. The sad part is that those using IE get a better experience,
as Cortado is quite willing to take streaming video and show it without
any problems. :-)

But HTML5 streaming works well enough for me to actually have it as my
sole income these last months. However, it needs improving if it's
to stay that way.

-- 
Beste helsing,
Odin Hørthe Omdal 
http://velmont.no


Re: [whatwg] On the subtitle format for HTML5

2010-05-23 Thread Odin Omdal Hørthe
On Sun, May 23, 2010 at 1:24 PM, Chris Double  wrote:
> 2010/5/23 Odin Omdal Hørthe :
>> Anyway, as of now I'm just waiting for a way to tell my webapp what
>> slide we're on (sync with the live streaming video).
>
> Can't you use the timeupdate event, get the 'currentTime' from the
> video, and decide what slide to show based on that? Or poll
> currentTime in a setTimeout? That's how most of the JavaScript
> subtitle examples work with  at the moment I think.

No, because:

I start the streaming.

10 minutes later, Leslie connects and gets the video maybe 1 minute
delayed because of IceCast2 buffering.

Her browser (Chromium and Firefox; I haven't tested Opera) starts
counting «currentTime» from when SHE started watching the live
stream, instead of from when the stream started.

So it's quite impossible to use that for syncing. I asked about that
here on this list, and got the answer that this is what we have the
startTime property for, but it is not implemented correctly in any
browser. startTime would then maybe say 0:00:00 for most clips, but
for the stream Leslie would have 0:10:00, and then I could use that for
syncing.
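To make the arithmetic concrete, here is a small sketch (the viewer objects and numbers are invented for illustration): two viewers watch the same live stream, and an event happens 15 minutes (900 s) into it. Their raw currentTime values disagree, but adding a correctly implemented startTime, i.e. the stream position where each viewer joined, recovers a shared timeline:

```javascript
// Absolute position in the stream = where the viewer joined (startTime)
// plus how long they have been watching (currentTime).
function streamPosition(startTime, currentTime) {
  return startTime + currentTime;
}

var viewerA = { startTime: 0,   currentTime: 900 }; // joined at the start
var viewerB = { startTime: 600, currentTime: 300 }; // joined 10 min in

var posA = streamPosition(viewerA.startTime, viewerA.currentTime);
var posB = streamPosition(viewerB.startTime, viewerB.currentTime);
// Raw currentTime differs (900 vs 300), but both positions agree.
```

With startTime stuck at 0 for every viewer, as implementations behaved at the time, the two positions cannot be reconciled, which is exactly the syncing problem described above.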

-- 
Beste helsing,
Odin Hørthe Omdal 
http://velmont.no


Re: [whatwg] On the subtitle format for HTML5

2010-05-23 Thread Odin Omdal Hørthe
2010/5/23 Silvia Pfeiffer :
> I guess I'm starting to talk myself into wanting a more HTML-like
> format than WebSRT that scales to provide full features by using
> existing Web technology. I'd be curious to hear what others think...

I want to use it for slide sync information. Right now I have a
WebSocket that listens for the file name of the next image slide to
show, but it's IMPOSSIBLE to sync video and slides with the
features that are in browsers now, because I can't get the real
timestamp from the Ogg Theora video. Also, having that "what slide are
we on" information in the video stream itself would be rather nice.

If WebSRT had classes, it could be used for all sorts of things. You
would parse the incoming WebSRT stream in JavaScript and use
stylesheets to build text overlays like YouTube has, etc. It's always a
tradeoff between an easy format and features. If you could optionally
put HTML in there to do advanced stuff, that might work, with some rule
for readers that don't understand 3. to just strip all tags and
show the result, or even a method for saying "don't show this
if you don't support fancy stuff". I might be mixing this up in a
bad way.

Anyway, as of now I'm just waiting for a way to tell my webapp what
slide we're on (synced with the live streaming video).

-- 
Beste helsing,
Odin Hørthe Omdal 
http://velmont.no


Re: [whatwg] Timestamp from video source in order to sync (e.g. expose OGG timestamp to javascript)

2010-05-18 Thread Odin Omdal Hørthe
On Tue, May 18, 2010 at 1:00 AM, Nikita Eelen  wrote:
> I think he means something similar to what QuickTime broadcaster and 
> quicktime streaming
> server does with a delay on a live stream or wowza media server with flash 
> media encoder
> when using h.264, unless I am misunderstanding something. Is that correct 
> Odin? Not sure
> how ice cast deals with it but I bet it's a similar issue,

Yes, I initially used Darwin Streaming Server, but found Icecast2 much
better for *my use*, so I use it in the same way. I have Icecast
buffer 1 MB worth of data so that it can burst all of it to the client
(the browser in this case) and make the client's own buffering go
faster. So even there we're quite far behind.

Also, the browsers often stop for a few seconds, buffer a bit
more, and then continue playing (although they have already buffered
more than a few seconds ahead!), so they drift even further
away from real time.



But I have important news: my bug at Mozilla was closed because they
say it's actually in the spec already. Because:

> The startTime attribute must, on getting, return the earliest possible
> position, expressed in seconds.

And they take that to mean that in a live stream it would be when I
started the stream (like VLC does), so the in-browser stream already
shows 00:31:30 if we're 31 minutes and 30 seconds into the live stream.

So actually the spec is good enough for my synchronisation uses.

You may watch this mozilla bug here:
<https://bugzilla.mozilla.org/show_bug.cgi?id=498253>


However, I think it's rather hard to figure out what the spec
means by *earliest POSSIBLE*. What is meant by possible? With
live streaming it is not possible to go further back in the stream.
What do you think? If the attribute does not help me here, then
adding a field for getting the _real_ time code data from the video
would be very useful.

It's talked about in this example:
<http://www.whatwg.org/specs/web-apps/current-work/multipage/video.html#dom-media-starttime>

> For example, if two clips have been concatenated into one video file, but the 
> video format
> exposes the original times for the two clips, the video data might expose a 
> timeline that
> goes, say, 00:15..00:29 and then 00:05..00:38. However, the user agent would 
> not expose
> those times; it would instead expose the times as 00:15..00:29 and 
> 00:29..01:02, as a
> single video.

That's all well and good, but it would be nice to get the actual time
code data for live streaming and these syncing uses, if startTime is
not the earliest time that exists.


Justin Dolske's idea looks rather nice:
> This seems like a somewhat unfortunate thing for the spec, I bet everyone's
> going to get it wrong because it won't be common. :( I can't help but wonder 
> if
> it would be better to have a startTimeOffset property, so that .currentTime et
> al are all still have a timeline starting from 0, and if you want the "real"
> time you'd use .currentTime + .startTimeOffset.
>
> I'd also suspect we'll want the default video controls to normalize everything
> to 0 (.currentTime - .startTime), since it would be really confusing 
> otherwise.

from <https://bugzilla.mozilla.org/show_bug.cgi?id=498253#c3>
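Under Dolske's proposal the client-side mapping would be trivial. A sketch, assuming a startTimeOffset property existed and gave the stream time at currentTime 0 (this property is hypothetical, taken from the bug comment above):

```javascript
// Under Dolske's proposal, currentTime still starts at 0 and the
// "real" stream time is currentTime + startTimeOffset.
function realStreamTime(currentTime, startTimeOffset) {
  return currentTime + startTimeOffset;
}

// Given a cue that should fire at an absolute stream time, compute
// how many milliseconds from the current playback position it should
// be scheduled (e.g. with setTimeout). Negative means it is already late.
function msUntilCue(cueStreamTime, currentTime, startTimeOffset) {
  return (cueStreamTime - realStreamTime(currentTime, startTimeOffset)) * 1000;
}
```

And as Dolske notes, the default controls would keep showing the plain currentTime, so ordinary playback UI would be unaffected by the offset.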

-- 
Beste helsing,
Odin Hørthe Omdal 
http://velmont.no


[whatwg] Timestamp from video source in order to sync (e.g. expose OGG timestamp to javascript)

2010-05-17 Thread Odin Omdal Hørthe
Hello!

I filed bugs at Mozilla and in Chromium because I want to sync a
real-time data stream to live video. Some of them told me to send it here
as well. :-)

It's only possible to get the relative play time with HTML5 in JavaScript;
I want the absolute timestamp that's embedded in the Ogg stream.

The spec only deals with relative times, not with getting such
information out of the container.

Here's the deal:
I stream conferences using Ogg Theora+Vorbis over Icecast2. I have built a
site that shows the video and then automatically shows the slides (as PNG
files) as well. I use Orbited (Comet) to have the server PUSH my «next»
key presses to the clients.

The problem is that Icecast does heavy buffering, and so does the client,
so that when I switch slides, the browser goes from slide 3 to 4 WAY
too early (by anywhere from 10 seconds to 1 minute).

If I could get the timestamp OR the time since sending/recording started
from the Ogg file in JavaScript, I'd be able to sync everything.

There are multiple ways to sync this, maybe even a stream with the slide
data INSIDE the Ogg file; however, AFAIK there's also no way of getting
such arbitrary streams out.

(PS: I had some problems, so sorry if you get this email many times! :-S)

--
Beste helsing,
Odin Hørthe Omdal 
http://velmont.no