Re: [whatwg] media resources: addressing media fragments through URIs spec

2010-07-06 Thread Silvia Pfeiffer
On Tue, Jul 6, 2010 at 2:45 PM, Marques Johansson  wrote:
> On Wed, Jun 30, 2010 at 8:11 AM, Silvia Pfeiffer 
> wrote:
>>
>> Hi all,
>>
>> The W3C WG for media fragments has published a Last Call Working Draft
>> at http://www.w3.org/TR/media-frags/ .
>>
>> The idea of the spec is to enable addressing sub-parts of audio-visual
>> resources through URIs, such as http://example.com/video.ogv?t=10,40
>> to address seconds 10-40 out of video.ogv. This is relevant for use in
>> the <video> and <audio> elements and can help focus the playback to a
>> specific subpart.
>
>
> When dealing with timed content - shouldn't there be a relative URI meaning
> "from the current time to the designated time"? I'm thinking of something
> like:
> http://example.com/video.ogv#t=,40
> which would be used to continue a piece of playing media up to the 40-second
> point and then stop.
> This could save the UA a fetch back to the start of the media fragment,
> which could be especially useful if the media is marked as no-cache.
> I'm thinking an article could outline links on a page each of which would
> cause a related video to continue playing up to the point specified in the
> link and then stop - giving the reader a chance to catch up.
> This also brings up the matter of link targets.  Shouldn't I be able to do
> something like this:
> 
> Next Slide
>

Wouldn't they all have a start time, too? E.g. the start time of a
slide to the end time of the slide?

When you write a Web page, there is no such thing as "now" on the page
- all you can reasonably assume when loading a media resource is that
playback starts at 0, so I cannot see how your links would work in the
way you're asking for. You can, of course, always do what you want with
JavaScript, though.
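
For illustration, a minimal JavaScript sketch of the script-based
approach alluded to above - links that let an already-playing video
continue up to a given time and then stop. The element id "talk" and the
data-until attribute are assumptions made up for this example, not
anything the spec defines:

  var video = document.getElementById('talk');
  var stopAt = null;

  // Pause automatically once playback reaches the requested time.
  video.addEventListener('timeupdate', function () {
    if (stopAt !== null && video.currentTime >= stopAt) {
      video.pause();
      stopAt = null;
    }
  });

  // Each "Next Slide" style link sets a new stop point and resumes playback.
  document.addEventListener('click', function (event) {
    var link = event.target;
    if (link.tagName === 'A' && link.hasAttribute('data-until')) {
      event.preventDefault();
      stopAt = parseFloat(link.getAttribute('data-until'));
      video.play();
    }
  });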

Cheers,
Silvia.


Re: [whatwg] media resources: addressing media fragments through URIs spec

2010-07-06 Thread Marques Johansson
On Wed, Jun 30, 2010 at 8:11 AM, Silvia Pfeiffer
wrote:

> Hi all,
>
> The W3C WG for media fragments has published a Last Call Working Draft
> at http://www.w3.org/TR/media-frags/ .
>
> The idea of the spec is to enable addressing sub-parts of audio-visual
> resources through URIs, such as http://example.com/video.ogv?t=10,40
> to address seconds 10-40 out of video.ogv. This is relevant for use in
> the <video> and <audio> elements and can help focus the playback to a
> specific subpart.
>


When dealing with timed content - shouldn't there be a relative URI meaning
"from the current time to the designated time"? I'm thinking of something
like:
http://example.com/video.ogv#t=,40
which would be used to continue a piece of playing media up to the 40-second
point and then stop.

This could save the UA a fetch back to the start of the media fragment,
which could be especially useful if the media is marked as no-cache.

I'm thinking an article could outline links on a page each of which would
cause a related video to continue playing up to the point specified in the
link and then stop - giving the reader a chance to catch up.

This also brings up the matter of link targets.  Shouldn't I be able to do
something like this:


Next Slide


Re: [whatwg] media resources: addressing media fragments through URIs spec

2010-07-04 Thread Bjartur Thorlacius
Aryeh Gregor  wrote:
> On Sun, Jul 4, 2010 at 9:19 AM, Silvia Pfeiffer
>  wrote:
> > All of the image formats that you are pointing out have an image mime
> > type. I am merely pointing out that to support ogg theora browsers
> > would need to support a video mime type in an <img> element. I don't
> > see that as the intention of the <img> element, in particular since
> > <img> elements do not have transport controls and the like. Otherwise,
> > why did we create a <video> element in the first place.
> 
> I'd expect that a video in <img> would behave like an animated GIF --
> no sound, no APIs to control playback, no browser-provided controls.

Is the no-controls part specified by the spec? A MUST in that regard
seems plain wrong. UI shall be implementation-defined.

The <video> element is there to allow "webapp" authors to control the
playback of video they link to, in ways that may make even less sense
for different media types, and to allow linking to multiple related
media streams.


> You might want this sometimes, especially if you're only selecting one
> frame.  Animated images are conceptually different from videos, and
> there's no reason you couldn't support the same format for both <img>
> and <video>, with those different semantics.  It would be particularly
> useful to support video frames as images in places where <video> can't
> be used, like for the  attribute, CSS backgrounds, and
> so on.  The video MIME type does not conflict at all with allowing
> this kind of usage.
> 
> So to cover this use-case, it would be good if there were a way of
> explicitly selecting one frame, which could be treated as a video that
> contains only one frame.  This might, in turn, be accepted by some
> browsers in places where they accept images.  You could do this by
> explicitly allowing syntax like #t=10,10, where the start point equals
> the end point, as selecting only one frame.  (But I guess this could
> be emulated by #t=10,10.001 or something, assuming the frame starts at
> exactly t=10.)
IMO single frames should be encoded as frames/images rather than
full-blown videos consisting of a single frame - i.e. in the format that
the video codec itself uses for (golden) frames. In the case of
fragments, the frame will of course have to be extracted dynamically on
the client side.
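
For illustration, one way a frame can already be extracted dynamically on
the client side today is to draw a <video> frame onto a <canvas>. A
minimal JavaScript sketch; the resource name and the 10s offset are
assumptions for the example:

  var video = document.createElement('video');
  video.preload = 'auto';
  video.src = 'file.ogv';                 // assumed resource

  video.addEventListener('loadedmetadata', function () {
    video.currentTime = 10;               // seek to the wanted frame
  });

  video.addEventListener('seeked', function () {
    // Copy the currently decoded frame into a same-sized canvas.
    var canvas = document.createElement('canvas');
    canvas.width = video.videoWidth;
    canvas.height = video.videoHeight;
    canvas.getContext('2d').drawImage(video, 0, 0);
    document.body.appendChild(canvas);
    // Or hand it on as an image: canvas.toDataURL('image/png');
  });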


Re: [whatwg] media resources: addressing media fragments through URIs spec

2010-07-04 Thread Silvia Pfeiffer
On Mon, Jul 5, 2010 at 4:23 AM, timeless  wrote:
> On Sun, Jul 4, 2010 at 4:19 PM, Silvia Pfeiffer
>  wrote:
>> Note that I do understand the need and am trying to explain how it can
>> be made to work. Also I am trying to show that what might look as the
>> simplest approach won't work and why.
>
> It doesn't have to be made to work that way, which is the point that
> the others were trying to make.
>
> http://www.w3.org/TR/REC-html40/struct/objects.html#adef-src-IMG
> src = uri [CT]     This attribute specifies the location of the image
> resource. Examples of widely recognized image formats include GIF,
> JPEG, and PNG.
>
> Nothing in the definition here says "the img tag only allows mime
> types of image/*"
>
> http://www.whatwg.org/specs/web-apps/current-work/multipage/embedded-content-1.html#attr-img-src
>
> "The src attribute must be present, and must contain a valid non-empty
> URL potentially surrounded by spaces referencing a non-interactive,
> optionally animated, image resource that is neither paged nor
> scripted.
>
> Images can thus be static bitmaps (e.g. PNGs, GIFs, JPEGs),
> single-page vector documents (single-page PDFs, XML files with an SVG
> root element), animated bitmaps (APNGs, animated GIFs), animated
> vector graphics (XML files with an SVG root element that use
> declarative SMIL animation), and so forth. However, this also
> precludes SVG files with script, multipage PDF files, interactive MNG
> files, HTML documents, plain text documents, and so forth."
>
> While there is text in the html5 definition which precludes scripts,
> it too doesn't explicitly limit the range to image/*, and in fact I
> believe since the PDF mime type is application/pdf, it's safe to say
> that browsers do render things which are not image/*.
>
> In testing, although Chrome, Safari, Opera, and Minefield (after bug
> 276431) support image/svg+xml today none support application/svg+xml.
> However, as Safari supports application/pdf, the cat's out of the bag
> on non image/ mime types.
>
> http://www.webwizardry.net/~timeless/svg/276431.html
>
>> All of the image formats that you are pointing out have an image mime
>> type.
>
> I should have listed PDF which doesn't, mea culpa -- It is in the
> HTML5 specification as a suggestion as noted above in this reply.
>
>> I am merely pointing out that to support ogg theora browsers
>> would need to support a video mime type in an <img> element.
>
> You didn't point that out, you suggested that instead servers would
> have to do content conversions.
>
>> I don't see that as the intention of the <img> element, in particular since
>> <img> elements do not have transport controls and the like.
>
> html5: "An img element represents an image.", that's all the proposal
> wants, an image, a non interactive image (possibly animated), and it's
> possible to decode an ogg video in a way which achieves this.
>
>> Otherwise, why did we create a <video> element in the first place.
>
> http://www.whatwg.org/specs/web-apps/current-work/multipage/video.html#video
>
> html5: "A video element is used for playing videos or movies."
> "User agents should provide controls to enable or disable the display
> of closed captions, audio description tracks, and other additional
> data associated with the video stream, though such features should,
> again, not interfere with the page's normal rendering.
>
> User agents may allow users to view the video content in manners more
> suitable to the user (e.g. full-screen or in an independent resizable
> window). As for the other user interface features, controls to enable
> this should not interfere with the page's normal rendering unless the
> user agent is exposing a user interface. In such an independent
> context, however, user agents may make full user interfaces visible,
> with, e.g., play, pause, seeking, and volume controls, even if the
> controls attribute is absent."
>
> Video offers video controls, the suggestions which you were presented
> were clearly instances where people just wanted animated frames
> without such controls.
>
>> So, I am just pointing out that with current <img> element
>> implementations and with the existing intentions of <img> elements (as
>> opposed to <video> elements), using a segment of Ogg Theora video as
>> defined through a media fragment URI will not work as an image
>> resource and will also not work as a video resource.
>
> In order to support media fragments, media fragment support would have
> to be implemented. This is obvious to everyone. Similarly, adding
> support for ogg in <img> would require adding support for ogg in
> <img>, just as adding support for svg in <img> requires adding support
> for svg in <img>. Of note, since SVG is already supported by most
> browsers, the incremental cost of adding svg support in <img> is
> relatively low (as demonstrated by
>  which adds it to
> Gecko).
>
> This is in contrast with the cost of adding media fragment support,
> which is essentially entirely new code. 

Re: [whatwg] media resources: addressing media fragments through URIs spec

2010-07-04 Thread Silvia Pfeiffer
On Mon, Jul 5, 2010 at 2:46 AM, Aryeh Gregor  wrote:
> On Sun, Jul 4, 2010 at 9:19 AM, Silvia Pfeiffer
>  wrote:
>> All of the image formats that you are pointing out have an image mime
>> type. I am merely pointing out that to support ogg theora browsers
>> would need to support a video mime type in an <img> element. I don't
>> see that as the intention of the <img> element, in particular since
>> <img> elements do not have transport controls and the like. Otherwise,
>> why did we create a <video> element in the first place.
>
> I'd expect that a video in <img> would behave like an animated GIF --
> no sound, no APIs to control playback, no browser-provided controls.
> You might want this sometimes, especially if you're only selecting one
> frame.  Animated images are conceptually different from videos, and
> there's no reason you couldn't support the same format for both <img>
> and <video>, with those different semantics.  It would be particularly
> useful to support video frames as images in places where <video> can't
> be used, like for the  attribute, CSS backgrounds, and
> so on.  The video MIME type does not conflict at all with allowing
> this kind of usage.
>
> So to cover this use-case, it would be good if there were a way of
> explicitly selecting one frame, which could be treated as a video that
> contains only one frame.  This might, in turn, be accepted by some
> browsers in places where they accept images.  You could do this by
> explicitly allowing syntax like #t=10,10, where the start point equals
> the end point, as selecting only one frame.  (But I guess this could
> be emulated by #t=10,10.001 or something, assuming the frame starts at
> exactly t=10.)

The issue with #t=10,10 is that the semantics of the interval are that
of a semi-open interval: the start point is in and the end point is
out. This has been the traditional understanding of an interval
related to video (e.g. RTSP defines it that way too). Thus,
"video.ogv#t=10,10" is like asking from byte range 50 to 50 - it's
simply empty.

Further, there are complications with extracting a single frame from a
video, since not every point in time will map onto a keyframe - most
will map onto inter frames, i.e. non-complete frames. Thus, if you ask
for #t=10,10.001, you will most likely receive a region around that
time segment that is a decodable video byte range - maybe a region
that maps to #t=9.02,12.4 - but the UA will know what it asked for and be
able to display only the actually requested part in the middle after
decoding all the bits.
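
For illustration, a small JavaScript sketch of the two points above - the
half-open [start, end) interval, and widening a requested range to a
keyframe-aligned, decodable one. The keyframe index is a made-up input;
neither the spec nor the thread defines such an API:

  // #t=start,end selects [start, end): the start is in, the end is out.
  function inFragment(t, start, end) {
    return t >= start && t < end;          // so #t=10,10 selects nothing
  }

  // Given the keyframe times of a video, widen [start, end) so that the
  // range can actually be decoded: begin at the last keyframe <= start.
  function decodableRange(keyframeTimes, start, end) {
    var from = keyframeTimes[0];
    for (var i = 0; i < keyframeTimes.length; i++) {
      if (keyframeTimes[i] <= start) from = keyframeTimes[i];
    }
    return { from: from, to: end };        // e.g. 10..10.001 -> 9.02..10.001
  }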

To repeat: I am not convinced it is a good idea to support the video
mime type in an <img> element, even if we change the semantics and
ignore the audio etc. I am not saying it is not possible; I am just
saying that I would not recommend it and would suggest rather doing it
on the server with some transcoding action - it is really not
difficult to install ffmpeg or mplayer on the server, develop a query
format that delivers keyframes from a particular time offset, and do a
keyframe dump on the server upon request. You might want to cache the
result, too.
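
For illustration, a minimal Node.js sketch of such a server-side keyframe
dump, shelling out to ffmpeg. The ?image=SECONDS query format, file name,
port, and the exact ffmpeg flags are assumptions for the example and may
need adjusting for a given ffmpeg version:

  var http = require('http');
  var url = require('url');
  var execFile = require('child_process').execFile;

  http.createServer(function (req, res) {
    var seconds = parseFloat(url.parse(req.url, true).query.image);
    if (isNaN(seconds)) { res.statusCode = 400; return res.end('bad time'); }

    // Seek to the requested offset and dump a single frame as PNG to stdout.
    execFile('ffmpeg',
      ['-ss', String(seconds), '-i', 'video.ogv',
       '-vframes', '1', '-f', 'image2pipe', '-vcodec', 'png', '-'],
      { encoding: 'buffer', maxBuffer: 10 * 1024 * 1024 },
      function (err, stdout) {
        if (err) { res.statusCode = 500; return res.end('ffmpeg failed'); }
        res.setHeader('Content-Type', 'image/png');
        res.end(stdout);                   // a cache layer would go here
      });
  }).listen(8000);                         // e.g. GET /video.ogv?image=64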

Cheers,
Silvia.


Re: [whatwg] media resources: addressing media fragments through URIs spec

2010-07-04 Thread timeless
On Sun, Jul 4, 2010 at 4:19 PM, Silvia Pfeiffer
 wrote:
> Note that I do understand the need and am trying to explain how it can
> be made to work. Also I am trying to show that what might look as the
> simplest approach won't work and why.

It doesn't have to be made to work that way, which is the point that
the others were trying to make.

http://www.w3.org/TR/REC-html40/struct/objects.html#adef-src-IMG
src = uri [CT] This attribute specifies the location of the image
resource. Examples of widely recognized image formats include GIF,
JPEG, and PNG.

Nothing in the definition here says "the img tag only allows mime
types of image/*"

http://www.whatwg.org/specs/web-apps/current-work/multipage/embedded-content-1.html#attr-img-src

"The src attribute must be present, and must contain a valid non-empty
URL potentially surrounded by spaces referencing a non-interactive,
optionally animated, image resource that is neither paged nor
scripted.

Images can thus be static bitmaps (e.g. PNGs, GIFs, JPEGs),
single-page vector documents (single-page PDFs, XML files with an SVG
root element), animated bitmaps (APNGs, animated GIFs), animated
vector graphics (XML files with an SVG root element that use
declarative SMIL animation), and so forth. However, this also
precludes SVG files with script, multipage PDF files, interactive MNG
files, HTML documents, plain text documents, and so forth."

While there is text in the html5 definition which precludes scripts,
it too doesn't explicitly limit the range to image/*, and in fact I
believe since the PDF mime type is application/pdf, it's safe to say
that browsers do render things which are not image/*.

In testing, although Chrome, Safari, Opera, and Minefield (after bug
276431) support image/svg+xml today, none support application/svg+xml.
However, as Safari supports application/pdf, the cat's out of the bag
on non-image/* mime types.

http://www.webwizardry.net/~timeless/svg/276431.html

> All of the image formats that you are pointing out have an image mime
> type.

I should have listed PDF, which doesn't - mea culpa. It is in the
HTML5 specification as a suggestion, as noted above in this reply.

> I am merely pointing out that to support ogg theora browsers
> would need to support a video mime type in an <img> element.

You didn't point that out, you suggested that instead servers would
have to do content conversions.

> I don't see that as the intention of the <img> element, in particular since
> <img> elements do not have transport controls and the like.

html5: "An img element represents an image.", that's all the proposal
wants, an image, a non interactive image (possibly animated), and it's
possible to decode an ogg video in a way which achieves this.

> Otherwise, why did we create a <video> element in the first place.

http://www.whatwg.org/specs/web-apps/current-work/multipage/video.html#video

html5: "A video element is used for playing videos or movies."
"User agents should provide controls to enable or disable the display
of closed captions, audio description tracks, and other additional
data associated with the video stream, though such features should,
again, not interfere with the page's normal rendering.

User agents may allow users to view the video content in manners more
suitable to the user (e.g. full-screen or in an independent resizable
window). As for the other user interface features, controls to enable
this should not interfere with the page's normal rendering unless the
user agent is exposing a user interface. In such an independent
context, however, user agents may make full user interfaces visible,
with, e.g., play, pause, seeking, and volume controls, even if the
controls attribute is absent."

Video offers video controls; the suggestions which were presented to you
were clearly instances where people just wanted animated frames
without such controls.

> So, I am just pointing out that with current <img> element
> implementations and with the existing intentions of <img> elements (as
> opposed to <video> elements), using a segment of Ogg Theora video as
> defined through a media fragment URI will not work as an image
> resource and will also not work as a video resource.

In order to support media fragments, media fragment support would have
to be implemented. This is obvious to everyone. Similarly, adding
support for ogg in <img> would require adding support for ogg in
<img>, just as adding support for svg in <img> requires adding support
for svg in <img>. Of note, since SVG is already supported by most
browsers, the incremental cost of adding svg support in <img> is
relatively low (as demonstrated by
 which adds it to
Gecko).

This is in contrast with the cost of adding media fragment support,
which is essentially entirely new code. But once it's there, the cost
of letting it work in an <img> tag would not be very high.

Again, I'm not saying it should be done, but you've chosen to ignore
how it could work and until your last reply suggested an alternat

Re: [whatwg] media resources: addressing media fragments through URIs spec

2010-07-04 Thread Aryeh Gregor
On Sun, Jul 4, 2010 at 9:19 AM, Silvia Pfeiffer
 wrote:
> All of the image formats that you are pointing out have an image mime
> type. I am merely pointing out that to support ogg theora browsers
> would need to support a video mime type in an <img> element. I don't
> see that as the intention of the <img> element, in particular since
> <img> elements do not have transport controls and the like. Otherwise,
> why did we create a <video> element in the first place.

I'd expect that a video in <img> would behave like an animated GIF --
no sound, no APIs to control playback, no browser-provided controls.
You might want this sometimes, especially if you're only selecting one
frame.  Animated images are conceptually different from videos, and
there's no reason you couldn't support the same format for both <img>
and <video>, with those different semantics.  It would be particularly
useful to support video frames as images in places where <video> can't
be used, like for the  attribute, CSS backgrounds, and
so on.  The video MIME type does not conflict at all with allowing
this kind of usage.

So to cover this use-case, it would be good if there were a way of
explicitly selecting one frame, which could be treated as a video that
contains only one frame.  This might, in turn, be accepted by some
browsers in places where they accept images.  You could do this by
explicitly allowing syntax like #t=10,10, where the start point equals
the end point, as selecting only one frame.  (But I guess this could
be emulated by #t=10,10.001 or something, assuming the frame starts at
exactly t=10.)


Re: [whatwg] media resources: addressing media fragments through URIs spec

2010-07-04 Thread Silvia Pfeiffer
On Sun, Jul 4, 2010 at 9:43 PM, timeless  wrote:
> On Sun, Jul 4, 2010 at 5:26 AM, silviapfeiffer1
>  wrote:
>> It doesn't actually matter what element the URI appears in - your
>> element has to deal with the data that it receives and if
>> "file.ogv#t=1:00,1:15" is an Ogg Theora segment out of a video, then
>> that is what the <img> element receives.
>
> right.
>
>> I am
>> not aware though of any <img> element implementation that can deal
>> with Ogg Theora video.
>
> That's changeable. And you seem to be totally ignoring that this is
> the thrust of someone else's request.
>
> Note that I'm not the one asking for it. Just trying to explain it to
> you since you seem to be doing a good job of totally missing the
> point.


Same here. ;-)

Note that I do understand the need and am trying to explain how it can
be made to work. Also I am trying to show that what might look like the
simplest approach won't work, and why.



>> If you are, however, asking to turn the Ogg Theora video into a APNG
>> or a animated GIF or even a MNG, there will need to be a transcoding
>> step at the server,
>
> No no.
>
> If a browser can decode a frame or sequence of frames from an ogg,
> then it can display them, and since it can display various image
> formats in <img> (jpg, gif, png, apng, potentially mng, and in future
> geckos SVG), then someone (not me!) is merely suggesting that ogg be
> another one, either as a single frame or an animated sequence.

All of the image formats that you are pointing out have an image mime
type. I am merely pointing out that to support ogg theora browsers
would need to support a video mime type in an <img> element. I don't
see that as the intention of the <img> element, in particular since
<img> elements do not have transport controls and the like. Otherwise,
why did we create a <video> element in the first place.

So, I am just pointing out that with current <img> element
implementations and with the existing intentions of <img> elements (as
opposed to <video> elements), using a segment of Ogg Theora video as
defined through a media fragment URI will not work as an image
resource and will also not work as a video resource.


Hope that clarifies it.

Regards,
Silvia.


Re: [whatwg] media resources: addressing media fragments through URIs spec

2010-07-04 Thread timeless
On Sun, Jul 4, 2010 at 5:26 AM, silviapfeiffer1
 wrote:
> It doesn't actually matter what element the URI appears in - your
> element has to deal with the data that it receives and if
> "file.ogv#t=1:00,1:15" is an Ogg Theora segment out of a video, then
> that is what the <img> element receives.

right.

> I am
> not aware though of any <img> element implementation that can deal
> with Ogg Theora video.

That's changeable. And you seem to be totally ignoring that this is
the thrust of someone else's request.

Note that I'm not the one asking for it. Just trying to explain it to
you since you seem to be doing a good job of totally missing the
point.

> If you are, however, asking to turn the Ogg Theora video into a APNG
> or a animated GIF or even a MNG, there will need to be a transcoding
> step at the server,

No no.

If a browser can decode a frame or sequence of frames from an ogg,
then it can display them, and since it can display various image
formats in <img> (jpg, gif, png, apng, potentially mng, and in future
Geckos, SVG), then someone (not me!) is merely suggesting that ogg be
another one, either as a single frame or an animated sequence.


Re: [whatwg] media resources: addressing media fragments through URIs spec

2010-07-03 Thread silviapfeiffer1
Hi Jonas,

You may also be interested in checking out the presentation at
http://www.w3.org/2008/WebVideo/Fragments/talks/2010-06-30-Jakub_Sendor-Media_Fragment_Firefox_Plugin.pdf
which has a screenshot in it of the Firefox plugin.

Will do the screencast this week.

Cheers,
Silvia.

On Fri, Jul 2, 2010 at 7:31 PM, Jonas Sicking  wrote:
> On Fri, Jul 2, 2010 at 1:55 AM, Silvia Pfeiffer
>  wrote:
>> Actually, a point in time is nothing - it's an empty set. You never
>> want to actually point to a point in time, but rather to either the
>> point in time and an interval after that point in time, or everything
>> from that point onwards. That's what these URIs represent.
>
> I'm not sure I agree, but to avoid meta-discussions I'll just wait to
> see the suggested UI.
>
> / Jonas
>


Re: [whatwg] media resources: addressing media fragments through URIs spec

2010-07-03 Thread silviapfeiffer1
On Sun, Jul 4, 2010 at 12:07 PM, timeless  wrote:
> On Fri, Jul 2, 2010 at 4:20 PM, Silvia Pfeiffer
>  wrote:
>> This latter one is already defined as a 5 sec video extract from the
>> full file.ogv - it's not possible to overload that with turning the
>> byte range into an animated gif.
>
> So, <img> isn't restricted to animated GIFs, Mozilla supports animated
> PNGs (APNG), and some browsers support MNG.
>
> To some extent, there's nothing wrong with using a video file as an
> image, or an animated image.
>
> I'm not absolutely certain i'd want to see it used this way, but I can
> understand why someone would ask for it.
>
>> You will also need to use transcoding
>> for this and thus will want to create a new URI query scheme.
>
> As <img> doesn't require that animations be GIF based, there's no need
> for transcoding. There would be a requirement for browsers to choose
> to support the proposed format, but ...


It doesn't actually matter what element the URI appears in - your
element has to deal with the data that it receives and if
"file.ogv#t=1:00,1:15" is an Ogg Theora segment out of a video, then
that is what the <img> element receives.

Of course you can call any file any name, so if "file.ogv" is an
animated gif or an animated png (contrary to what your file extension
indicates), then your <img> element may be able to deal with it. I am
not aware though of any <img> element implementation that can deal
with Ogg Theora video.

If you are, however, asking to turn the Ogg Theora video into an APNG
or an animated GIF or even an MNG, there will need to be a transcoding
step at the server, which means you have to change the mime type of
your resource "file.ogv", which means it cannot continue to be called
"file.ogv". So, if you really want to do it this way, you can only use
a URI query to transcode your resource into a different mime type and
would need to use a URI such as e.g.
"file.ogv?t=1:00,1:15&target=apng", so that the server knows what
mime type to return instead of the original one.

The gist of the issue is: A fragment ("#") addition to a URI is not
allowed to change the mime type of the resource, since it only points
to a subresource, i.e. basically only to a byte range of the original
resource. But you can do anything to your resource if you use a URI
query ("?").

Cheers,
Silvia.


Re: [whatwg] media resources: addressing media fragments through URIs spec

2010-07-03 Thread timeless
On Fri, Jul 2, 2010 at 4:20 PM, Silvia Pfeiffer
 wrote:
> This latter one is already defined as a 5 sec video extract from the
> full file.ogv - it's not possible to overload that with turning the
> byte range into an animated gif.

So, <img> isn't restricted to animated GIFs, Mozilla supports animated
PNGs (APNG), and some browsers support MNG.

To some extent, there's nothing wrong with using a video file as an
image, or an animated image.

I'm not absolutely certain i'd want to see it used this way, but I can
understand why someone would ask for it.

> You will also need to use transcoding
> for this and thus will want to create a new URI query scheme.

As <img> doesn't require that animations be GIF based, there's no need
for transcoding. There would be a requirement for browsers to choose
to support the proposed format, but ...


Re: [whatwg] media resources: addressing media fragments through URIs spec

2010-07-02 Thread Silvia Pfeiffer
The idea of returning a single image for a URI that points at a video
has indeed been discussed. It is not possible to do with a URI
fragment, since a URI fragment (#) can only return the same mime type
as the main URI. But the suggestion for this is to use a URI query.
Then, it is possible to return an image. A URI scheme such as
video.ogv?image=64 could be used to provide this (or anything else
that your server will make up and provide to clients). Since providing
an image in return for a video URI requires some form of
"transcoding", a URI query is the only way to realise this.

On Fri, Jul 2, 2010 at 8:18 PM, Marques Johansson  wrote:
> A point in time, if it relates to an I-frame, is very small set and it
> represents an image.
>
> It would be interesting to have
> 
>
> or animated images in the sense of:
> 

(Note: I'm sure you meant s/ogg/ogv/ so we can talk about video files.)

This latter one is already defined as a 5 sec video extract from the
full file.ogv - it's not possible to overload that by turning the
byte range into an animated gif. You will also need to use transcoding
for this and thus will want to create a new URI query scheme.


> I think the earlier post was looking to display video thumbnails using
> this sort of fragment notation.
>
> If the video wasn't being played I would hope that a browser would
> fetch the meta data first and then just seek the byte ranges for that
> fragment.

The whole media fragment URI spec is based on retrieving byte ranges.
I'd encourage you to read it and see if it matches your expectations.

Cheers,
Silvia.


> On Fri, Jul 2, 2010 at 4:55 AM, Silvia Pfeiffer
>  wrote:
>> Actually, a point in time is nothing - it's an empty set. You never
>> want to actually point to a point in time, but rather to either the
>> point in time and an interval after that point in time, or everything
>> from that point onwards. That's what these URIs represent.
>>
>> Cheers,
>> Silvia.
>>
>> On Fri, Jul 2, 2010 at 7:56 AM, Jonas Sicking  wrote:
>>> That would be great. I guess it's unclear to me how the UIs would differ for
>>>
>>> video.ogv#t=40,50
>>> and
>>> video.ogv#t=40
>>>
>>> In particular it seems strange to me that video.ogv#t=40 represents
>>> the whole range from the selected point to the end of the video, given
>>> that most commonly when wanting to point out a particular point in a
>>> video you actually just want to represent a point.
>>>
>>> / Jonas
>>>
>>> On Thu, Jul 1, 2010 at 2:46 AM, Silvia Pfeiffer
>>>  wrote:
 BTW: I will try and make a screencast of that firefox plugin, which
 should clarify things further. Stay tuned...
 Cheers,
 Silvia.


 On Thu, Jul 1, 2010 at 7:44 PM, Silvia Pfeiffer
  wrote:
> Hi Jonas,
>
> On Thu, Jul 1, 2010 at 4:41 AM, Jonas Sicking  wrote:
>> Hi Silvia,
>>
>> Back in may last year I brought [1] up the fact that there are two use
>> cases for temporal media fragments:
>>
>> 1. Skipping to a particular point in a longer resource, such as
>> wanting to start a video at a particular point while still allowing
>> seeking in the entire resource. This is currently supported by for
>> example YouTube [2]. It is also the model used for web pages where
>> including a fragment identifier only scrolls to a particular point,
>> while allowing the user to scroll to any point both before and after
>> the identified fragment.
>>
>> 2. Only displaying part of a video. For example out of a longer video
>> from a discussion panel, only displaying the part where a specific
>> topic is discussed.
>>
>> While there seemed to be agreement [3][4] that these are in fact two
>> separate use cases, it seems like the media fragments draft is only
>> attempting to address one. Additionally, it only addresses the one
>> that has the least precedence as far as current technologies on the
>> web goes.
>>
>> Was this an intentional omission? Is it planned to solve use case 1
>> above in a future revision?
>>
>> [1] 
>> http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2009-May/019596.html
>> [2] http://www.youtube.com/watch?v=fyQrKvc7_NU#t=201
>> [3] 
>> http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2009-May/019718.html
>> [4] 
>> http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2009-May/019721.html
>
>
> I think you may have misunderstood the specification. Use case 1 is
> indeed the main use case being addressed in the specification. There
> is a Firefox plugin implementation[1] of the specification that shows
> exactly use case 1 in a video element - a URI with a fragment such as
> video.ogv#t=40,50 is being included in a <video> element and the
> effect is that the video is displayed from 40s to 50s, but the
> transport bar (or controls) are still those of the complete resource,
> so you can still seek to any position.
>
> To

Re: [whatwg] media resources: addressing media fragments through URIs spec

2010-07-02 Thread Marques Johansson
A point in time, if it relates to an I-frame, is a very small set and it
represents an image.

It would be interesting to have


or animated images in the sense of:


I think the earlier post was looking to display video thumbnails using
this sort of fragment notation.

If the video wasn't being played I would hope that a browser would
fetch the meta data first and then just seek the byte ranges for that
fragment.

On Fri, Jul 2, 2010 at 4:55 AM, Silvia Pfeiffer
 wrote:
> Actually, a point in time is nothing - it's an empty set. You never
> want to actually point to a point in time, but rather to either the
> point in time and an interval after that point in time, or everything
> from that point onwards. That's what these URIs represent.
>
> Cheers,
> Silvia.
>
> On Fri, Jul 2, 2010 at 7:56 AM, Jonas Sicking  wrote:
>> That would be great. I guess it's unclear to me how the UIs would differ for
>>
>> video.ogv#t=40,50
>> and
>> video.ogv#t=40
>>
>> In particular it seems strange to me that video.ogv#t=40 represents
>> the whole range from the selected point to the end of the video, given
>> that most commonly when wanting to point out a particular point in a
>> video you actually just want to represent a point.
>>
>> / Jonas
>>
>> On Thu, Jul 1, 2010 at 2:46 AM, Silvia Pfeiffer
>>  wrote:
>>> BTW: I will try and make a screencast of that firefox plugin, which
>>> should clarify things further. Stay tuned...
>>> Cheers,
>>> Silvia.
>>>
>>>
>>> On Thu, Jul 1, 2010 at 7:44 PM, Silvia Pfeiffer
>>>  wrote:
 Hi Jonas,

 On Thu, Jul 1, 2010 at 4:41 AM, Jonas Sicking  wrote:
> Hi Silvia,
>
> Back in may last year I brought [1] up the fact that there are two use
> cases for temporal media fragments:
>
> 1. Skipping to a particular point in a longer resource, such as
> wanting to start a video at a particular point while still allowing
> seeking in the entire resource. This is currently supported by for
> example YouTube [2]. It is also the model used for web pages where
> including a fragment identifier only scrolls to a particular point,
> while allowing the user to scroll to any point both before and after
> the identified fragment.
>
> 2. Only displaying part of a video. For example out of a longer video
> from a discussion panel, only displaying the part where a specific
> topic is discussed.
>
> While there seemed to be agreement [3][4] that these are in fact two
> separate use cases, it seems like the media fragments draft is only
> attempting to address one. Additionally, it only addresses the one
> that has the least precedence as far as current technologies on the
> web goes.
>
> Was this an intentional omission? Is it planned to solve use case 1
> above in a future revision?
>
> [1] 
> http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2009-May/019596.html
> [2] http://www.youtube.com/watch?v=fyQrKvc7_NU#t=201
> [3] 
> http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2009-May/019718.html
> [4] 
> http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2009-May/019721.html


 I think you may have misunderstood the specification. Use case 1 is
 indeed the main use case being addressed in the specification. There
 is a Firefox plugin implementation[1] of the specification that shows
 exactly use case 1 in a video element - a URI with a fragment such as
 video.ogv#t=40,50 is being included in a <video> element and the
 effect is that the video is displayed from 40s to 50s, but the
 transport bar (or controls) are still those of the complete resource,
 so you can still seek to any position.

 To be sure, this is just a recommendation of how it is supposed to be
 implemented (see
 http://www.w3.org/TR/media-frags/#media-fragment-display). The group
 only defined what URIs look like that point to such a segment - it
 cannot prescribe what an application (such as a HTML document) does
 with the URI. I would propose that this discussion should be had about
 HTML5 and a sentence be added to the HTML5 spec on how UAs are
 expected to deal with such segments.

 Further, if you are indeed only interested in a subpart of the
 original media resource and want to completely blend out all context
 (i.e. all other bits of the media resource), you should be using the
 URI query addressing method instead of the URI fragment, e.g.
 video.ogv?t=40,50. This URI is supposed to create a new resource that
 consist only of the segment - it will only do so, of course, if your
 server supports this functionality.

 All of this is described in more detail in the spec [2]. If that is
 unclear or anything is confusing, please do point it out so it can be
 fixed.

 Best Regards,
 Silvia.



 [1] http://www.w3.org/2008/WebVideo/Fragments/code/plugin/ (expec

Re: [whatwg] media resources: addressing media fragments through URIs spec

2010-07-02 Thread Jonas Sicking
On Fri, Jul 2, 2010 at 1:55 AM, Silvia Pfeiffer
 wrote:
> Actually, a point in time is nothing - it's an empty set. You never
> want to actually point to a point in time, but rather to either the
> point in time and an interval after that point in time, or everything
> from that point onwards. That's what these URIs represent.

I'm not sure I agree, but to avoid meta-discussions I'll just wait to
see the suggested UI.

/ Jonas


Re: [whatwg] media resources: addressing media fragments through URIs spec

2010-07-02 Thread Silvia Pfeiffer
Actually, a point in time is nothing - it's an empty set. You never
want to actually point to a point in time, but rather to either the
point in time and an interval after that point in time, or everything
from that point onwards. That's what these URIs represent.

Cheers,
Silvia.

On Fri, Jul 2, 2010 at 7:56 AM, Jonas Sicking  wrote:
> That would be great. I guess it's unclear to me how the UIs would differ for
>
> video.ogv#t=40,50
> and
> video.ogv#t=40
>
> In particular it seems strange to me that video.ogv#t=40 represents
> the whole range from the selected point to the end of the video, given
> that most commonly when wanting to point out a particular point in a
> video you actually just want to represent a point.
>
> / Jonas
>
> On Thu, Jul 1, 2010 at 2:46 AM, Silvia Pfeiffer
>  wrote:
>> BTW: I will try and make a screencast of that firefox plugin, which
>> should clarify things further. Stay tuned...
>> Cheers,
>> Silvia.
>>
>>
>> On Thu, Jul 1, 2010 at 7:44 PM, Silvia Pfeiffer
>>  wrote:
>>> Hi Jonas,
>>>
>>> On Thu, Jul 1, 2010 at 4:41 AM, Jonas Sicking  wrote:
 Hi Silvia,

 Back in may last year I brought [1] up the fact that there are two use
 cases for temporal media fragments:

 1. Skipping to a particular point in a longer resource, such as
 wanting to start a video at a particular point while still allowing
 seeking in the entire resource. This is currently supported by for
 example YouTube [2]. It is also the model used for web pages where
 including a fragment identifier only scrolls to a particular point,
 while allowing the user to scroll to any point both before and after
 the identified fragment.

 2. Only displaying part of a video. For example out of a longer video
 from a discussion panel, only displaying the part where a specific
 topic is discussed.

 While there seemed to be agreement [3][4] that these are in fact two
 separate use cases, it seems like the media fragments draft is only
 attempting to address one. Additionally, it only addresses the one
 that has the least precedence as far as current technologies on the
 web goes.

 Was this an intentional omission? Is it planned to solve use case 1
 above in a future revision?

 [1] 
 http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2009-May/019596.html
 [2] http://www.youtube.com/watch?v=fyQrKvc7_NU#t=201
 [3] 
 http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2009-May/019718.html
 [4] 
 http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2009-May/019721.html
>>>
>>>
>>> I think you may have misunderstood the specification. Use case 1 is
>>> indeed the main use case being addressed in the specification. There
>>> is a Firefox plugin implementation[1] of the specification that shows
>>> exactly use case 1 in a video element - a URI with a fragment such as
>>> video.ogv#t=40,50 is being included in a <video> element and the
>>> effect is that the video is displayed from 40s to 50s, but the
>>> transport bar (or controls) are still those of the complete resource,
>>> so you can still seek to any position.
>>>
>>> To be sure, this is just a recommendation of how it is supposed to be
>>> implemented (see
>>> http://www.w3.org/TR/media-frags/#media-fragment-display). The group
>>> only defined what URIs look like that point to such a segment - it
>>> cannot prescribe what an application (such as a HTML document) does
>>> with the URI. I would propose that this discussion should be had about
>>> HTML5 and a sentence be added to the HTML5 spec on how UAs are
>>> expected to deal with such segments.
>>>
>>> Further, if you are indeed only interested in a subpart of the
>>> original media resource and want to completely blend out all context
>>> (i.e. all other bits of the media resource), you should be using the
>>> URI query addressing method instead of the URI fragment, e.g.
>>> video.ogv?t=40,50. This URI is supposed to create a new resource that
>>> consist only of the segment - it will only do so, of course, if your
>>> server supports this functionality.
>>>
>>> All of this is described in more detail in the spec [2]. If that is
>>> unclear or anything is confusing, please do point it out so it can be
>>> fixed.
>>>
>>> Best Regards,
>>> Silvia.
>>>
>>>
>>>
>>> [1] http://www.w3.org/2008/WebVideo/Fragments/code/plugin/ (expect some 
>>> bugs)
>>> [2] http://www.w3.org/TR/media-frags/
>>>
>>
>


Re: [whatwg] media resources: addressing media fragments through URIs spec

2010-07-01 Thread Jonas Sicking
That would be great. I guess it's unclear to me how the UIs would differ for

video.ogv#t=40,50
and
video.ogv#t=40

In particular it seems strange to me that video.ogv#t=40 represents
the whole range from the selected point to the end of the video, given
that most commonly when wanting to point out a particular point in a
video you actually just want to represent a point.

/ Jonas

On Thu, Jul 1, 2010 at 2:46 AM, Silvia Pfeiffer
 wrote:
> BTW: I will try and make a screencast of that firefox plugin, which
> should clarify things further. Stay tuned...
> Cheers,
> Silvia.
>
>
> On Thu, Jul 1, 2010 at 7:44 PM, Silvia Pfeiffer
>  wrote:
>> Hi Jonas,
>>
>> On Thu, Jul 1, 2010 at 4:41 AM, Jonas Sicking  wrote:
>>> Hi Silvia,
>>>
>>> Back in may last year I brought [1] up the fact that there are two use
>>> cases for temporal media fragments:
>>>
>>> 1. Skipping to a particular point in a longer resource, such as
>>> wanting to start a video at a particular point while still allowing
>>> seeking in the entire resource. This is currently supported by for
>>> example YouTube [2]. It is also the model used for web pages where
>>> including a fragment identifier only scrolls to a particular point,
>>> while allowing the user to scroll to any point both before and after
>>> the identified fragment.
>>>
>>> 2. Only displaying part of a video. For example out of a longer video
>>> from a discussion panel, only displaying the part where a specific
>>> topic is discussed.
>>>
>>> While there seemed to be agreement [3][4] that these are in fact two
>>> separate use cases, it seems like the media fragments draft is only
>>> attempting to address one. Additionally, it only addresses the one
>>> that has the least precedence as far as current technologies on the
>>> web goes.
>>>
>>> Was this an intentional omission? Is it planned to solve use case 1
>>> above in a future revision?
>>>
>>> [1] http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2009-May/019596.html
>>> [2] http://www.youtube.com/watch?v=fyQrKvc7_NU#t=201
>>> [3] http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2009-May/019718.html
>>> [4] http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2009-May/019721.html
>>
>>
>> I think you may have misunderstood the specification. Use case 1 is
>> indeed the main use case being addressed in the specification. There
>> is a Firefox plugin implementation[1] of the specification that shows
>> exactly use case 1 in a video element - a URI with a fragment such as
>> video.ogv#t=40,50 is being included in a <video> element and the
>> effect is that the video is displayed from 40s to 50s, but the
>> transport bar (or controls) are still those of the complete resource,
>> so you can still seek to any position.
>>
>> To be sure, this is just a recommendation of how it is supposed to be
>> implemented (see
>> http://www.w3.org/TR/media-frags/#media-fragment-display). The group
>> only defined what URIs look like that point to such a segment - it
>> cannot prescribe what an application (such as a HTML document) does
>> with the URI. I would propose that this discussion should be had about
>> HTML5 and a sentence be added to the HTML5 spec on how UAs are
>> expected to deal with such segments.
>>
>> Further, if you are indeed only interested in a subpart of the
>> original media resource and want to completely blend out all context
>> (i.e. all other bits of the media resource), you should be using the
>> URI query addressing method instead of the URI fragment, e.g.
>> video.ogv?t=40,50. This URI is supposed to create a new resource that
>> consist only of the segment - it will only do so, of course, if your
>> server supports this functionality.
>>
>> All of this is described in more detail in the spec [2]. If that is
>> unclear or anything is confusing, please do point it out so it can be
>> fixed.
>>
>> Best Regards,
>> Silvia.
>>
>>
>>
>> [1] http://www.w3.org/2008/WebVideo/Fragments/code/plugin/ (expect some bugs)
>> [2] http://www.w3.org/TR/media-frags/
>>
>


Re: [whatwg] media resources: addressing media fragments through URIs spec

2010-07-01 Thread Silvia Pfeiffer
BTW: I will try and make a screencast of that firefox plugin, which
should clarify things further. Stay tuned...
Cheers,
Silvia.


On Thu, Jul 1, 2010 at 7:44 PM, Silvia Pfeiffer
 wrote:
> Hi Jonas,
>
> On Thu, Jul 1, 2010 at 4:41 AM, Jonas Sicking  wrote:
>> Hi Silvia,
>>
>> Back in may last year I brought [1] up the fact that there are two use
>> cases for temporal media fragments:
>>
>> 1. Skipping to a particular point in a longer resource, such as
>> wanting to start a video at a particular point while still allowing
>> seeking in the entire resource. This is currently supported by for
>> example YouTube [2]. It is also the model used for web pages where
>> including a fragment identifier only scrolls to a particular point,
>> while allowing the user to scroll to any point both before and after
>> the identified fragment.
>>
>> 2. Only displaying part of a video. For example out of a longer video
>> from a discussion panel, only displaying the part where a specific
>> topic is discussed.
>>
>> While there seemed to be agreement [3][4] that these are in fact two
>> separate use cases, it seems like the media fragments draft is only
>> attempting to address one. Additionally, it only addresses the one
>> that has the least precedence as far as current technologies on the
>> web goes.
>>
>> Was this an intentional omission? Is it planned to solve use case 1
>> above in a future revision?
>>
>> [1] http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2009-May/019596.html
>> [2] http://www.youtube.com/watch?v=fyQrKvc7_NU#t=201
>> [3] http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2009-May/019718.html
>> [4] http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2009-May/019721.html
>
>
> I think you may have misunderstood the specification. Use case 1 is
> indeed the main use case being addressed in the specification. There
> is a Firefox plugin implementation[1] of the specification that shows
> exactly use case 1 in a video element - a URI with a fragment such as
> video.ogv#t=40,50 is being included in a <video> element and the
> effect is that the video is displayed from 40s to 50s, but the
> transport bar (or controls) are still those of the complete resource,
> so you can still seek to any position.
>
> To be sure, this is just a recommendation of how it is supposed to be
> implemented (see
> http://www.w3.org/TR/media-frags/#media-fragment-display). The group
> only defined what URIs look like that point to such a segment - it
> cannot prescribe what an application (such as a HTML document) does
> with the URI. I would propose that this discussion should be had about
> HTML5 and a sentence be added to the HTML5 spec on how UAs are
> expected to deal with such segments.
>
> Further, if you are indeed only interested in a subpart of the
> original media resource and want to completely blend out all context
> (i.e. all other bits of the media resource), you should be using the
> URI query addressing method instead of the URI fragment, e.g.
> video.ogv?t=40,50. This URI is supposed to create a new resource that
> consist only of the segment - it will only do so, of course, if your
> server supports this functionality.
>
> All of this is described in more detail in the spec [2]. If that is
> unclear or anything is confusing, please do point it out so it can be
> fixed.
>
> Best Regards,
> Silvia.
>
>
>
> [1] http://www.w3.org/2008/WebVideo/Fragments/code/plugin/ (expect some bugs)
> [2] http://www.w3.org/TR/media-frags/
>


Re: [whatwg] media resources: addressing media fragments through URIs spec

2010-07-01 Thread Silvia Pfeiffer
Hi Jonas,

On Thu, Jul 1, 2010 at 4:41 AM, Jonas Sicking  wrote:
> Hi Silvia,
>
> Back in may last year I brought [1] up the fact that there are two use
> cases for temporal media fragments:
>
> 1. Skipping to a particular point in a longer resource, such as
> wanting to start a video at a particular point while still allowing
> seeking in the entire resource. This is currently supported by for
> example YouTube [2]. It is also the model used for web pages where
> including a fragment identifier only scrolls to a particular point,
> while allowing the user to scroll to any point both before and after
> the identified fragment.
>
> 2. Only displaying part of a video. For example out of a longer video
> from a discussion panel, only displaying the part where a specific
> topic is discussed.
>
> While there seemed to be agreement [3][4] that these are in fact two
> separate use cases, it seems like the media fragments draft is only
> attempting to address one. Additionally, it only addresses the one
> that has the least precedence as far as current technologies on the
> web goes.
>
> Was this an intentional omission? Is it planned to solve use case 1
> above in a future revision?
>
> [1] http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2009-May/019596.html
> [2] http://www.youtube.com/watch?v=fyQrKvc7_NU#t=201
> [3] http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2009-May/019718.html
> [4] http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2009-May/019721.html


I think you may have misunderstood the specification. Use case 1 is
indeed the main use case being addressed in the specification. There
is a Firefox plugin implementation[1] of the specification that shows
exactly use case 1 in a video element - a URI with a fragment such as
video.ogv#t=40,50 is being included in a <video> element and the
effect is that the video is displayed from 40s to 50s, but the
transport bar (or controls) are still those of the complete resource,
so you can still seek to any position.
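
For illustration, a rough JavaScript sketch of that behaviour for UAs
that do not yet implement media fragments natively: seek to the fragment
start, stop once at its end, but leave the controls and seeking alone.
The element id and the hard-coded 40/50 values stand in for a parsed
#t=40,50 fragment:

  var video = document.getElementById('v');
  var start = 40, end = 50;                // taken from the fragment
  var stopAtEnd = true;

  video.addEventListener('loadedmetadata', function () {
    video.currentTime = start;             // focus playback on the fragment
  });

  video.addEventListener('timeupdate', function () {
    if (stopAtEnd && video.currentTime >= end) {
      stopAtEnd = false;
      video.pause();                       // stop once at the fragment end;
    }                                      // the full transport bar stays
  });                                      // usable for seeking anywhere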

To be sure, this is just a recommendation of how it is supposed to be
implemented (see
http://www.w3.org/TR/media-frags/#media-fragment-display). The group
only defined what URIs look like that point to such a segment - it
cannot prescribe what an application (such as a HTML document) does
with the URI. I would propose that this discussion should be had about
HTML5 and a sentence be added to the HTML5 spec on how UAs are
expected to deal with such segments.

Further, if you are indeed only interested in a subpart of the
original media resource and want to completely blend out all context
(i.e. all other bits of the media resource), you should be using the
URI query addressing method instead of the URI fragment, e.g.
video.ogv?t=40,50. This URI is supposed to create a new resource that
consists only of the segment - it will only do so, of course, if your
server supports this functionality.

All of this is described in more detail in the spec [2]. If that is
unclear or anything is confusing, please do point it out so it can be
fixed.

Best Regards,
Silvia.



[1] http://www.w3.org/2008/WebVideo/Fragments/code/plugin/ (expect some bugs)
[2] http://www.w3.org/TR/media-frags/


Re: [whatwg] media resources: addressing media fragments through URIs spec

2010-06-30 Thread Jonas Sicking
Hi Silvia,

Back in May last year I brought up [1] the fact that there are two use
cases for temporal media fragments:

1. Skipping to a particular point in a longer resource, such as
wanting to start a video at a particular point while still allowing
seeking in the entire resource. This is currently supported by for
example YouTube [2]. It is also the model used for web pages where
including a fragment identifier only scrolls to a particular point,
while allowing the user to scroll to any point both before and after
the identified fragment.

2. Only displaying part of a video. For example out of a longer video
from a discussion panel, only displaying the part where a specific
topic is discussed.

While there seemed to be agreement [3][4] that these are in fact two
separate use cases, it seems like the media fragments draft is only
attempting to address one. Additionally, it only addresses the one
that has the least precedent as far as current technologies on the
web go.

Was this an intentional omission? Is it planned to solve use case 1
above in a future revision?

[1] http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2009-May/019596.html
[2] http://www.youtube.com/watch?v=fyQrKvc7_NU#t=201
[3] http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2009-May/019718.html
[4] http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2009-May/019721.html

/ Jonas

On Wed, Jun 30, 2010 at 5:11 AM, Silvia Pfeiffer
 wrote:
> Hi all,
>
> The W3C WG for media fragments has published a Last Call Working Draft
> at http://www.w3.org/TR/media-frags/ .
>
> The idea of the spec is to enable addressing sub-parts of audio-visual
> resources through URIs, such as http://example.com/video.ogv?t=10,40
> to address seconds 10-40 out of video.ogv. This is relevant for use in
> the <video> and <audio> elements and can help focus the playback to a
> specific subpart.
>
> This specification will provide "deep linking" as a standard
> specification for media resources.
>
> Incidentally, such functionality is also available at YouTube, see
> http://www.google.com/support/youtube/bin/answer.py?hl=en&answer=116618
> .
>
> "The Working Group encourages feedback about this document by
> developers and researchers who have interest in multimedia content
> addressing and retrieval on the web and by developers and researchers
> who have interest in Semantic Web technologies for content description
> and annotation. Please send comments about this document to
> public-media-fragm...@w3.org mailing list (public archive) by 27
> August 2010."
>
> Cheers,
> Silvia.
>


Re: [whatwg] media resources: addressing media fragments through URIs spec

2010-06-30 Thread Silvia Pfeiffer
On Thu, Jul 1, 2010 at 12:09 AM, Philip Jägenstedt  wrote:
> On Wed, 30 Jun 2010 14:11:44 +0200, Silvia Pfeiffer
>  wrote:
>
>> Hi all,
>>
>> The W3C WG for media fragments has published a Last Call Working Draft
>> at http://www.w3.org/TR/media-frags/ .
>>
>> The idea of the spec is to enable addressing sub-parts of audio-visual
>> resources through URIs, such as http://example.com/video.ogv?t=10,40
>> to address seconds 10-40 out of video.ogv. This is relevant for use in
>> the <video> and <audio> elements and can help focus the playback to a
>> specific subpart.
>>
>> This specification will provide "deep linking" as a standard
>> specification for media resources.
>>
>> Incidentally, such functionality is also available at YouTube, see
>> http://www.google.com/support/youtube/bin/answer.py?hl=en&answer=116618
>> .
>>
>> "The Working Group encourages feedback about this document by
>> developers and researchers who have interest in multimedia content
>> addressing and retrieval on the web and by developers and researchers
>> who have interest in Semantic Web technologies for content description
>> and annotation. Please send comments about this document to
>> public-media-fragm...@w3.org mailing list (public archive) by 27
>> August 2010."
>>
>> Cheers,
>> Silvia.
>>
>
> I'd like to chime in here and encourage everybody to review the spec. I have
> been participating in the MF WG under the assumption that this is something
> we will couple with <video>. From an implementor perspective, the main
> (blocking) issue with the spec is that it doesn't define how to parse a MF
> URI, so I hope other potential implementors and spec-junkies will pay some
> attention to this and comment as appropriate.
>
> P.S. a more relevant example for browsers would be <video src="video.ogv#t=10,40">, as MF in the query component is strictly a
> server-side matter.
>
> --
> Philip Jägenstedt
> Core Developer
> Opera Software
>

Ah, bummer, apologies. That query slipped past me - I meant to write
a URI fragment. Thanks for pointing it out, Philip.

Other examples are
spatial fragments: <video src="video.ogv#xywh=...">, or
track fragments: <video src="video.ogv#track=...">, or
named fragments: <video src="video.ogv#id=...">,

but the temporal fragments are the most important ones.

Cheers,
Silvia.


Re: [whatwg] media resources: addressing media fragments through URIs spec

2010-06-30 Thread Philip Jägenstedt
On Wed, 30 Jun 2010 14:11:44 +0200, Silvia Pfeiffer  
 wrote:



Hi all,

The W3C WG for media fragments has published a Last Call Working Draft
at http://www.w3.org/TR/media-frags/ .

The idea of the spec is to enable addressing sub-parts of audio-visual
resources through URIs, such as http://example.com/video.ogv?t=10,40
to address seconds 10-40 out of video.ogv. This is relevant for use in
the <video> and <audio> elements and can help focus the playback to a
specific subpart.

This specification will provide "deep linking" as a standard
specification for media resources.

Incidentally, such functionality is also available at YouTube, see
http://www.google.com/support/youtube/bin/answer.py?hl=en&answer=116618
.

"The Working Group encourages feedback about this document by
developers and researchers who have interest in multimedia content
addressing and retrieval on the web and by developers and researchers
who have interest in Semantic Web technologies for content description
and annotation. Please send comments about this document to
public-media-fragm...@w3.org mailing list (public archive) by 27
August 2010."

Cheers,
Silvia.



I'd like to chime in here and encourage everybody to review the spec. I  
have been participating in the MF WG under the assumption that this is  
something we will couple with <video>. From an implementor perspective,
the main (blocking) issue with the spec is that it doesn't define how to  
parse a MF URI, so I hope other potential implementors and spec-junkies  
will pay some attention to this and comment as appropriate.
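
For illustration only (as noted above, the draft does not yet define
this), a naive JavaScript sketch of parsing just the temporal dimension
of a fragment such as #t=10,40, #t=,40 or #t=10 - plain NPT seconds only,
ignoring hh:mm:ss values and npt:/smpte prefixes:

  function parseTemporalFragment(uri) {
    var hash = uri.split('#')[1];
    if (!hash) return null;

    var pairs = hash.split('&');
    for (var i = 0; i < pairs.length; i++) {
      var kv = pairs[i].split('=');
      if (kv[0] !== 't') continue;

      var range = (kv[1] || '').split(',');
      var start = range[0] === '' ? 0 : parseFloat(range[0]);
      var end = range.length > 1 ? parseFloat(range[1]) : Infinity;
      if (isNaN(start) || isNaN(end)) return null;  // give up on anything fancier
      return { start: start, end: end };
    }
    return null;
  }

  // parseTemporalFragment('video.ogv#t=10,40')  ->  { start: 10, end: 40 }
  // parseTemporalFragment('video.ogv#t=,40')    ->  { start: 0,  end: 40 }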


P.S. a more relevant example for browsers would be <video src="video.ogv#t=10,40">, as MF in the query component is strictly a
server-side matter.


--
Philip Jägenstedt
Core Developer
Opera Software