[Wikitech-l] w...@home Extension

2009-07-31 Thread Michael Dale
Want to point out the working prototype of the w...@home extension. 
Presently it focuses on a system for transcoding uploaded media to free 
formats, but will also be used for "flattening sequences" and maybe 
other things in the future ;)

It's still rough around the edges ... it presently features:
* Support for uploading non-free media assets.

* Putting those non-free media assets into a jobs table and splitting 
the transcode job into $wgChunkDuration-length encoding jobs. (Each 
piece is uploaded and then reassembled on the server, so that big 
transcoding jobs can be distributed to as many clients as are 
participating.)

* It supports multiple derivatives for different resolutions based on 
the requested size.
** In the future I will add a hook for oggHandler to use that as well, 
since a big usability issue right now is users embedding HD or high-res 
Ogg videos into a small video space in an article ... and naturally it 
performs poorly.

* It also features a JavaScript interface for clients to query for new 
jobs, get the job, download the asset, do the transcode and upload it (all 
through an API module, so people could build a client as a shell script 
if they wanted; a rough sketch of such a client follows this list).
** In the future the interface will support preferences, basic 
statistics and more options like "turn on w...@home every time I visit 
Wikipedia" or "only get jobs while I am away from my computer".

* I try to handle derivatives consistently with the "file"/media 
handling system. So right now your uploaded non-free-format file will be 
linked to on the file detail page and via the API calls. We should 
probably limit client exposure to non-free formats. Obviously the files 
have to be on a public URL to be transcoded, but the interfaces for 
embedding and the stream detail page should link to the free-format 
version at all times.

* I tie transcoded chunks to user IDs; this makes it easier to disable 
bad participants.
** I still need to add an interface to delete a derivative if someone 
flags it.

* It supports $wgJobTimeOut for reassigning jobs that aren't completed 
within $wgJobTimeOut.
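
For illustration, a client really could be just a shell script along 
these lines (a rough sketch only: the api.php action and parameter names 
below are made-up placeholders, not the extension's actual interface, and 
the ffmpeg2theora flags assume a reasonably current build):

  #!/bin/bash
  # hypothetical w...@home shell client: poll for a job, fetch the asset,
  # transcode the chunk to Theora, and post the result back through the API
  API="http://commons.wikimedia.org/w/api.php"      # placeholder endpoint
  while true; do
    # ask for a job description; action/field names here are invented
    job=$(curl -s "$API?action=wikiathome&wajob=get&format=json")
    src=$(echo "$job" | sed -n 's/.*"source_url":"\([^"]*\)".*/\1/p')
    id=$(echo "$job"  | sed -n 's/.*"job_id":"\([^"]*\)".*/\1/p')
    [ -z "$src" ] && { sleep 60; continue; }        # no work right now
    curl -s -o chunk_src "$src"                     # download the asset
    ffmpeg2theora chunk_src -o chunk_out.ogg        # do the transcode
    # upload the finished chunk; again, field names are placeholders
    curl -s -F "action=wikiathome" -F "wajob=done" \
         -F "job_id=$id" -F "file=@chunk_out.ogg" "$API"
  done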

This was hacked together over the past few days, so it's by no means 
production-ready ... but it should get there soon ;)  Feedback is welcome. 
It's in SVN at: /trunk/extensions/WikiAtHome/

peace,
michael




Re: [Wikitech-l] w...@home Extension

2009-07-31 Thread Gregory Maxwell
On Fri, Jul 31, 2009 at 9:51 PM, Michael Dale wrote:
> the transcode job into $wgChunkDuration length encoding jobs. ( each
> pieces is uploaded then reassembled on the server. that way big
> transcoding jobs can be distributed to as many clients that are
> participating )

This pretty much breaks the 'instant' gratification you currently get on upload.


The segmenting is going to significantly harm compression efficiency for
any inter-frame-coded output format unless you perform a two-pass
encode with the first pass on the server to do keyframe location
detection, because the stream will restart at the cut points.

> * I tie transcoded chunks to user ids this makes it easier to disable
> bad participants.

Tyler Durden will be sad.

But this means that only logged in users will participate, no?



Re: [Wikitech-l] w...@home Extension

2009-07-31 Thread Michael Dale
Gregory Maxwell wrote:
> On Fri, Jul 31, 2009 at 9:51 PM, Michael Dale wrote:
>   
>> the transcode job into $wgChunkDuration length encoding jobs. ( each
>> pieces is uploaded then reassembled on the server. that way big
>> transcoding jobs can be distributed to as many clients that are
>> participating )
>> 
>
> This pretty much breaks the 'instant' gratification you currently get on 
> upload.
>   

true... people will never upload to a site without instant gratification ( 
cough YouTube cough ) ...

At any rate it's not replacing Firefogg, which has instant 
gratification at the point of upload; it's ~just another option~...

Also I should add that this w...@home system just gives us distributed 
transcoding as a bonus side effect ... its real purpose will be to 
distribute the flattening of edited sequences, so that 1) IE users can 
view them, 2) we can use effects that for the time being are too 
computationally expensive to render out in real time in JavaScript, 3) 
you can download and play the sequences with normal video players, and 4) 
we can transclude sequences and use templates, with changes propagating 
to flattened versions rendered on the w...@home distributed computer.

While presently many machines in the Wikimedia internal server cluster 
grind away at parsing and rendering HTML from wikitext, the situation is 
many orders of magnitude more costly when using transclusion and templates 
with video ... so it's good to get this type of extension out in the wild 
and warmed up for the near future ;)


> The segmenting is going to significant harm compression efficiency for
> any inter-frame coded output format unless you perform a two pass
> encode with the first past on the server to do keyframe location
> detection.  Because the stream will restart at cut points.
>
>   

also true. Good thing theora-svn now supports two-pass encoding :) ... 
but an extra keyframe every 30 seconds probably won't hurt your 
compression efficiency too much ... versus the gain of having your 
hour-long interview transcode a hundred times faster than non-distributed 
conversion (almost instant gratification). Once the cost of generating 
a derivative is on par with the cost of sending out the clip a few times 
for "viewing", lots of things become possible.
>> * I tie transcoded chunks to user ids this makes it easier to disable
>> bad participants.
>> 
>
> Tyler Durden will be sad.
>
> But this means that only logged in users will participate, no?
>   

true... You also have to log in to upload to Commons. It will make 
life easier and make abuse of the system more difficult ... plus it can 
act as a motivating factor with distribu...@home teams, personal stats 
and all that jazz, just as people like to have their name show up on the 
"donate" wall when making small financial contributions.

peace,
--michael



Re: [Wikitech-l] w...@home Extension

2009-07-31 Thread Gregory Maxwell
On Sat, Aug 1, 2009 at 12:13 AM, Michael Dale wrote:
> true... people will never upload to site without instant gratification (
> cough youtube cough ) ...

Hm? I just tried uploading to YouTube and there was a video up right
away. Other sizes followed within a minute or two.

> At any rate its not replacing the firefogg  that has instant
> gratification at point of upload its ~just another option~...

As another option— okay. But video support on the site stinks because
of the lack of server-side 'thumbnailing' for video.  People upload
multi-megabit videos, which is a good thing for editing, but then they
don't play well for most users.

Just doing it locally is hard— we've had failed SoC projects for this—
and doing it distributed has all the local complexity and then some.

> Also I should add that this w...@home system just gives us distributed
> transcoding as a bonus side effect ... its real purpose will be to
> distribute the flattening of edited sequences. So that 1) IE users can
> view them 2) We can use effects that for the time being are too
> computationally expensive to render out in real-time in javascript 3)
> you can download and play the sequences with normal video players and 4)
> we can transclude sequences and use templates with changes propagating
> to flattened versions rendered on the w...@home distributed computer

I'm confused as to why this isn't being done locally at Wikimedia.
Creating some whole distributed thing seems to be trading off
something inexpensive (machine cycles) for something there is less
supply of— skilled developer time.  Processing power is really
inexpensive.

Some old copy of ffmpeg2theora on a single core of my Core 2 desktop
processes a 352x288 input video at around 100 Mbit/sec (input video
consumption rate). Surely the time and cost required to send a bunch
of source material to remote hosts is going to offset whatever benefit
this offers.
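
(That figure is easy to reproduce: wall-clock an encode and divide the
input file size by the elapsed time; the file name here is just a
placeholder.)

  # rough throughput check for a single encoder process
  time ffmpeg2theora sample_352x288.avi -o /dev/null
  ls -l sample_352x288.avi   # bytes in / seconds elapsed = consumption rate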

We're also creating a whole additional layer of cost in that someone
has to police the results.

Perhaps my Tyler Durden reference was too indirect:

* Create a new account
* Splice some penises 30 minutes into some talking-head video
* Extreme lulz.

Tracking down these instances and blocking these users seems like it
would be a full-time job for a couple of people, and it would only be
made worse if the naughtiness could be targeted at particular
resolutions or fallbacks (making it less likely that clueful people
will see the vandalism).


> While presently many machines in the wikimedia internal server cluster
> grind away at parsing and rendering html from wiki-text the situation is
> many orders of magnitude more costly with using transclution and temples
> with video ... so its good to get this type of extension out in the wild
> and warmed up for the near future ;)

In terms of work per byte of input the wikitext parser is thousands of
times slower than the Theora encoder. Go go inefficient software. As a
result the difference may be less than many would assume.

Once you factor in the ratio of video to non-video content for the
foreseeable future, this comes off looking like a time-wasting
boondoggle.

Unless the basic functionality— like downsampled videos that people
can actually play— is created I can't see there ever being a time
where some great distributed thing will do any good at all.

>> The segmenting is going to significant harm compression efficiency for
>> any inter-frame coded output format unless you perform a two pass
>> encode with the first past on the server to do keyframe location
>> detection.  Because the stream will restart at cut points.
>
> also true. Good thing theora-svn now supports two pass encoding :) ...

Yeah, great, except doing the first pass for segmentation is pretty
similar in computational cost to simply doing a one-pass encode of
the video.

> but an extra key frame every 30 seconds properly wont hurt your
> compression efficiency too much..

It's not just about keyframe locations— if you encode separately and
then merge, you lose the ability to provide continuous rate control. So
there would be large bitrate spikes at the splice intervals, which will
stall streaming for anyone without significantly more bandwidth than
the clip's nominal rate.

> vs the gain of having your hour long
> interview trans-code a hundred times faster than non-distributed
> conversion.  (almost instant gratification)

Well tuned, you can expect a distributed system to improve throughput
at the expense of latency.

Sending out source material to a bunch of places, having them crunch
on it on whatever slow hardware they have, then sending it back may
win on the dollars per throughput front, but I can't see that having
good latency.

> true...  You also have to log in to upload to commons  It will make
> life easier and make abuse of the system more difficult.. plus it can

Having to create an account does pretty much nothing to discourage
malicious activity.

> act as a motivation factor with distribu...@home teams, perso

Re: [Wikitech-l] w...@home Extension

2009-07-31 Thread Brian
On Sat, Aug 1, 2009 at 12:47 AM, Gregory Maxwell  wrote:

> On Sat, Aug 1, 2009 at 12:13 AM, Michael Dale wrote:
>
>
> Once you factor in the ratio of video to non-video content for the
> for-seeable future this comes off looking like a time wasting
> boondoggle.
>

I think you vastly underestimate the amount of video that will be uploaded.
Michael is right in thinking big and thinking distributed. CPU cycles are
not *that* cheap. There is a lot of free video out there and as soon as we
have a stable system in place wikimedians are going to have a heyday
uploading it to Commons.


Re: [Wikitech-l] w...@home Extension

2009-08-01 Thread Gregory Maxwell
On Sat, Aug 1, 2009 at 2:54 AM, Brian wrote:
> On Sat, Aug 1, 2009 at 12:47 AM, Gregory Maxwell  wrote:
>> On Sat, Aug 1, 2009 at 12:13 AM, Michael Dale wrote:
>> Once you factor in the ratio of video to non-video content for the
>> for-seeable future this comes off looking like a time wasting
>> boondoggle.
> I think you vastly underestimate the amount of video that will be uploaded.
> Michael is right in thinking big and thinking distributed. CPU cycles are
> not *that* cheap.

Really rough back of the napkin numbers:

My desktop has a X3360 CPU. You can build systems all day using this
processor for $600 (I think I spent $500 on it 6 months ago).  There
are processors with better price/performance available now, but I can
benchmark on this.

Commons is getting roughly 172076 uploads per month now across all
media types: scans of single pages, photographs copied from Flickr,
audio pronunciations, videos, etc.

If everyone switched to uploading 15-minute-long SD videos instead of
other things, there would be 154,868,400 seconds of video uploaded to
Commons per month. Truly a staggering amount. Assuming a 40-hour work
week, it would take over 250 people working full time just to *view*
all of it.

That number is an average rate of 58.9 seconds of video uploaded per
second every second of the month.

Using all four cores my desktop video encodes at >16x real-time (for
moderate motion standard def input using the latest theora 1.1 svn).

So you'd need less than four of those systems to keep up with the
entire Commons upload rate switched to 15-minute videos.  Okay, it
would be slow at peak hours and you might wish to produce a couple of
versions at different resolutions, so multiply that by a couple.

This is what I meant by processing being cheap.

If the uploads were all compressed at a bitrate of 4 Mbit/sec, users
were kind enough to spread their uploads out through the day, the
distributed system were perfectly efficient (only needing to send one
copy of the upload out), and Wikimedia were only paying
$10/Mbit/sec/month for transit out of their primary datacenter... we'd
find that the bandwidth cost of sending that source material out
again would be $2356/month. (58.9 seconds per second * 4 Mbit/sec *
$10/Mbit/sec/month)
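
(For anyone who wants to check the arithmetic, it reduces to this;
2629800 is the number of seconds in an average month.)

  echo "172076 * 15 * 60" | bc              # 154868400 seconds of video per month
  echo "scale=2; 154868400 / 2629800" | bc  # ~58.9 seconds uploaded per wall-clock second
  echo "scale=2; 58.9 / 16" | bc            # < 4 encode boxes needed at >16x real-time
  echo "58.9 * 4 * 10" | bc                 # * 4 Mbit/sec * $10/Mbit/sec/month = ~$2356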

(Since transit billing is on the 95th-percentile five-minute average of
the greater of inbound or outbound traffic, uploads are basically free,
but sending data out to the 'cloud' costs like anything else.)

So under these assumptions, sending out compressed video for
re-encoding is likely to cost roughly as much *each month* as the
hardware for local transcoding ... and the pace of processing speed-up
seems to be significantly better than the declining price of
bandwidth.

This is also what I meant by processing being cheap.

Because uploads won't be uniformly spaced, you'll need some extra
resources to keep things from getting bogged down at peak hours. But the
poor peak-to-average ratio also works against the bandwidth costs. You
can't win: unless you assume that uploads are going to be at very low
bitrates, local transcoding will always be cheaper, with very short
payoff times.

I don't know how to figure out how much it would 'cost' to have human
contributors spot embedded penises snuck into transcodes, figure out
which of several contributing transcoders is doing it, and block them,
only to have the bad user switch IPs and begin again ... but it seems
impossibly expensive even though it's not an actual dollar cost.


> There is a lot of free video out there and as soon as we
> have a stable system in place wikimedians are going to have a heyday
> uploading it to Commons.

I'm not saying that there won't be video; I'm saying there won't be
video if development time is spent on fanciful features rather than
desperately needed short-term functionality.  We have tens of
thousands of videos, many of which don't stream well for most people
because they need thumbnailing.

Firefogg was useful upload lubrication. But user-powered cloud
transcoding?  I believe the analysis I provided above demonstrates
that resources would be better applied elsewhere.



Re: [Wikitech-l] w...@home Extension

2009-08-01 Thread David Gerard
2009/8/1 Brian :

> I think you vastly underestimate the amount of video that will be uploaded.
> Michael is right in thinking big and thinking distributed. CPU cycles are
> not *that* cheap. There is a lot of free video out there and as soon as we
> have a stable system in place wikimedians are going to have a heyday
> uploading it to Commons.


Oh hell yes. If I could just upload any AVI or MPEG4 straight off a
camera, you bet I would. Just imagine what people who've never heard
the word "Theora" will do.


- d.



Re: [Wikitech-l] w...@home Extension

2009-08-01 Thread Kat Walsh
On Sat, Aug 1, 2009 at 9:57 AM, David Gerard wrote:
> 2009/8/1 Brian :
>
>> I think you vastly underestimate the amount of video that will be uploaded.
>> Michael is right in thinking big and thinking distributed. CPU cycles are
>> not *that* cheap. There is a lot of free video out there and as soon as we
>> have a stable system in place wikimedians are going to have a heyday
>> uploading it to Commons.
>
>
> Oh hell yes. If I could just upload any AVI or MPEG4 straight off a
> camera, you bet I would. Just imagine what people who've never heard
> the word "Theora" will do.

Even if so, I don't think assuming that every single commons upload at
the current rate will instead be a 15-minute video is much of an
underestimate...

-Kat


-- 
Your donations keep Wikipedia online: http://donate.wikimedia.org/en
Wikimedia, Press: k...@wikimedia.org * Personal: k...@mindspillage.org
http://en.wikipedia.org/wiki/User:Mindspillage * (G)AIM:Mindspillage
mindspillage or mind|wandering on irc.freenode.net * email for phone



Re: [Wikitech-l] w...@home Extension

2009-08-01 Thread Brian
On Sat, Aug 1, 2009 at 10:12 AM, Kat Walsh  wrote:

> On Sat, Aug 1, 2009 at 9:57 AM, David Gerard wrote:
> > 2009/8/1 Brian :
> >
> >> I think you vastly underestimate the amount of video that will be
> uploaded.
> >> Michael is right in thinking big and thinking distributed. CPU cycles
> are
> >> not *that* cheap. There is a lot of free video out there and as soon as
> we
> >> have a stable system in place wikimedians are going to have a heyday
> >> uploading it to Commons.
> >
> >
> > Oh hell yes. If I could just upload any AVI or MPEG4 straight off a
> > camera, you bet I would. Just imagine what people who've never heard
> > the word "Theora" will do.
>
> Even if so, I don't think assuming that every single commons upload at
> the current rate will instead be a 15-minute video is much of an
> underestimate...
>
> -Kat
>

A reasonable estimate would require knowledge of how much free video can be
automatically acquired, its metadata automatically parsed, and the whole
thing automatically uploaded to Commons. I am aware of some massive archives
of free-content video. Current estimates based on images do not necessarily
apply to video, especially as we are just entering a video-aware era of the
internet. At any rate, while Gerard's estimate is a bit optimistic in my
view, it seems realistic for the near term.


Re: [Wikitech-l] w...@home Extension

2009-08-01 Thread Brian
On Sat, Aug 1, 2009 at 10:17 AM, Brian  wrote:

> [snip]
> ... At any rate, while Gerard's estimate is a bit optimistic in my
> view, it seems realistic for the near term.
>

Sorry, looked up to the wrong message - Gregory's estimate.


Re: [Wikitech-l] w...@home Extension

2009-08-01 Thread Gregory Maxwell
On Sat, Aug 1, 2009 at 12:17 PM, Brian wrote:
> A reasonable estimate would require knowledge of how much free video can be
> automatically acquired, it's metadata automatically parsed and then
> automatically uploaded to commons. I am aware of some massive archives of
> free content video. Current estimates based on images do not necessarily
> apply to video, especially as we are just entering a video-aware era of the
> internet. At any rate, while Gerard's estimate is a bit optimistic in my
> view, it seems realistic for the near term.

So—  The plan is that we'll lose money on every transaction but we'll
make it up in volume?

(Again, this time without math: as a function of video-minutes, the
amortized hardware cost for local transcoding rises more slowly than the
bandwidth cost of sending the source material out to users to transcode
in a distributed manner. This holds for pretty much any reasonable source
bitrate, though I used 4 Mbit/sec in my calculation.  So regardless of
the amount of video being uploaded, using users is simply more
expensive than doing it locally.)

Existing distributed computing projects work because the ratio of
CPU-crunching to communicating is enormously high. This isn't (and
shouldn't be) true for video transcoding.

They also work because there is little reward for tampering with the
system. I don't think this is true for our transcoding. There are many
who would be far more gratified by splicing penises into streams
than by anonymously and undetectably making a protein fold wrong.

... and it's only reasonable to expect the cost gap to widen.

On Sat, Aug 1, 2009 at 9:57 AM, David Gerard wrote:
> Oh hell yes. If I could just upload any AVI or MPEG4 straight off a
> camera, you bet I would. Just imagine what people who've never heard
> the word "Theora" will do.

Sweet! Except, *instead* of developing the ability to upload straight
off a camera, what is being developed is user-distributed video
transcoding— which won't do anything by itself to make it easier to
upload.

What it will do is waste precious development cycles maintaining an
overly complicated software infrastructure, waste precious Commons
administration cycles hunting subtle and confusing sources of
vandalism, and waste income from donors by spending more on additional
outbound bandwidth than would be spent on computing resources to
transcode locally.


Re: [Wikitech-l] w...@home Extension

2009-08-01 Thread Brian
On Sat, Aug 1, 2009 at 11:04 AM, Gregory Maxwell  wrote:

> On Sat, Aug 1, 2009 at 12:17 PM, Brian wrote:
> > A reasonable estimate would require knowledge of how much free video can
> be
> > automatically acquired, it's metadata automatically parsed and then
> > automatically uploaded to commons. I am aware of some massive archives of
> > free content video. Current estimates based on images do not necessarily
> > apply to video, especially as we are just entering a video-aware era of
> the
> > internet. At any rate, while Gerard's estimate is a bit optimistic in my
> > view, it seems realistic for the near term.
>
> So—  The plan is that we'll lose money on every transaction but we'll
> make it up in volume?


There are always tradeoffs. If I understand w...@home correctly, it is also
intended to be run @foundation. It works just as well for distributing
transcoding over the Foundation cluster as it does for distributing it to
disparate clients. Thus, if the Foundation encounters a CPU backlog and
wishes to distribute some long-running jobs to @home clients, in order to
maintain real-time operation of the site in exchange for bandwidth, it could.
Through this method the Foundation could handle transcoding spikes of
arbitrary size. In the case of spikes, @foundation can do first-pass
get-something-back-to-the-user-now encoding and pass the rest of the tasks
to @home.


Re: [Wikitech-l] w...@home Extension

2009-08-01 Thread Gregory Maxwell
On Sat, Aug 1, 2009 at 1:13 PM, Brian wrote:
>
> There are always tradeoffs. If I understand w...@home correctly it is also
> intended to be run @foundation. It works just as well for distributing
> transcoding over the foundation cluster as it does for distributing it to
> disparate clients.

There is nothing in the source code that suggests that.

It currently requires the compute nodes to be running the Firefogg
browser extension.  So this would require loading an X server and
Firefox onto the servers in order to have them participate as it is
now.  The video data has to take a round trip through PHP and the
upload interface, which doesn't really make any sense; that alone could
well take as much time as the actual transcode.

As a server distribution infrastructure it would be an inefficient one.

Much of the code in the extension appears to be there to handle issues
that simply wouldn't exist in the local transcoding case.   I would
have no objection to a transcoding system designed for local operation
with some consideration made for adding externally distributed
operation in the future if it ever made sense.

Incidentally— The slice and recombine approach using oggCat in
WikiAtHome produces files with gaps in the granpos numbering and audio
desync for me.


Re: [Wikitech-l] w...@home Extension

2009-08-01 Thread Brian
On Sat, Aug 1, 2009 at 11:47 AM, Gregory Maxwell  wrote:

> As a server distribution infrastructure [snip]
>

It had occurred to me that w...@home might be better generalized to a
heterogeneous compute cloud for Foundation-trusted code. The idea would be
QEMU sandboxes distributed via BOINC. So the Foundation could distribute
transcoder sandboxes to a certain number of clients, and sandboxes specific
to the needs of researchers using datasets such as the dumps, which are often
easily parallelized using map/reduce. The head node would sit on the
toolserver. The QEMU instances would run Ubuntu. The researcher submits a
job, which consists of a directory containing his code, his data, and a file
describing the map/reduce partitioning of the data. The head node compiles
the code into a QEMU instance and uses BOINC to map it to a client that is
running Windows/Linux/Mac.  Crazy, right? ;-)


Re: [Wikitech-l] w...@home Extension

2009-08-01 Thread Brian
On Sat, Aug 1, 2009 at 12:05 PM, Brian  wrote:

> On Sat, Aug 1, 2009 at 11:47 AM, Gregory Maxwell wrote:
>
>> As a server distribution infrastructure [snip]
>>
>
> It had occured to me that w...@home might be better generalized to an
> heterogeneous compute cloud for foundation trusted code. The idea would be
> qemu sandboxes distributed via boinc. So the foundation could distribute
> transcoder sandboxes to a certain number of clients, and sandboxes specific
> to the needs of researchers using datasets such as the dumps which are often
> easily parellelized using map/reduce. The head node would sit on the tool
> server. The qemu instances would run ubuntu. The researcher submits a job,
> which consists of a directory containing his code, his data, and a file
> describing the map/reduce partitioning of the data. The head node compiles
> the code into a qemu instance and uses boinc to map it to a client that is
> running win/linux/mac.  Crazy, right? ;-)
>

Various obvious efficiency improvements occurred to me. If the clients are
already running an Ubuntu QEMU instance then they can simply be shipped the
code and the data. They compile the code and run their portion of the data.
The transcoder clients sit idle with a transcoder instance ready, process
the data and send it back. Obviously, it is not very optimal to ship out an
entire OS for every job.. :)


Re: [Wikitech-l] w...@home Extension

2009-08-01 Thread Brian
On Sat, Aug 1, 2009 at 12:18 PM, Brian  wrote:

> [snip]
> Obviously, it is not very optimal to ship out an
> entire OS for every job.. :)
>

And of course, you can just ship them the binaries!


Re: [Wikitech-l] w...@home Extension

2009-08-01 Thread Michael Dale
Some notes:
* ~It's mostly an API~. We can run it internally if that is more 
cost-efficient (will do a command-line client shortly) ... As mentioned 
earlier, the present code was hacked together quickly; it's just 
a prototype. I will generalize things to work better as internal jobs, 
and I think I will not create File:Myvideo.mp4 wiki pages but rather create 
a placeholder File:Myvideo.ogg page and only store the derivatives 
outside of the wiki page node system. (I also notice some sync issues with 
oggCat, which are under investigation.)

* Clearly CPUs are cheap, as are power for the computers, human 
resources for system maintenance, rack space and internal network 
management, and we of course will want to "run the numbers" on any 
solution we go with. I think your source bitrate assumption was a little 
high; I would think more like 1-2 Mbit/sec (with cell-phone cameras 
targeting low bitrates for transport and desktops re-encoding before 
upload). But I think this whole conversation is missing the larger issue, 
which is: if it's cost-prohibitive to distribute a few copies for 
transcoding, how are we going to distribute the derivatives thousands of 
times for viewing?  Perhaps future work in this area should focus more on 
the distribution bandwidth cost issue.

* Furthermore, I think I might have misrepresented w...@home; I should 
have more clearly focused on the sequence flattening and only mentioned 
transcoding as an option. With sequence flattening we have a more 
standard viewing bitrate for source material, and CPU costs for rendering 
are much higher. At present there is no fast way to overlay HTML/SVG on 
video with filters and effects that are, for now, only predictably 
defined in JavaScript. For this reason we use the browser to 
WYSIWYG-render out the content. Eventually we may want to write an 
optimized stand-alone flattener, but for now the w...@home solution is 
worlds less costly in terms of developer resources, since we can use the 
"editor" to output the flat file.

* And finally, yes ... you can already insert a penis into video uploads 
today, with something like:

  ffmpeg2theora someVideo.ogg -s 0 -e 42.2 -o part1.ogg
  ffmpeg2theora someVideo.ogg -s 42.2 -o part2.ogg
  oggCat spliced.ogg part1.ogg myOneFramePenis.ogg part2.ogg

But yeah, it's one more level to worry about, and if it's cheaper to do it 
internally (the transcodes, not the penis insertion) we should do it 
internally. :P  (I hope others appreciate the multiple levels of humor here)

peace,
michael

Gregory Maxwell wrote:
> On Sat, Aug 1, 2009 at 2:54 AM, Brian wrote:
>> I think you vastly underestimate the amount of video that will be uploaded.
>> Michael is right in thinking big and thinking distributed. CPU cycles are
>> not *that* cheap.
>
> Really rough back of the napkin numbers:
> [snip]
> So under these assumptions sending out compressed video for
> re-encoding is likely to cost roughly as much *each month* as the
> hardware for local transcoding.

Re: [Wikitech-l] w...@home Extension

2009-08-01 Thread David Gerard
2009/8/1 Brian :

> And of course, you can just ship them the binaries!


Trusted clients are impossible. Particularly for protecting against
lulz-seekers.


- d.



Re: [Wikitech-l] w...@home Extension

2009-08-01 Thread Brian
On Sat, Aug 1, 2009 at 1:07 PM, David Gerard  wrote:

> 2009/8/1 Brian :
>
> > And of course, you can just ship them the binaries!
>
>
> Trusted clients are impossible. Particularly for prrotecting against
> lulz-seekers.
>
>
> - d.
>
>
Impossible? That's hyperbole.


Re: [Wikitech-l] w...@home Extension

2009-08-01 Thread David Gerard
2009/8/1 Brian :
> On Sat, Aug 1, 2009 at 1:07 PM, David Gerard  wrote:
>> 2009/8/1 Brian :

>> > And of course, you can just ship them the binaries!

>> Trusted clients are impossible. Particularly for prrotecting against
>> lulz-seekers.

> Impossible? That's hyperbole.


No, it's mathematically accurate. There is NO SUCH THING as a trusted
client. It's the same problem as DRM and security by obscurity.

http://en.wikipedia.org/wiki/Trusted_client
http://en.wikipedia.org/wiki/Security_by_obscurity

Never trust the client. Ever, ever, ever. If you have a working model
that relies on a trusted client you're fucked already.

Basically, if you want to distribute binaries to reduce hackability
... it won't work and you might as well be distributing source.
Security by obscurity just isn't.


- d.



Re: [Wikitech-l] w...@home Extension

2009-08-01 Thread Brian
On Sat, Aug 1, 2009 at 1:32 PM, David Gerard  wrote:

> 2009/8/1 Brian :
> > On Sat, Aug 1, 2009 at 1:07 PM, David Gerard  wrote:
> >> 2009/8/1 Brian :
>
> >> > And of course, you can just ship them the binaries!
>
> >> Trusted clients are impossible. Particularly for prrotecting against
> >> lulz-seekers.
>
> > Impossible? That's hyperbole.
>
>
> No, it's mathematically accurate. There is NO SUCH THING as a trusted
> client. It's the same problem as DRM and security by obscurity.
>
> http://en.wikipedia.org/wiki/Trusted_client
> http://en.wikipedia.org/wiki/Security_by_obscurity
>
> Never trust the client. Ever, ever, ever. If you have a working model
> that relies on a trusted client you're fucked already.
>
> Basically, if you want to distribute binaries to reduce hackability
> ... it won't work and you might as well be distributing source.
> Security by obscurity just isn't.
>
>
> - d.
>

Ok, nice rant. But nobody cares if you scramble their scientific data before
sending it back to the server. They will notice the statistical blip and ban
you.

I don't think in terms of impossible. It impedes progress.


Re: [Wikitech-l] w...@home Extension

2009-08-01 Thread Marco Schuster
On Sat, Aug 1, 2009 at 9:35 PM, Brian  wrote:

> > Never trust the client. Ever, ever, ever. If you have a working model
> > that relies on a trusted client you're fucked already.
> >
> > Basically, if you want to distribute binaries to reduce hackability
> > ... it won't work and you might as well be distributing source.
> > Security by obscurity just isn't.
> >
> >
> > - d.
> >
>
> Ok, nice rant. But nobody cares if you scramble their scientific data
> before
> sending it back to the server. They will notice the statistical blip and
> ban
> you.
>
What about video files exploiting some new 0day exploit in a video input
format? The Wikimedia transcoding servers *must* be totally separated from
the other WM servers to prevent 0wnage or a site-wide hack.

About users who run encoding chunks - they have to get a full installation
of decoders and stuff, which also has to be kept up to date (and if the
clients run in different countries - there are patents and other legal stuff
to take care of!); also, the clients must be protected from getting infected
chunks so they do not get 0wned by content Wikimedia gave to them (imagine
the press headlines)...

I'd actually be interested in how YouTube and the other video hosts protect
themselves against hacker threats - did they code totally new de/en-coders?

Marco
-- 
VMSoft GbR
Nabburger Str. 15
81737 München
Geschäftsführer: Marco Schuster, Volker Hemmert
http://vmsoft-gbr.de

Re: [Wikitech-l] w...@home Extension

2009-08-01 Thread Mike.lifeguard

BTW, whose idea was this extension? I know Michael Dale is writing it,
but was this something assigned to him by someone else? Was it discussed
beforehand? Or is this just Michael's project through and through?

Thanks,
-Mike



Re: [Wikitech-l] w...@home Extension

2009-08-01 Thread Platonides
Marco Schuster wrote:
> What about video files exploiting some new 0day exploit in a video input
> format? The Wikimedia transcoding servers *must* be totally separated from
> the other WM servers to prevent 0wnage or a site-wide hack.

That's no different from a 0-day in TIFF or DjVu.
You can do privilege separation, communicate using only pipes...
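
For example, something along these lines keeps the encoder confined to a
throwaway account (just a sketch; the user name and paths are placeholders):

  # run the decoder/encoder as a dedicated unprivileged user at low priority,
  # so a codec 0-day is confined to an account that owns nothing else
  sudo -u transcode nice -n 19 ionice -c 3 \
      ffmpeg2theora /srv/transcode/in/upload.avi -o /srv/transcode/out/upload.ogv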


> About users who run encoding chunks - they have to get a full installation
> of decoders and stuff, which also has to be kept up to date (and if the
> clients run in different countries - there are patents and other legal stuff
> to take care of!); also, the clients must be protected from getting infected
> chunks so they do not get 0wned by content wikimedia gave to them (imagine
> the press headlines)...

An exploit affecting third parties seems, IMHO, a bigger concern than
one affecting WMF servers. The servers can be protected better than the
users' systems. And infecting your users is a Really Bad Thing (tm).

Regarding an up-to-date install, the task can include the minimum
version required to run it, to avoid running tasks on outdated systems.
Although you can only do that if you provide the whole framework,
whereas for patent and license issues it would be preferable to let the
users get the codecs themselves.

> I'd actually be interested how YouTube and the other video hosters protect
> themselves against hacker threats - did they code totally new de/en-coders?

That would be even more risky than using existing, tested (de|en)coders.




Re: [Wikitech-l] w...@home Extension

2009-08-01 Thread Michael Dale

I had to program it anyway to support distributing the flattening 
of sequences, which has been the planned approach for quite some time. I 
thought of the "name" and of adding "one-off" support for transcoding 
recently, and hacked it up over the past few days.

This code will eventually support flattening of sequences. But adding 
code to do transcoding was a low-hanging-fruit feature and an easy first 
step. We can now consider whether it's efficient to use the transcoding 
feature in the Wikimedia setup or not, but I will use the code either way 
to support sequence flattening (which has to take place in the browser 
since there is no other easy way to guarantee a WYSIWYG flat 
representation of browser-edited sequences).

peace,
--michael

Mike.lifeguard wrote:
>
> BTW, Who's idea was this extension? I know Michael Dale is writing it,
> but was this something assigned to him by someone else? Was it discussed
> beforehand? Or is this just Michael's project through and through?
>
> Thanks,
> - -Mike
> -BEGIN PGP SIGNATURE-
> Version: GnuPG v1.4.9 (GNU/Linux)
> Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/
>
> iEYEARECAAYFAkp0zv4ACgkQst0AR/DaKHtFVACgyH8J835v8xDGMHL78D+pYrB7
> NB8AoMZVwO7gzg9+IYIlZh2Zb3zGG07q
> =tpEc
> -END PGP SIGNATURE-
>
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
>   




Re: [Wikitech-l] w...@home Extension

2009-08-02 Thread Marco Schuster
On Sun, Aug 2, 2009 at 2:32 AM, Platonides  wrote:

> > I'd actually be interested how YouTube and the other video hosters
> protect
> > themselves against hacker threats - did they code totally new
> de/en-coders?
>
> That would be even more risky than using existing, tested (de|en)coders.
>
Really? If they simply don't publish the source (and the binaries), then the
only possible way for an attacker is fuzzing... and that can take a long time.

Marco

--
VMSoft GbR
Nabburger Str. 15
81737 München
Geschäftsführer: Marco Schuster, Volker Hemmert
http://vmsoft-gbr.de

Re: [Wikitech-l] w...@home Extension

2009-08-02 Thread David Gerard
2009/8/2 Marco Schuster :

> Really? If they simply don't publish the source (and the binaries), then the
> only possible way for an attacker is fuzzing... and that can take long time.


I believe they use ffmpeg, like everyone does. The ffmpeg code has had
people kicking it for quite a while. Transcoding as a given Unix user
with not many powers is reasonable isolation.


- d.



Re: [Wikitech-l] w...@home Extension

2009-08-03 Thread Brion Vibber
On 7/31/09 6:51 PM, Michael Dale wrote:
> Want to point out the working prototype of the w...@home extension.
> Presently it focuses on a system for transcoding uploaded media to free
> formats, but will also be used for "flattening sequences" and maybe
> other things in the future ;)

Client-side rendering does make sense to me when integrated into the 
upload and sequencer processes; you've got all the source data you need 
and local CPU time to kill while you're shuffling the bits around on the 
wire.

But I haven't yet seen any evidence that a distributed rendering network 
will ever be required for us, or that it would be worth the hassle of 
developing and maintaining it.


We're not YouTube, and don't intend to be; we don't accept everybody's 
random vacation videos, funny cat tricks, or rips from Cartoon 
Network... Between our licensing requirements and our limited scope -- 
educational and reference materials -- I think we can reasonably expect 
that our volume of video will always be *extremely* small compared to 
general video-sharing sites.

We don't actually *want* everyone's blurry cell-phone vacation videos of 
famous buildings (though we might want blurry cell-phone videos of 
*historical events*, as with the occasional bit of interesting news 
footage).

Shooting professional-quality video suitable for Wikimedia use is 
probably two orders of magnitude harder than shooting attractive, useful 
still photos. Even if we make major pushes on the video front, I don't 
think we'll ever have the kind of mass volume that would require a 
distributed encoding network.

-- brion



Re: [Wikitech-l] w...@home Extension

2009-08-03 Thread Michael Dale
Perhaps if people create a lot of voice-overs & ~Ken Burns~ effects on 
Commons images, with the occasional interspliced video clip and lots of 
back-and-forth editing... and we are constantly creating timely 
derivatives of these flattened sequences... that ~may~ necessitate such a 
system, because things will be updating all the time ...

... but anyway... yeah, for now we will focus on flattening sequences...

I committed a basic internal encoder in r54340... Could add some 
enhancements, but let's spec out what we want ;)

Still need to clean up the File:myFile.mp4 situation. Probably store it in 
a temp location, write out a File:myFile.ogg placeholder, then once 
it's transcoded swap it in?

Also, I will hack in adding derivatives to the job queue where oggHandler 
is embedded in a wiki article at a substantially lower resolution than the 
source version. I will have it send the high-res version until the 
derivative is created, then "purge" the pages to point to the new 
location. I will try to have the "download" link still point to the 
high-res version. (We will only create one or two derivatives... also we 
should decide whether we want an ultra-low-bitrate (200 kbit/s or so) 
version for people accessing Wikimedia on slow / developing-country 
connections.)

peace,
michael

Brion Vibber wrote:
> On 7/31/09 6:51 PM, Michael Dale wrote:
>> Want to point out the working prototype of the w...@home extension.
> [snip]
> But I haven't yet seen any evidence that a distributed rendering network 
> will ever be required for us, or that it would be worth the hassle of 
> developing and maintaining it.
> [snip]
> Shooting professional-quality video suitable for Wikimedia use is 
> probably two orders of magnitude harder than shooting attractive, useful 
> still photos. Even if we make major pushes on the video front, I don't 
> think we'll ever have the kind of mass volume that would require a 
> distributed encoding network.
>
> -- brion




Re: [Wikitech-l] w...@home Extension

2009-08-03 Thread Gregory Maxwell
On Mon, Aug 3, 2009 at 10:56 PM, Michael Dale wrote:
> Also will hack in adding derivatives to the job queue where oggHandler
> is embed in a wiki-article at a substantial lower resolution than the
> source version. Will have it send the high res version until the
> derivative is created then "purge" the pages to point to the new
> location. Will try and have the "download" link still point to the high
> res version. (we will only create one or two derivatives... also we
> should decide if we want an ultra low bitrate (200kbs or so version for
> people accessing Wikimedia on slow / developing country connections)
[snip]


So I think there should generally be three versions, a 'very low rate'
suitable for streaming for people without excellent broadband, a high
rate suitable for streaming on good broadband, and a 'download' copy
at full resolution and very high rate.  (The download copy would be
the file uploaded by the user if they uploaded an Ogg)

As a matter of principle we should try to achieve both "very high
quality" and "works for as many people as possible". I don't think we
need to achieve both with one file, so the high and low rate files
could specialize in those areas.


The suitable-for-streaming versions should have a limited
instantaneous bitrate (non-infinite buf-delay). This sucks for quality,
but it's needed if we want streams that don't stall, because video can
easily have >50:1 peak-to-average rates over fairly short time spans.
(It's also part of the secret sauce that differentiates smoothly
working video from stuff that only works on uber-broadband.)

Based on 'what other people do' I'd say the low should be in the
200kbit-300kbit/sec range.  Perhaps taking the high up to a megabit?
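
Mapped onto ffmpeg2theora, the two streaming profiles might look roughly
like this (a sketch; -V/-A take kbit/sec, the resolutions are arbitrary
4:3 examples, and --buf-delay only exists in newer 1.1-era builds, so
treat the exact flags and numbers as assumptions):

  # 'low' streaming derivative: small, rate-limited, mobile-friendly
  ffmpeg2theora source.ogv -x 320 -y 240 -V 250 -A 48 --buf-delay 256 -o clip_low.ogv
  # 'high' streaming derivative: better quality for good broadband
  ffmpeg2theora source.ogv -x 640 -y 480 -V 900 -A 96 --buf-delay 512 -o clip_high.ogv
  # the full-resolution upload itself serves as the 'download' copy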

There are also a lot of very short videos on Wikipedia where the whole
thing could reasonably be buffered prior to playback.


Something I don't have an answer for is what resolutions to use. The
low one should fit on mobile device screens. Normally I'd suggest setting
the size based on the content: low-motion, detail-oriented video should
get higher resolutions than high-motion scenes without important
details. Doubling the number of derivatives in order to have a large
and small setting on a per-article basis is probably not acceptable.
:(

For example— for this
(http://people.xiph.org/~greg/video/linux_conf_au_CELT_2.ogv) low
motion video 150kbit/sec results in perfectly acceptable quality at a
fairly high resolution,  while this
(http://people.xiph.org/~greg/video/crew_cif_150.ogv) high motion clip
looks like complete crap at 150kbit/sec even though it has 25% fewer
pixels. For that target rate the second clip is much more useful when
downsampled: http://people.xiph.org/~greg/video/crew_128_150.ogv  yet
if the first video were downsampled like that it would be totally
useless as you couldn't read any of the slides.   I have no clue how
to solve this.  I don't think the correct behavior could be
automatically detected and if we tried we'd just piss off the users.


As an aside— downsampled video needs some makeup sharpening, like
downsampled stills do. I'll work on getting something in
ffmpeg2theora to do this.

There is also the option of decimating the frame-rate. Going from
30fps to 15fps can make a decent improvement for bitrate vs visual
quality but it can make some kinds of video look jerky. (Dropping the
frame rate would also be helpful for any CPU starved devices)


Something to think of when designing this is that it would be really
good to keep track of the encoder version and settings used to produce
each derivative, so that files can be regenerated when the preferred
settings change or the encoder is improved. It would also make it
possible to do quick one-pass transcodes for the rate controlled
streams and have the transcoders go back during idle time and produce
better two-pass encodes.

This brings me to an interesting point about instant gratification:
Ogg was intended from day one to be a streaming format. This has
pluses and minuses, but one thing we should take advantage of is that
it's completely valid and well supported by most software to start
playing a file *as soon* as the encoder has started writing it (if
software can't handle this, it also can't handle Icecast streams).
This means that so long as the transcode process is at least real-time,
the transcodes could be immediately available. This would, however,
require that the derivative(s) be written to an accessible location
(and you will likely have to arrange for a Content-Length: header not to
be sent for the incomplete file).
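
(A trivial local demonstration, with any Ogg-capable player standing in
for the web case; it may stop early if playback catches up with the
encoder.)

  ffmpeg2theora source.avi -o derivative.ogv &   # realtime-or-better transcode
  sleep 2 && mplayer derivative.ogv              # start playing the growing file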


Re: [Wikitech-l] w...@home Extension

2009-08-04 Thread Brion Vibber
On 8/3/09 9:56 PM, Gregory Maxwell wrote:
[snip]
> Based on 'what other people do' I'd say the low should be in the
> 200kbit-300kbit/sec range.  Perhaps taking the high up to a megabit?
>
> There are also a lot of very short videos on Wikipedia where the whole
> thing could reasonably be buffered prior to playback.
>
>
> Something I don't have an answer for is what resolutions to use. The
> low should fit on mobile device screens.

At the moment the defaults we're using for Firefogg uploads are 400px 
width (e.g., 400x300 or 400x225 for the most common aspect ratios) 
targeting a 400kbps bitrate. IMO at 400kbps at this size things don't 
look particularly good; I'd prefer a smaller size/bitrate for 'low' and 
higher size/bitrate for "medium" quality.


From sources I'm googling up, it looks like YouTube is using 320x240 for 
low-res, 480x360 h.264 @ 512kbps+128kbps audio for higher quality, with 
720p h.264 @ 1024kbps+232kbps audio available for some HD videos.

http://www.squidoo.com/youtubehd

These seem like pretty reasonable numbers to target; offhand I'm not 
sure what bitrate is used for the low-res version, but I think that's with 
older Flash codecs anyway so it's not as directly comparable.

Also, might we want different standard sizes for 4:3 vs 16:9 material?

Perhaps we should wrangle up some source material and run some test 
compressions to get a better idea what this'll look like in practice...
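
To have something concrete to test against, it might help to pin the
candidate profiles down in one place. A strawman sketch (the setting
name is invented and the numbers are just starting points pulled from
this thread):

  <?php
  // Strawman profiles only: the setting name is invented and the numbers
  // are just starting points taken from this thread (YouTube-ish sizes).
  $wgVideoDerivativeProfiles = array(
      'low' => array(           // mobile / low bandwidth
          'size4x3'      => '320x240',
          'size16x9'     => '320x180',
          'videoBitrate' => 256,  // kbit/s
          'audioBitrate' => 48,   // guess
      ),
      'medium' => array(
          'size4x3'      => '480x360',
          'size16x9'     => '480x270',
          'videoBitrate' => 512,
          'audioBitrate' => 128,
      ),
      'high' => array(          // only when the source is big enough
          'size4x3'      => '960x720',
          'size16x9'     => '1280x720',
          'videoBitrate' => 1024,
          'audioBitrate' => 232,
      ),
  );

That would also answer the 4:3 vs 16:9 question with a fixed pair of
sizes per profile.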

> Normally I'd suggest setting
> the size based on the content: Low motion detail oriented video should
> get higher resolutions than high motion scenes without important
> details. Doubling the number of derivatives in order to have a large
> and small setting on a per article basis is probably not acceptable.
> :(

Yeah, that's way tougher to deal with... Potentially we could allow some 
per-file tweaks of bitrates or something, but that might be a world of 
pain. :)

> As an aside— downsampled video needs some makeup sharpening like
> downsampled stills will. I'll work on getting something in
> ffmpeg2theora to do this.

Woohoo!

> There is also the option of decimating the frame-rate. Going from
> 30fps to 15fps can make a decent improvement for bitrate vs visual
> quality but it can make some kinds of video look jerky. (Dropping the
> frame rate would also be helpful for any CPU starved devices)

15fps looks like crap IMO, but yeah for low-bitrate it can help a lot. 
We may wish to consider that source material may have varying frame 
rates, most likely to be:

15fps - crappy low-res stuff found on internet :)
24fps / 23.98 fps - film-sourced
25fps - PAL non-interlaced
30fps / 29.97 fps - NTSC non-interlaced or many computer-generated vids
50fps - PAL interlaced or PAL-compat HD native
60fps / 59.94fps - NTSC interlaced or HD native

And of course those 50 and 60fps items might be encoded with or without 
interlacing. :)

Do we want to normalize everything to a standard rate, or maybe just cut 
50/60 to 25/30?

(This also loses motion data, but not as badly as decimation to 15fps!)

> This brings me to an interesting point about instant gratification:
> Ogg was intended from day one to be a streaming format. This has
> pluses and minuses, but one thing we should take advantage of is that
> it's completely valid and well supported by most software to start
> playing a file *as soon* as the encoder has started writing it. (If
> software can't handle this it also can't handle icecast streams).
> This means that so long as the transcode process is at least realtime
> the transcodes could be immediately available.   This would, however,
> require that the derivative(s) be written to an accessible location.
> (and you will likely have to arrange so that a content-length: is not
> sent for the incomplete file).

Ooooh, good points all. :D Tricky but not impossible to implement.

-- brion

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] w...@home Extension

2009-08-04 Thread Gregory Maxwell
On Tue, Aug 4, 2009 at 7:46 PM, Brion Vibber wrote:
[snip]
> These seem like pretty reasonable numbers to target; offhand I'm not
> sure what bitrate is used for the low-res version, but I think that's with
> older Flash codecs anyway so it's not as directly comparable.

I'd definitely recommend using the YouTube numbers as a starting point...
Though I think there may be an argument for going lower on the low end
(because we think we care more about compatibility than even they do)
and higher on the high end (because for at least some of our material
decent quality will be important).


> Also, might we want different standard sizes for 4:3 vs 16:9 material?

Yes, probably.

> Perhaps we should wrangle up some source material and run some test
> compressions to get a better idea what this'll look like in practice...

http://media.xiph.org/  <- There is something like 100 gigs of lossless
test material. Though most of it has sufficiently ambiguous licensing
that we wouldn't want to toss it up on commons. Many of them are
atypically difficult to compress, and almost all are too short for
decent testing of buffering/rate control. But they're still useful.

> Yeah, that's way tougher to deal with... Potentially we could allow some
> per-file tweaks of bitrates or something, but that might be a world of
> pain. :)

Tweaks of resolution are more important than bitrate, in that some
content just needs higher resolutions... (and if we offer a bitrate
tweak we'll probably see everyone with a good net connection turning
it up and everyone with a poor net connection turning it down).

> 15fps looks like crap IMO, but yeah for low-bitrate it can help a lot.
> We may wish to consider that source material may have varying frame
> rates, most likely to be:

Low rate video sucks no matter how you cut it. We just get to pick how
it sucks.

Sadly the best choice depends on the content.

> 50fps - PAL interlaced or PAL-compat HD native
> 60fps / 59.94fps - NTSC interlaced or HD native
> And of course those 50 and 60fps items might be encoded with or without
> interlacing. :)
> Do we want to normalize everything to a standard rate, or maybe just cut
> 50/60 to 25/30?

So, interlacing is a non-issue on the output side: No interlacing
support in Theora. (Compression of interlaced video is an extra
special patent minefield;  We'd want to deinterlace anyways, to
unburden the client). The transcoding tools will deinterlace.

I'd recommend cutting 50/60 to 25/30, deinterlacing as required.
Usually 60fps content isn't really displayed at that rate on PCs anyway,
because of the synchronization between frame updates and video readout.

(Deinterlacing also brings up the question of: Do we want to use
computationally expensive motion-compensated deinterlacing.  I could
argue "Interlaced content will be rare so the increase would be
harmless and it looks a little better" or "it's not worth the enormous
CPU usage increase for a small quality boost on content which will be
rare")

Non-integer ratio rate conversions don't look good. Whatever we do I
think we should only do small integer ratios, i.e. 1:1, 2:1, 3:1, in
order to get the rate at or under 30fps. We certainly want to allow
low frame rates: They'd make a vastly superior replacement for the
enormous animated GIFs in articles: much smaller, easier to produce,
and higher quality.
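
The selection logic for that is trivial, something like (illustrative
sketch only):

  <?php
  // Illustrative sketch: pick the smallest integer divisor (1:1, 2:1, 3:1, ...)
  // that brings the source frame rate down to 30fps or less.
  function pickFrameRateDivisor( $sourceFps, $maxFps = 30 ) {
      $n = 1;
      while ( $sourceFps / $n > $maxFps ) {
          $n++;
      }
      return $n;
  }

  // 59.94fps -> /2 -> 29.97fps; 50fps -> /2 -> 25fps; 24/25/30fps sources
  // are left alone, and a low-rate animated-GIF replacement just stays as-is.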


There is some 50/60p content in the media collection I linked to, so
you can see how that looks.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l