Re: [Bf-committers] Playback of 25% Proxies is slower than with 50%

2015-05-22 Thread Peter Schlaile
Hi Björn,

I haven't bisected my way through it yet, but I'm currently using
Blender 2.66 for editing, since there have been a lot of changes in
between that caused serious slowdowns (at least in the 8-bit pipeline).

Regarding frameshift: do you use timecodes? If you don't, seeking will
only be roughly exact, depending on the material used.

If you use timecodes, proxies are frame exact. If they are not, please
file a bug report.

Regards,
Peter

-- 

The Poems, all three hundred of them, may be summed up in one of their phrases:
"Let our thoughts be correct".
-- Confucius
___
Bf-committers mailing list
Bf-committers@blender.org
http://lists.blender.org/mailman/listinfo/bf-committers


Re: [Bf-committers] Optional gstreamer support for video display

2016-10-15 Thread Peter Schlaile
Hi Luya,

I'm afraid that's not a feasible solution for blender.

To support robust frame-exact seeking, we have to do some creative stuff
with the ffmpeg API, which to my knowledge is not possible with
gstreamer.

Suggestion: why don't you build a stripped down version of ffmpeg for
Fedora with all patented codecs removed and use that instead?

A lot of projects depend on ffmpeg (and libavcodec), so removing it
altogether seems a bit extreme to me...

Kind regards,
Peter


> Hello developers,
> 
> As a maintainer of Blender 3D for Fedora project, the ffmpeg support has
> to be disabled due to the policies related to patent issues (see
> https://fedoraproject.org/wiki/Software_Patents) in the USA. The side
> effect is the video sequence editor cannot play any open source codecs
> such as webm and theora.
> 
> An alternative will be to include support for gstreamer
> (https://gstreamer.freedesktop.org/) because of its plugin flexibility
> i.e. installing one for say mp4 is much easier.
> 
> Discussion welcome.
> 




[Bf-committers] Inpaint node review

2012-07-15 Thread Peter Schlaile
Hi,

I wrote an inpaint node for the compositor.

Since we are directly before a release, I haven't committed it, but I'd
like people to review it.

In case you don't know, inpainting does this:
http://en.wikipedia.org/wiki/Inpainting

Its use cases in blender are:

* wire removal
* green screen background reconstruction

The latter is important to actually improve keying in blender (see
below).

The node isn't tile-based (for fundamental reasons), but very fast,
since it first builds a Manhattan distance map and then performs
color convolution only on the edges.

That's something one should probably also add to the dilate node (in
step mode) to make it perform a lot better for dilate iterations greater
than 3.

That would bring its computing time from O(n^3) down to O(n^2).
Take a look here for the details:
http://ostermiller.org/dilate_and_erode.html
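The two-pass Manhattan distance transform behind that speedup (the technique the linked article describes) can be sketched in plain Python — a standalone illustration, not the node's actual code:

```python
# Two-pass Manhattan (city block) distance transform. Each pixel ends
# up with its distance to the nearest "known" pixel (mask == 1), which
# lets the inpaint/dilate code touch only the region edges.
def manhattan_distance_map(mask, inf=10**9):
    """mask: 2D list of 0/1; returns per-pixel distance to nearest 1."""
    h, w = len(mask), len(mask[0])
    d = [[0 if mask[y][x] else inf for x in range(w)] for y in range(h)]
    # forward pass: propagate distances from the top-left neighbors
    for y in range(h):
        for x in range(w):
            if y > 0:
                d[y][x] = min(d[y][x], d[y - 1][x] + 1)
            if x > 0:
                d[y][x] = min(d[y][x], d[y][x - 1] + 1)
    # backward pass: propagate distances from the bottom-right neighbors
    for y in range(h - 1, -1, -1):
        for x in range(w - 1, -1, -1):
            if y < h - 1:
                d[y][x] = min(d[y][x], d[y + 1][x] + 1)
            if x < w - 1:
                d[y][x] = min(d[y][x], d[y][x + 1] + 1)
    return d
```

Two linear sweeps replace the per-iteration full-image convolution, which is where the O(n^3) to O(n^2) drop comes from.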

My aim is implementing something like the IBK Keyer in Nuke
(http://www.youtube.com/watch?v=-GmMC0AYXJ4 ), since all current
solutions within Blender fail on hair details rather badly.

You can see first steps in this direction here, which already show some
nice improvements:
http://peter.schlaile.de/blender/inpaint/

(compare key_raw.png to inpaint_key.png )

The trick I use is the following:

If you consider the usual compositing equation

Composite = Background * (1-ALPHA) + Foreground * ALPHA

for the case that Background is our green screen and Foreground is the
object we want to separate, you'll notice that we can more or less
successfully pull an alpha matte (using a color channel node), but
currently fail to subtract the green screen Background properly from the
Foreground.

That's no surprise, since until now the green screen Background wasn't
actually known (we don't have any clean plates shot in Mango).

But: we can inpaint the surrounding greenscreen into the area *behind*
the semi-transparent regions and subtract that instead. 

The only thing missing in blender for that task is said inpainting
node. And that's why I added it :)
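Rearranging the compositing equation shows why the inpainted background is enough: with Background and ALPHA known, the clean Foreground falls out per pixel. A hypothetical per-channel helper, just to illustrate the algebra (not the actual node network):

```python
# Composite = Background*(1-alpha) + Foreground*alpha
# => Foreground = (Composite - Background*(1-alpha)) / alpha
def recover_foreground(composite, background, alpha, eps=1e-6):
    """Un-mix one channel of one pixel, values in [0, 1]."""
    if alpha < eps:          # fully transparent: nothing to recover
        return 0.0
    return (composite - background * (1.0 - alpha)) / alpha
```

The inpaint node supplies the `background` term behind the semi-transparent regions, which is exactly what was missing before.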

So: please use my git repository at

http://gitorious.org/~schlaile/blenderprojects/blender-vse-schlaile

git checkout image-keyer

and tell me your findings.

If I should commit to trunk, please let me know. If team Mango can make
use of it, feel free to commit to tomato branch.

Cheers and good night,
Peter

P.S.: There are a lot more sophisticated solutions for inpainting; some
are convolution-based, like my simple approach (which convolves the
known surrounding pixels with a weighted average into the unknown
region), some are a lot more advanced. For all practical purposes (the
ones noted above), my node should work fairly well. If you want to add
additional inpainting algorithms, feel free to add a type and activate
custom1 as a type parameter variable.

-- 
--
Peter Schlaile





Re: [Bf-committers] Inpaint node review

2012-07-15 Thread Peter Schlaile
Hi Nate,

forgot a CMake-File entry and to add old compositor node file.

Please git pull and try again!

Cheers,
Peter

> Not currently building for me on 64bit Linux: 
> http://www.pasteall.org/33827


-- 
--
Peter Schlaile





[Bf-committers] image keyer / inpaint node: mission accomplished!

2012-07-29 Thread Peter Schlaile
Hi,

two weeks ago I wrote something about an inpaint node and why it is
necessary to implement something like the Nuke IBK keyer using Blender
nodes.

Well:

Following the ideas mentioned in this blog post (you have to scroll
down):
http://www.vfxtalk.com/archive/index.php/t-16044.html

"IBK is a color difference keyer with a very simple basic algorithm. In
case of a green screen the math is g-(r*rw+b*bw), where rw is the red
weight and bw is the blue weight with a default value of 0.5 for both.
What makes it sophisticated (among other things) is the way it uses
another image to scale the result of the above mentioned equation.

Every keyer scales (normalizes) the result of it's basic algorithm so
that, on one end, you get 1 for the pixels that match the chosen screen
color, and 0, on the other end, for the pixels that contain no or little
of the primary color of the backing screen (this is afterward inverted
so you end up with black for the transparent parts of the image and
white for the opaque parts).

Keylight, for example, scales the result of it's basic algorithm (which
is g-(r*0.5+b*0.5), the same as IBK by default) by dividing it with the
the result of gc-(rc*0.5+bc*0.5), where rc,gc and gc are the red, green
and blue values of the chosen screen color. IBK does the same if you set
"pick" as the screen type and select the backing screen color. If you
set screen type to "C-green" or "C-blue" instead of using a single value
for normalizing the result of the basic equation (i.e. the unscaled
matte image), it processes a "control" image with the gc-(rc*rw+bc*bw)
formula pixel by pixel, and then divides the unscaled matte image with
the processed control image."

I managed to arrive here:

http://peter.schlaile.de/blender/inpaint/network_image_keyer1.jpg
http://peter.schlaile.de/blender/inpaint/network_image_keyer2.jpg

with virtually *no* tuning necessary(!).
It doesn't even need a separate despill step!

It's still not a silver bullet, but comes pretty darn close :)

The funniest part: the math behind it is *really* mindbogglingly simple,
as you can see in the second node network (or the cited blog post
above).
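That math can be sketched per pixel like this — a minimal illustration of the color-difference matte with "pick"-style normalization from the quoted post; the function name and clamping are my own, not Nuke's or Blender's code:

```python
# Color-difference keyer for a green screen:
#   raw matte  = g - (r*rw + b*bw)
#   normalized by the same formula evaluated on the picked screen color,
#   then inverted so 1 = opaque foreground, 0 = transparent screen.
def ibk_alpha(pixel, screen, rw=0.5, bw=0.5):
    r, g, b = pixel
    rc, gc, bc = screen
    raw = g - (r * rw + b * bw)
    scale = gc - (rc * rw + bc * bw)   # control value for the screen color
    matte = max(0.0, min(1.0, raw / scale))
    return 1.0 - matte                 # invert: alpha of the foreground
```

A pure screen pixel yields alpha 0, a neutral foreground pixel yields alpha 1; the "C-green"/"C-blue" modes replace the single `scale` value with a per-pixel control image computed by the same formula.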

Have fun!

Cheers
Peter

P.S.: Anyone wants to review my inpaint node or 
  should I just hit "svn commit"... ?






Re: [Bf-committers] Remove Frameserver

2012-08-05 Thread Peter Schlaile
Hi Campbell,

> Its helpful to remove because we shouldn't give users options that are
> not useful/tested/ready-for-production...

> When you use software and get the impression that some parts are not
> maintained - they crash or just fail, it doesn’t inspire confidence
> you want when relying on software for important projects.

Hmm. That's probably true. But I think analyzing *why* something isn't
used and how things can be done better is a lot more clever than
just deleting code.

Simply deleting the old plugin infrastructure without providing a new
one already caused a lot of grief with old plugin developers.

The message to those developers was: well, we don't use it, so no one
ever will, let's delete it. Please go away and find yourself another project.

Not very nice!

> I had a look over the frameserver docs:
> http://wiki.blender.org/index.php/Dev:Source/Render/Frameserver

> ... but I'm still not convinced this is really worth keeping - its 8
> bit channels. no alpha, no compression,

Uhm, well, but you already noticed that most video encoding engines
actually only work in 8 bit, right? With no alpha. And compression
is the job of the encoding engine, not the frame server.

> that it can be setup to work
> with scripts over a network is clever but not generally useful IMHO.

the frame server is built as a simple HTTP server and keeps a directory
hierarchy; it could just as well serve out EXR frames or PPMs with other
options, or PNGs, whatever you like.

8-bit PPMs with no compression were chosen in the first place, since
the main intent was to provide a general interface to arbitrary video
encoding engines.
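Since the frames come over HTTP as uncompressed binary PPM (P6), the consumer side is trivial. A minimal parser for one such frame might look like this — a sketch assuming well-formed P6 data with one header token per line and no comment lines:

```python
# Parse a binary PPM (P6) frame as served by the frameserver.
def parse_ppm(data):
    """Return (width, height, pixel_bytes) for a binary P6 PPM."""
    header, _, rest = data.partition(b"\n")
    assert header.strip() == b"P6", "not a binary PPM"
    dims, _, rest = rest.partition(b"\n")
    w, h = map(int, dims.split())
    maxval, _, pixels = rest.partition(b"\n")
    assert int(maxval) == 255, "expected 8-bit samples"
    return w, h, pixels[: w * h * 3]  # RGB, 3 bytes per pixel
```

This simplicity is the point of the design: any encoding engine that can issue an HTTP GET and read raw RGB can consume the frames.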

> This could be put in a similar category as "Compiling blender as a
> python module" - its nifty and maybe very useful in some cases, but
> I'd prefer to disable for regular builds (if not remove).

Nope. Frameserving is pretty much standard in video editing apps (take
a look at VirtualDub).

It's sad that we haven't brought the sequencer to the state where it
is generally accepted as a video editor, but having some sort of
frame serving functionality is certainly a must if we want to.

The reason why people seldom report bugs in the sequencer is the
simple fact that not that many people use blender as a video
editor at all.

The only way to master a DVD without the frame server in current blender
is to use a directory of still frames and do the encoding from those.
(You can't use ffmpeg in one-pass mode for that, and you most
probably won't use ffmpeg at all if quality is of any concern :) .)
Do that on long timelines (say: 3 hours) and watch your hard disk explode.

The only reason why I (as the author of the frameserver) haven't noticed
it crashing for a long time was the simple fact that I don't create
DVDs anymore.

But you will certainly agree that DVD creation is somewhat important?
It just wasn't very important to me lately, which is why we got those
unnoticed breakages in the first place.

That said: it may be possible to add the same functionality using
the current python interface (which you do know a lot better than me).

I have no problems with the idea of moving the whole frame server
functionality into a python add-on (if that is possible at all), provided
that we *do* have such a functionality somewhere.

Regards
Peter



Re: [Bf-committers] Remove Frameserver

2012-08-06 Thread Peter Schlaile
Hi Sergey,

> I would say if frameserver (in his current state) is still helpful for
> some usages, let's keep it as is for at least a while. If it's buggy /
> unstable/ unusable i would rather either completely remove it or at 
> least IFDEF it.

AFAIK, it currently works.

> Even if some tool could be useful for somebody, we should either deliver it
> in a useful way or not try to deliver it at all. 

The code worked and isn't that bad, IMHO. The only thing that went wrong
was a small glitch that slipped in while upgrading the frameserver to
the new rendering API.

The biggest problem was probably the fact that I don't do a lot of
advertising for the VSE, so hardly anyone uses Blender as a video editor.

And since I stopped creating DVDs several years ago, the problem went
completely unnoticed for me.

> In other words it's about quality of tools, not quantity
> of them. 

I don't think that we had serious quality issues with the frameserver.
At least, I haven't heard of any? AFAIK, we were talking about a small
regression that went unnoticed?

> Do not forget that blender is mainly 3d modeling/rendering
> application, not a video editor, so lack of some specific for video editor 
> tools doesn't seem to be problem here for me.

I hope you don't mind, if I disagree.

Blender tries to offer a complete pipeline from start to end in a professional
and fully integrated way for free. 

Why the end of the pipeline should be less important than the beginning
I don't really understand.

And, as I already tried to explain: if you want to render long videos for DVD
output, I'm pretty sure you either don't use blender for that, or you'll need
to use the frameserver...

> Another question is -- if frameserver would be decided to be kept, who's
> gonna to maintain it? Think neither me nor Campbell or Brecht would have time 
> to
> work on frameserver, but as Thomas mentioned we still do have reports about 
> that
> area.

Certainly true, but: no one assigned a frame server bug to me
(the one that was fixed also wasn't assigned to me), and I can't find any
bugs regarding the frameserver in the bug tracker.

> So, any volunteers?

Please assign bugs in the current code to me.

And: as I mentioned, if someone finds time to write a python add-on replacement,
please go ahead.

Regards
Peter

-- 
Peter Schlaile




Re: [Bf-committers] developer.blender.org open!

2013-11-16 Thread Peter Schlaile
Hi,

just wanted to note that I very much appreciate the switch to
Phabricator, and I'm adopting it for our internal project tracking within
our music store.
(A very big thank you to Sergey and Brecht for pointing me to a tool
I had never heard of before and which I find exceptionally great the
more I learn about its inner workings!)

My solution for the nerdy naming issue was adding proper translation
support to Phabricator in my own tree using gettext.

That way, I can easily keep my Phabricator version in sync with upstream
and still have a sensible naming scheme everyone will understand.
(Besides the fact that I'm also totally abusing Phabricator for managing
non-technical stuff, and my users don't speak English, so I had to
translate everything to German anyway, but that's a different story.)

If you want to take a look:

https://github.com/schlaile/phabricator
https://github.com/schlaile/libphutil
https://github.com/schlaile/arcanist

I'll try to push the gettext stuff upstream when I've completed my German
version of Phabricator as a full-fledged proof of concept.

The smaller version of the same idea would be to add a simple
Blender/English translator class that derives from
PhabricatorBaseEnglishTranslation and just does the blenderish renaming,
and everything should be in place.

Regards,
Peter

-- 
Peter Schlaile 


Re: [Bf-committers] ffmpeg library update

2011-04-24 Thread Peter Schlaile
Hi,

>  Ok, i've build the latest ffmpeg 0.6.90-rc0 with options i've got from
> debian sid package rules (with some additional flags to get static libs
> which would run on all platofrms -- the same flags were used for mesa
> and openal):
>
>./configure \
> --cc="gcc -Wl,--as-needed" \
> --extra-ldflags="-pthread -static-libgcc" \
> --prefix=/opt/ffmpeg \
> --enable-static \
> --enable-avfilter \

avfilter isn't used.

> --enable-vdpau \

VDPAU as well.

> --enable-bzlib \
> --enable-libgsm \
> --enable-libschroedinger \
> --enable-libspeex \
> --enable-libtheora \
> --enable-libvorbis \
> --enable-pthreads \
> --enable-zlib \
> --enable-libvpx \
> --disable-stripping \
> --enable-runtime-cpudetect  \
> --enable-vaapi \

VAAPI isn't used.

> --enable-libopenjpeg \

I'm not exactly sure where libopenjpeg can help (Blender has internal
support for JPEG2000 using libopenjpeg, and R3D decoding is also done
separately).

> --enable-libfaac \
> --enable-nonfree \
> --enable-gpl \
> --enable-postproc \
> --enable-x11grab \

no need for x11grab and postproc.

> --enable-libdirac \
> --enable-libmp3lame \
> --enable-librtmp \
> --enable-libx264 \
> --enable-libxvid \
> --enable-libopencore-amrnb \
> --enable-version3 \
> --enable-libopencore-amrwb \
> --enable-version3 \
> --enable-libdc1394

libdc1394 isn't used either (it might come in handy some day, but
currently we don't have capture support).

Hope that helps!

Cheers,
Peter

> Haven't noticed that pixelization errors, but size of Blender's ELF
> growed up from 41 to 51 megabytes. Quite noticale, i'll say. I think
> some codecs could be disabled to reduce amount of repended libraries.
> Maybe there's some coding/encoding gurus here who could tell which
> options could be disabled?
>


Peter Schlaile



Re: [Bf-committers] Blender developer IRC meeting notes May 8, 2011

2011-05-09 Thread Peter Schlaile
Hi,

> 1) Blender 2.57+, current projects
>
> - Peter Schlaile: are you still available this month? FFmpeg update
> might have issues.

still there. What (additional) issues came up?

There were and still are serious seeking issues caused by the seeking
code within ffmpeg, which is a) DTS-based and b) generally assumes a
more or less CBR stream for a lot of formats (especially prominent
ones like MPEG-TS streams (HDV)) and will fail horribly if that
assumption doesn't hold. (In the case of HDV, dropouts on the tape, which
are pretty common, are sufficient to make it fail...)

I've finished a separate indexer that addresses that problem and builds
proxies in parallel. I'll commit when I've found all the bugs in my new code...

Besides that, I don't know of any new issues (I use a very current
ffmpeg, GIT Feb 19 2011, myself).

Cheers,
Peter


Peter Schlaile



Re: [Bf-committers] SVN commit: /data/svn/bf-blender [36934] trunk/blender: == FFMPEG ==

2011-05-26 Thread Peter Schlaile
... added some version checks again.

Please try again!

Cheers,
Peter

> Hi, this commit breaks building on opensuse 11.4/64 with scons, system
> ffmpeg is 0.62.
> http://www.pasteall.org/21955
> Cheers, mib.



Re: [Bf-committers] SVN commit: /data/svn/bf-blender [36934] trunk/blender: == FFMPEG ==

2011-05-26 Thread Peter Schlaile
... did I already mention that I'm starting to hate OpenSuse for using
some "in-between" version...?

Please checkout again.

Cheers,
Peter

> hm, i got another error.
>
>
> source/blender/blenkernel/intern/writeffmpeg.c: In function
> 'start_ffmpeg_impl':
> source/blender/blenkernel/intern/writeffmpeg.c:744:32: error:
> 'AVIO_FLAG_WRITE' undeclared (first use in this function)
> source/blender/blenkernel/intern/writeffmpeg.c:744:32: note: each
> undeclared identifier is reported only once for each function it appears in
> scons: ***
> [/home/pepo/zwei5new/build/linux2/source/blender/blenkernel/intern/writeffmpeg.o]
> Error 1
> scons: building terminated because of errors.
>
> Thanks for fast reply, mib.
>
>
> Am 27.05.2011, 01:23 Uhr, schrieb Peter Schlaile :
>
>> ... added some version checks again.
>>
>> Please try again!
>>
>> Cheers,
>> Peter
>>
>>> Hi, this commit breaks building on opensuse 11.4/64 with scons, system
>>> ffmpeg is 0.62.
>>> http://www.pasteall.org/21955
>>> Cheers, mib.
>>
>> ___
>> Bf-committers mailing list
>> Bf-committers@blender.org
>> http://lists.blender.org/mailman/listinfo/bf-committers
>
>
> --
>
> ___
> Bf-committers mailing list
> Bf-committers@blender.org
> http://lists.blender.org/mailman/listinfo/bf-committers
>
>
> End of Bf-committers Digest, Vol 82, Issue 52
> *
>


Peter Schlaile


Re: [Bf-committers] SVN commit: /data/svn/bf-blender [36934] trunk/blender: == FFMPEG ==

2011-05-27 Thread Peter Schlaile
Hi Sergey,

added some additional checks. Hopefully, now it works.

Sorry!

Cheers,
Peter

On Fri, 27 May 2011, Sergey I. Sharybin wrote:

> Hi, Peter!
>
> It's cool that you've removed old code, but i'm unable to compile with ffmpeg 
> 0.6.3 (which is current latest stable version and which would be used for 
> 2.58 release). The same error would happen for libraries from current lib/ 
> repo:
>
> [ 72%] Building CXX object 
> source/gameengine/VideoTexture/CMakeFiles/ge_videotex.dir/VideoFFmpeg.cpp.o
> In file included from 
> /home/nazgul/src/blender/blender/source/gameengine/VideoTexture/VideoFFmpeg.cpp:41:0:
>  
> /home/nazgul/src/blender/blender/source/gameengine/VideoTexture/VideoFFmpeg.h:37:34:
>  
> fatal error: libavutil/parseutils.h: No such file or directory
>
> I hope it'll be easy for you to fix this :)
>
>  Original Message 
> Subject: Re: [Bf-committers] SVN commit: /data/svn/bf-blender [36934] 
> trunk/blender: == FFMPEG ==
> From: Peter Schlaile 
> To: bf-committers@blender.org
> Date: 05/27/2011 05:53 AM
>> ... did I already mention, that I start to hate OpenSuse for using some
>> "in-between" version...?
>> 
>> Please checkout again.
>> 
>> Cheers,
>> Peter
>> 
>>> hm, i got another error.
>>> 
>>> 
>>> source/blender/blenkernel/intern/writeffmpeg.c: In function
>>> 'start_ffmpeg_impl':
>>> source/blender/blenkernel/intern/writeffmpeg.c:744:32: error:
>>> 'AVIO_FLAG_WRITE' undeclared (first use in this function)
>>> source/blender/blenkernel/intern/writeffmpeg.c:744:32: note: each
>>> undeclared identifier is reported only once for each function it appears 
>>> in
>>> scons: ***
>>> [/home/pepo/zwei5new/build/linux2/source/blender/blenkernel/intern/writeffmpeg.o]
>>> Error 1
>>> scons: building terminated because of errors.
>>> 
>>> Thanks for fast reply, mib.
>>> 
>>> 
>>> Am 27.05.2011, 01:23 Uhr, schrieb Peter Schlaile:
>>> 
>>>> ... added some version checks again.
>>>> 
>>>> Please try again!
>>>> 
>>>> Cheers,
>>>> Peter
>>>> 
>>>>> Hi, this commit breaks building on opensuse 11.4/64 with scons, system
>>>>> ffmpeg is 0.62.
>>>>> http://www.pasteall.org/21955
>>>>> Cheers, mib.
>>>> _______
>>>> Bf-committers mailing list
>>>> Bf-committers@blender.org
>>>> http://lists.blender.org/mailman/listinfo/bf-committers
>>> 
>>> --
>>> 
>>> ___
>>> Bf-committers mailing list
>>> Bf-committers@blender.org
>>> http://lists.blender.org/mailman/listinfo/bf-committers
>>> 
>>> 
>>> End of Bf-committers Digest, Vol 82, Issue 52
>>> *
>>> 
>> 
>> Peter Schlaile
>> ___
>> Bf-committers mailing list
>> Bf-committers@blender.org
>> http://lists.blender.org/mailman/listinfo/bf-committers
>> 
>
>
> -- 
> With best regards, Sergey I. Sharybin
>
>


Peter Schlaile


[Bf-committers] Final FFMPEG compatibility fix (hopefully)

2011-05-27 Thread Peter Schlaile
Hi,

have just committed the final compatibility fix for ffmpeg, using a
separate header file that handles all the version cruft (located in
intern/ffmpeg/ffmpeg_compat.h).

What makes it very nice: you can now write your code against the latest
API version of ffmpeg GIT and still be sure that it will compile on
older versions (without those nasty #ifdefs all over the place).

You still have to check, though, when you use interface functions that
were *not* used within the rest of the code, and add compatibility macros
to ffmpeg_compat.h appropriately.
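The version checks such a compat header builds on boil down to packing (major, minor, micro) into one comparable integer — the same trick as ffmpeg's own AV_VERSION_INT macro. A small sketch of the idea in Python; the function names and the version threshold are illustrative, not the actual header contents:

```python
# Pack a version triple into one comparable integer, like ffmpeg's
# AV_VERSION_INT(a, b, c) = (a << 16 | b << 8 | c).
def version_int(major, minor, micro):
    return (major << 16) | (minor << 8) | micro

def needs_compat_alias(found, first_with_new_api=(53, 0, 0)):
    """True if the installed library predates the new API name, i.e.
    the compat header must alias the new name to the old one."""
    return version_int(*found) < version_int(*first_with_new_api)
```

In the real header the same comparison is a preprocessor `#if` on `LIBAVCODEC_VERSION_INT`, guarding a `#define` that maps the new name onto the old one.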

I hope the build doesn't break anymore.

If it doesn't work out, please send me an email. (And if you do
so, tell me the exact ffmpeg version you are using, otherwise I can't
check!)

Cheers,
Peter

P.S.: Sorry for the inconvenience, I should have solved it this way
   the first time.


Peter Schlaile



Re: [Bf-committers] Final FFMPEG compatibility fix (hopefully)

2011-05-28 Thread Peter Schlaile
Hi,

please try again with latest SVN.

Cheers,
Peter

> I am maintaining daily builds of blender for Ubuntu in a PPA at Launchpad.
> Sorry to be the odd man out here, but I still have problems building
> blender on Ubuntu Oneiric, which uses libav 0.7~beta2 according to the
> package overview in Launchpad [1].
>
> libavcodec/version.h states:
>
> #define LIBAVCODEC_VERSION_MAJOR 53
> #define LIBAVCODEC_VERSION_MINOR  3
> #define LIBAVCODEC_VERSION_MICRO  0
>
> The builds fail with following error message:
>
> /build/buildd/blender-2.57.1+svn36973~oneiric1/source/blender/blenkernel/intern/writeffmpeg.c:
> In function 'ffmpeg_property_add':
> /build/buildd/blender-2.57.1+svn36973~oneiric1/source/blender/blenkernel/intern/writeffmpeg.c:1056:9:
> error: incompatible types when assigning to type 'int' from type
> 'const union '
> /build/buildd/blender-2.57.1+svn36973~oneiric1/source/blender/blenkernel/intern/writeffmpeg.c:1061:9:
> error: incompatible types when assigning to type 'float' from type
> 'const union '
>
> You can see a full buildlog at [2].
>
> Cheers,
> Ralf
>
> [1] http://packages.ubuntu.com/oneiric/ffmpeg
> [2] 
> https://launchpadlibrarian.net/72565355/buildlog_ubuntu-oneiric-amd64.blender_2.57.1%2Bsvn36973~oneiric1_FAILEDTOBUILD.txt.gz
>
> 2011/5/28 Dalai Felinto :
>> Thanks Peter, it all working now (windows 32 and 64).
>>
>> 2011/5/27 Peter Schlaile 
>>
>>> Hi,
>>>
>>> just have commited the final compatibility fix for ffmpeg using a seperate
>>> header file that handles all the version cruft. (located in
>>> intern/ffmpeg/ffmpeg_compat.h )
>>>
>>> What makes it very nice: you can now write your code using the latest
>>> API version of ffmpeg GIT and can still be sure, that it will compile on
>>> older versions. (without those nasty #ifdefs all over the place.)
>>>
>>> You still have to check though, when you use interface functions, that
>>> where *not* used within the rest of the code and add compatibility macros
>>> to ffmpeg_compat.h appropriately.
>>>
>>> I hope, build doesn't break anymore.
>>>
>>> If it doesn't work out, please send me an email. (And: if you do
>>> so, tell me the exact ffmpeg version you are using, otherwise, I can't
>>> check!)
>>>
>>> Cheers,
>>> Peter
>>>
>>> P.S.: Sorry for the inconvenience, should have solved that the first time
>>>       this way.
>>>
>>> 
>>> Peter Schlaile
>>>
>>> ___
>>> Bf-committers mailing list
>>> Bf-committers@blender.org
>>> http://lists.blender.org/mailman/listinfo/bf-committers
>>>
>> ___
>> Bf-committers mailing list
>> Bf-committers@blender.org
>> http://lists.blender.org/mailman/listinfo/bf-committers
>>
>
>


Re: [Bf-committers] Final FFMPEG compatibility fix (hopefully)

2011-05-28 Thread Peter Schlaile
> Hm, build is successful but at runtime I get error "The procedure
> entry point RegisterDragDrop could not be located in the dynamic link
> library avcodec-52.dll"

looks like you are on your own here. (Check your build environment; things
look *seriously* broken, since RegisterDragDrop is an OLE32.DLL function
according to Google. Since I don't have a Windows box around, maybe others
can help. But if anyone is to help you, he/she will most probably need
a bit more information on your build environment: Win32/Win64, the
ffmpeg version used, the compiler used, etc.)

Cheers,
Peter


Peter Schlaile



[Bf-committers] New VSE proxies, feedback appreciated

2011-05-29 Thread Peter Schlaile
n the
 camcorder), if someone hits stop, consider those frames, and display
 them as black gaps, frozen pictures.

 "free run (interpolated)" means: if your camcorder is too cheap,
 use record time/date to simulate a real "free run"-mode, usually only
 found in prosumer camcorders.

   * proper variable frame rate support (could be considered as some sort
 of "free run", since the idea has a lot of similarities).

   * audio code doesn't use the same seeking code path, leading to A/V sync
 problems.

If things don't work out for you, running blender with "blender -d" will
print a lot of debugging output from the seeking code.

If you want to report problems, please send me an email with the
debugging output (pasteall-ed) and your ffmpeg version.

Otherwise: enjoy!

And: I'm eager to hear your feedback, also code / design suggestions.

Cheers,
Peter


Peter Schlaile



Re: [Bf-committers] New VSE proxies, feedback appreciated

2011-05-30 Thread Peter Schlaile
Hi Brecht,

> Am I right that if you don't enable proxies, this patch basically has
> no effect?

Yup, that's true. It takes a very conservative approach.

> In that case it should be pretty safe to commit before
> 2.58, but still think this could use some good testing on complex
> files.

Agreed.

> * Job name in header is "Seq Proxy", maybe name it "Building Proxies"
> or something like that?

updated.

> * On running rebuild proxy, it does not redraw the sequencer header
> immediately, makes it unclear if the job has started.

added an ED_area_tag_redraw (which hopefully does the job, never noticed 
that problem...)

> * Do we need the option to disable building time codes for proxies?
> All 3 are enabled by default, and it seems like there isn't much
> reason not to build them, since you can still afterwards decide to use
> them or not.

Good point, removed from UI.

> * Timecode enum: the items here could use descriptions. Also there is
> no need to repeat "TC" in the item names, and generally abbreviations
> like that should be avoided in the UI. Would just go with "Record
> Run", "Free Run", ..

updated.

> * If timecode options other than Record Run are not supported, I guess
> they should not be exposed in the UI?

Uhm, well, I'm still hoping to include them before the merge :)
Otherwise: you are right.

> * These timecode indexes are not used by default, is there a reason for this?

Debugging reasons. Otherwise: no good reason, indeed. (They should
default to record run.)

> * One thing I don't understand about this is how it can be a proxy
> level setting. Doesn't this also affect non-proxy renders and the
> length of the strip in the sequencer?

You are right.

I first thought that there was a fixed relationship between
proxies and the index in use. That isn't really the case (there is only
a one-way dependency: proxies don't really work in general without a
timecode, since you can't address frames correctly with variable FPS, e.g.).

So: timecode indices could in theory be moved one layer up (say: into
the Strip structure).

What makes this a problem: I wanted to use the same directory
structure for both, so that proxy files end up in the same directory as
the index files (otherwise users have to select *two* directories, one
for proxies, one for indices, which is really silly). And those
directory paths are currently stored in StripProxy.

So we could shuffle things around in DNA to make that move possible
(thereby breaking upward compatibility).

I don't know, if that is worth the effort, since for all real world cases, 
those two features are really nearly always used in tandem.

(Please tell me, if you think otherwise.)

To at least clarify things in the UI, I just renamed the label of the tab from 
"Proxy" to "Proxy / Timecode", and "Use Proxy" to "Use Proxy / Timecode".

> * What happens when you remove a strip while the proxy for it is being
> built? Didn't find any checks for that.

Since we work on (deep) copies of the original strips, nothing will 
happen at all if you delete a strip.

Blender will continue to build the proxy in the background on the 
hidden copy.

I don't know if that's a bug or a feature UI-wise (I personally consider it 
a feature: if you want to stop proxy building, deleting a strip isn't 
necessarily the brightest thing to do. Hitting the stop button seems 
more, uhm, straightforward :) )

Cheers,
Peter

P.S.: Thanks for reviewing!

___
Bf-committers mailing list
Bf-committers@blender.org
http://lists.blender.org/mailman/listinfo/bf-committers


Re: [Bf-committers] SVN commit: /data/svn/bf-blender [37537] branches/soc-2011-pepper: == Simple Title Cards for Sequencer ==

2011-06-16 Thread Peter Schlaile
Hi Algorith,

don't you think we should add some other extensions to
blender that make it possible to script something like this with Python?

Problem is: you wrote a *very* special solution for a
very special problem you had, and I'd like to keep the
sequencer core clean and simple.

It would be cool if you could specify a SCENE as a template and only fill in 
parameters.

Add some tweaks to the SCENE strip, that make it optionally render to 
files by default, add template parameters for the SCENE strip and there 
we go.

Then your title cards end up as ONE additional scene used as a template, 
plus template parameters to edit within the strip.

That is, in the long run, a much better solution, since you give people the 
freedom to make title cards, or even fancy title cards, as they like.

You can add a Python script that wraps this all nicely, so that you can 
add some default title cards / whatever. (Which could add a template SCENE 
automagically.)

BTW: I personally use additional scenes within the same file, length 1, 
which get extruded and animated properly. That way, the SCENE is rendered 
once into the memory cache and the cached result is animated (with fades 
etc.)

If I didn't get the problem correctly, just let me know. I'd really like to 
work out a generic solution for that!

Cheers,
Peter


Peter Schlaile



Re: [Bf-committers] Bf-committers Digest, Vol 83, Issue 26

2011-06-19 Thread Peter Schlaile
here
>>> you have a proliferation of "scene" strips in your timeline which are
>>> essentially just there to display text (but outwardly don't
>>> communicate this)
>>>
>>> 5) There's also the issue of a buildup of scene files in the file,
>>> each one for a different slide, making it easy to accidentally delete
>>> the wrong one from the file, and also making it slower to find the
>>> scene to go in and edit its text.  (*2)
>>>
>>> -
>>>
>>> (*1) From your mail below, it sounds like that's something the cache
>>> voodoo might be able to take care of under certain circumstances. As
>>> only a very infrequent user of VSE, I wasn't aware of this.
>>>
>>> (*2) I'm not really convinced about the idea of these template
>>> parameters for the scene strips. It sounds even more like a
>>> specialised hack from user perspective than shoehorning an entire
>>> strip type with some predefined slots where people commonly place text
>>> for common purposes.
>>>
>>> ---
>>>
>>> Anyhow, as an "experimental" feature, this was certainly a good
>>> exercise for seeing how such functionality could look like, and to
>>> generate debate over what use cases for this sort of stuff users have.
>>> (It was also a good exercise in exploring how the sequencer works,
>>> though I might add that the number of places where you have to
>>> redefine how a new strip type is a bit excessive)
>>>
>>> Personally, this is probably sufficient, though maybe with a few more
>>> optional slots for text. If nothing else, I can now save off this
>>> build for future use where necessary ;)
>>>
>>> Perhaps as you suggest, an operator which generates some preset
>>> title-card scene setups would be handy to have. Though it's the
>>> details of how we allow people to tweak the content there which
>>> worries me a bit.
>>>
>>> Regards,
>>> Joshua
>>>
>>> On Thu, Jun 16, 2011 at 10:35 PM, Peter Schlaile 
>>> wrote:
>>>> [...]
>>
>
>
>
> -- 
> 
> François Tarlier
> www.francois-tarlier.com
> www.linkedin.com/in/francoistarlier
>
>


Peter Schlaile


Re: [Bf-committers] Sequencer filter stack patch

2010-01-17 Thread Peter Schlaile
Hi Xavier,

> For now I have just 2 types of filters:
> - filters that change the ImBuf directly: color balance, makefloat,
> premul, flip
> - filters that change the image through an affine matrix: transform (with
> translate, rotation around a center point and scaling); flip should be
> changed to use this

just had a look at the patch. First: looks great!

But (small but :) ): you should change some things:

The input_prefilter function was called prefilter since it did some 
things you can only do on the raw input data.

That includes:

* deinterlacing (that has to be done *before* the conversion from
   YUV -> RGB takes place and was therefore handled by the imbuf reader.
   That is a must, since IMB_filtery just throws away one
   field instead of doing correct field interpolation. Which is actually
   impossible at that point, since the underlying YUV data is gone... )
* color conversion could be done a lot more efficiently and correctly
   if done directly on input data (read: YUV!)
* flip-Y could be done very efficiently within the input reader, too, as
   part of the color conversion step YUV -> RGB.

So you may want to
a) take deinterlacing out of your filter stack (leave that to the image
reader, as it was implemented before)
b) fold all color-changing "point functions" (which operate on one pixel at a
time) into a single function, as you did with the transformation matrix
(maybe using lookup tables for byte input?)

After that, we could look at how this can be folded into
the image reader. (Optional, but definitely worth the change for
better quality.)
To clarify:
now we do:  YUV -> RGB -> color change
We *should* do: YUV -> color change -> RGB
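
As a sketch of that ordering: all color-changing point functions get folded 
into one LUT pass applied to the raw luma plane *before* the lossy YUV -> RGB 
conversion. Illustrative C only, with made-up names; this is not Blender's 
actual imbuf code, and the conversion uses plain full-range BT.601 constants:

```c
#include <assert.h>
#include <stdint.h>

static uint8_t clamp255(int v) { return (uint8_t)(v < 0 ? 0 : (v > 255 ? 255 : v)); }

/* full-range BT.601 YUV -> RGB for one pixel (assumed constants) */
static void yuv_to_rgb(uint8_t y, uint8_t u, uint8_t v,
                       uint8_t *r, uint8_t *g, uint8_t *b)
{
    int c = y, d = u - 128, e = v - 128;
    *r = clamp255(c + (int)(1.402 * e));
    *g = clamp255(c - (int)(0.344 * d) - (int)(0.714 * e));
    *b = clamp255(c + (int)(1.772 * d));
}

/* all byte "point functions" folded into a single 256-entry LUT,
 * applied on the raw luma data before conversion */
static void apply_luma_lut(uint8_t *y_plane, int n, const uint8_t lut[256])
{
    for (int i = 0; i < n; i++)
        y_plane[i] = lut[y_plane[i]];
}
```

The point is the call order: the LUT runs on YUV data, and only then does the 
(lossy) conversion to RGB happen.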

Cheers,
Peter


Peter Schlaile


Re: [Bf-committers] Sequencer filter stack patch

2010-01-17 Thread Peter Schlaile
Hi Matt,

> > * color conversion could be done a lot more efficient and correctly
> >   if done directly on input data. (read: YUV!)
>
> On the other hand, I think it would also be very good to bring
> Compositor and Sequence Editor tools more in line with each other. If
> it's kept RGB, we can easily have the exact same tools nicely in the
> Compositor too (was planning to do this). I think if we can share more
> code and features between the two, it would be a net benefit for both,
> rather than keeping the sequence  as it is, almost a standalone
> application inside blender.

you can use the same input prefilter functions for the compositor; where 
is the problem?

Maybe that is a misunderstanding:
we don't try to replace the effect stack of the sequencer here!

The filter stack within the sequencer does *pre*-filtering!
And that should be done correctly! (YUV -> RGB is a lossy operation)

Please don't try to "generalize" things here under the notion that "RGB is good 
enough for everyone", since that actually doesn't solve the problem.

There should be a path for RGB input, too, but we shouldn't limit it to 
RGB input...

Cheers,
Peter


Peter Schlaile



Re: [Bf-committers] Proposal to Remove Features

2010-07-08 Thread Peter Schlaile
Hi Brecht,

* removing Sequencer plugins without having a new interface is not
   very nice(tm). I can't talk about Texture plugins; I've never used them.

* Fields rendering: should definitely stay in! I use it on a regular basis
   for DVD generation. (I don't know how you want to generate fields
   afterwards with "specialised tools"...?)

* Sequencer Glow effect: I haven't used it, but why does it hurt?
   I think the better solution is: make a serious plugin interface and move
   most (all?) of the sequencer effects to plugins. But in that order!
   Don't remove things first and leave people wondering how they should
   make things work...

I can't say much about the other points, but I think that removing features 
should only be done if automatic porting to a new feature is possible 
within do_versions()...

Cheers,
Peter


Peter Schlaile



[Bf-committers] Large sequencer cleanup

2010-07-22 Thread Peter Schlaile
Hi,

the last three days, I cleaned up / rewrote large parts of the sequencer 
code, to accomplish the following things:

* make cfra a float internally to enable sub-frame precision rendering
   with speed effects, and make the code a lot clearer
* remove the whole TStripElem mumbo jumbo (which is necessary anyway,
   since cfra is a float now, right?)
* replace it with a hashtable-based caching system
* thereby drop the sequencer's memory usage to a fraction of its
   current level (from 80 MB without using any caches to approximately 5 MB
   on a 3-hour timeline)
* thereby make the cache limitor drop the full imbufs (no more clicking
   on refresh from time to time)
* thereby make cutting a lot snappier, since freeing up the imbufs just
   means dropping the contents of the current hashtable rather than traversing
   really large arrays of null pointers...
* prepare the code for multi-core usage (threaded rendering is disabled
   right now, but the old code was limited to two cores anyway, because of
   this TStripElem hell)
* and: make the code so easy to understand that your grandma will get
   hold of it (hopefully)
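
For illustration, the hashtable idea boils down to something like the toy 
sketch below: frames are cached under a (strip, float cfra) key, a lookup miss 
means "render it", and dropping the cache is one cheap clear instead of a walk 
over huge, mostly-NULL per-frame arrays. Names and sizes are hypothetical, 
not the committed code:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define CACHE_SLOTS 1024

typedef struct CacheEntry {
    const void *strip;  /* key part 1: which strip */
    float cfra;         /* key part 2: (float!) frame */
    void *ibuf;         /* cached image buffer */
    int used;
} CacheEntry;

static CacheEntry cache[CACHE_SLOTS];

static unsigned hash_key(const void *strip, float cfra)
{
    unsigned h = (unsigned)((size_t)strip >> 4);
    h ^= (unsigned)(cfra * 64.0f) * 2654435761u;  /* Knuth multiplicative mix */
    return h % CACHE_SLOTS;
}

static void cache_put(const void *strip, float cfra, void *ibuf)
{
    CacheEntry *e = &cache[hash_key(strip, cfra)];
    e->strip = strip; e->cfra = cfra; e->ibuf = ibuf; e->used = 1;
}

static void *cache_get(const void *strip, float cfra)
{
    CacheEntry *e = &cache[hash_key(strip, cfra)];
    if (e->used && e->strip == strip && e->cfra == cfra)
        return e->ibuf;
    return NULL;  /* miss: caller renders the frame */
}

static void cache_drop_all(void)  /* "freeing up the imbufs" */
{
    memset(cache, 0, sizeof(cache));
}
```

A real cache limitor would also evict individual entries, but the snappier 
cutting falls out of cache_drop_all() being O(table size) instead of 
O(timeline length).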

I did some test editing; nothing crashes, no memory leaks, everything 
seems happy. But: can I commit, or should I wait some days to let Blender 
2.53 settle?

Patch is here for reference:
http://peter.schlaile.de/blender/seq_cache_rewrite.diff

The only thing missing is frame blending in Movie and Image strips on speed 
effect input (which is still largely broken anyway, so most people won't really 
notice...)

This will come back shortly as an option for Movie and Image strips. 
(Which will be able to adapt to different frame rates directly, for free, 
on the way, since cfra is now a float :) ...)

Cheers,
Peter

----
Peter Schlaile



Re: [Bf-committers] VSE Strip-Wise Rendering

2010-09-28 Thread Peter Schlaile
Hi Leo,

> Looking at the code for the VSE it appears solid, but not very modular,
> nor suitable for effects that need access to more than the current
> frame. Since the tools I have fall into that category (the anti-shake,
> for example, needs to compute the optical flow for each pair of frames)
> it is currently near-impossible to port them over in a way that would
> give a good user experience or remain modular enough to be maintainable.

Problem is: the idea behind the VSE is that it should try to do most / 
all things in realtime.

That doesn't alter the fact that we need optical flow, so my idea was:
add an optical flow builder, similar to the proxy builder in 2.49, and link 
the generated optical flow files to the strips.

That makes it possible to:

a) use optical flow files generated by other software (like the icarus
tracker)
b) use optical flow information from scene files or even OpenEXR files
(I'd think the vector pass together with the Z-pass could be used for
that)
c) let the optical flow information be calculated in the background
when none is available, and reuse it later for realtime display.

>for each frame:
>for each strip:
>render
>composite
>
> gets turned into:
>
>for each strip:
>for each frame:
>render
>composite

I don't really know how you want to do that in realtime. But maybe I got 
you wrong.

If you want to display one arbitrary frame in the middle of a Sequencer 
Editing, what exactly does your code actually do?

My understanding of your idea is currently: I'd have to render everything 
from the beginning, and that sounds, uhm, slow? :)

> This way, we could do frame rate conversion naturally. We could do
> speedup/slowdown, interpolation, anti-shake, and everything easily.
> Effects that only require access to the current frame would still work
> as a kernel inside a strip.

Since the common base here is optical flow, I'd think it is better to 
generate optical flow files and use them with the current design.

Anti-shake and motion tracking sound like tools that should run within a 
separate background rendering process. We could add something to the 
interface that enables an effect track to have a custom render/bake run. 
Like: please render/bake motion tracking data into fcurves (which will 
feed the entire strip into the effect bake function only once, and we use the 
fcurves later for the actual frame translation and rotation).

Since I have to rewrite proxy rendering for 2.5 anyway, we could add 
something like that, too. (The 2.49 proxy builder didn't run in the background 
and was more or less a hack.)

Regarding the tools you have written: do you think that adding a 
per-effect-strip render/bake would solve your problems? (It could be done in 
such a way that the bake function could request arbitrary frames from its 
input track.)
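
The bake idea can be sketched as a pull-style callback: the bake function 
requests arbitrary input frames and emits per-frame keys (stand-ins for 
fcurve keyframes here). Purely illustrative; none of these names are Blender 
API, and frames are reduced to a single scalar for brevity:

```c
#include <assert.h>

/* pull one input frame from the strip's input track (reduced to a scalar) */
typedef float (*GetFrameFn)(int cfra);

/* "anti-shake"-style bake: each key needs TWO frames of input, which is
 * exactly what a per-frame effect callback cannot provide, but a bake
 * pass with a pull accessor can */
static int bake_strip(GetFrameFn get_frame, int start, int end, float *keys)
{
    int n = 0;
    for (int cfra = start; cfra < end; cfra++)
        keys[n++] = get_frame(cfra + 1) - get_frame(cfra);
    return n;
}

/* dummy input track: brightness ramps linearly with the frame number */
static float ramp_source(int cfra) { return 2.0f * (float)cfra; }
```

The baked keys would then drive ordinary animation during realtime playback, 
with no multi-frame access needed anymore.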

Cheers,
Peter

--
Peter Schlaile


Re: [Bf-committers] VSE Strip-Wise Rendering

2010-09-29 Thread Peter Schlaile
cies are avoided completely!), but it still isn't 
where it should be.

So if you want to take up where I left, go ahead!

> Blender
> should provide an "optical flow" channel for images, much like it has an
> alpha channel - this is where OF can be stored and where it can be read
> from. But I don't think Blender should generate OF data from, for
> example, videos. It can generate it as part of the 3D render, but that
> is because it is a lot easier there.
>
> Third: Even if we don't support dynamically loadable plugins, we need
> clean internal interfaces that allow access to sequences of images, not
> just single frames.
>
> I realize that Blender is 15 years old, very complex, and that it's a
> lot more to it than just to waltz in and doodle up a plugin system. But
> I think it is necessary to try. Like I said, I will develop this in my
> own branch and unless I succeed, you won't hear about it. But I would
> like to gauge the interest for such a modification, because if I
> succeed, I do want my code merged back into the trunk. Forking, or
> maintaining my own build of Blender, is out of the question.
>
>> Regarding the tools you have written, do you thing, that adding per
> effect
>> strip render/bake would solve your problems? (It could be done in such a
>> way, that the bake function could request arbitrary frames from it's
> input
>> track.)
>
> It would work, but I'd lose the real-time feedback and great UI I was
> hoping for, making Blender a lot less attractive as a way of sharing my
> code in a way that is useful for the target audience (artists).

ok. I'm a bit surprised that you had a need for real-time feedback in 
optical flow *generation*, but I must admit that I haven't worked much 
with it. So: if you got it realtime, and you need instant feedback, you're 
right.

Cheers,
Peter

--
Peter Schlaile


Re: [Bf-committers] VSE Strip-Wise Rendering

2010-10-01 Thread Peter Schlaile
Hi,

> I have libplugin (http://libplugin.sourceforge.net/intro.html) working in
> blender. Well, extensions and join-points at least, still haven't ported
> over the (semi-)current plugin systems to use it. When I say 'working' I
> mean on linux building with cmake BTW, probably be a chore to get windows
> going since it brings a couple extra libs into blenderdom.

in fact, it doesn't look like the way a plugin system for blender should 
work, IMHO. We are not aiming at *replacing* core functionality, but at 
providing a clean way to add certain types of filters / effect tracks 
etc.

And: in comparison to my proposal, it doesn't solve all the interesting 
things (API version control, symbol dependencies), adds additional 
complexity through XML files, and seems to be centered on Linux.

But that's just my first impression; feel free to prove me wrong :)

Cheers,
Peter


Peter Schlaile


Re: [Bf-committers] VSE Strip-Wise Rendering

2010-10-05 Thread Peter Schlaile
Hi Dan,

> It has {application,plugin,libplugin} version control and symbol
> dependencies (as I understand them)

after reading again through the documentation, I don't think so. There is 
no thought put into negotiation between the application and the plugin 
about which API version should be used for a specific interface.

The XML files are mandatory for libplugin, since otherwise how should 
the system know how the join-points are named, or whether a certain symbol is 
actually meant as a join-point? (I haven't actually found out how 
libplugin checks whether the parameter declarations still match.
At first sight, it looks like that is left as an exercise for the reader...)
You can't just tell by using dlsym(), since in C parameters aren't 
encoded into the symbol, and C++ ABI name mangling changes between 
versions of C++ compilers, so I don't suggest using that instead...

And that is already the main problem: API versioning is done by just 
filling in the join-points that match (meaning: happen to have the right 
name...) and leaving alone those that don't.

That means for the plugin author: check every function you want to use 
from the core; a NULL pointer can sit *anywhere*.

In fact, for any real-world use in *Blender*, I don't think that this 
whole XML plugin abstraction layer brings any benefit over just using 
dlopen() and dlsym().

It's even worse: it puts the interface declaration into external XML 
files, so you have to keep header files and external XML files in sync 
(without any help for versioning!). And if something starts crashing, I 
wish you good luck finding out where, when, or what libplugin thought 
should be replaced.

Please take another look at my plugin system proposal, which actually tries 
to reduce symbol dependencies between core and plugins and also does API 
versioning in a proper way that doesn't force the plugin author into 
checking every function pointer. I tried very hard to make writing a 
plugin seamless, even though every API is explicitly imported with a 
certain version.

And: I'm *really* a fan of well-thought-out external plugin APIs that 
aren't changed as quickly as we release daily SVN snapshots.

I also think it is very important that *if* we change APIs, there 
should be a backward compatible way of requesting the old version of an API.
(It's not just: let's compare versions, GNAA, doesn't match. It's really 
about version negotiation!)

And: the API should be placed in C header files, so that the compiler can 
check for us if something has gone awry. External XML files used for 
function declarations are pretty much a disaster plan.
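
The negotiation idea, as opposed to silently unfilled join-points, can be 
sketched like this: the core hands out a versioned, typed API struct (declared 
in a normal C header, so the compiler checks it) or refuses. All names below 
are hypothetical, not the actual proposal's API:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* versioned API struct a plugin would import from a shared header */
typedef struct SeqApiV1 {
    int version;
    int (*get_frame_count)(void);
} SeqApiV1;

static int dummy_frame_count(void) { return 250; }

static SeqApiV1 seq_api_v1 = { 1, dummy_frame_count };

/* core side: return the requested interface at the requested version,
 * or NULL if it cannot be satisfied. The plugin checks ONCE at load
 * time, instead of NULL-checking every single function pointer. */
static const void *core_request_api(const char *name, int version)
{
    if (strcmp(name, "sequencer") == 0 && version == 1)
        return &seq_api_v1;
    return NULL;
}
```

A later core could keep serving version 1 alongside a version 2 struct, which 
is exactly the backward compatible negotiation argued for above.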

> You say 'centered on Linux' like that's a bad thing...

My personal opinion as a Debian-only user doesn't matter here. The point is: 
Blender is multiplatform, so the plugin system had better be, too...

I don't want to discourage your effort, but I still don't think that 
libplugin is really part of the solution.

Cheers,
Peter


Peter Schlaile



Re: [Bf-committers] VSE Strip-Wise Rendering

2010-10-27 Thread Peter Schlaile
Hi Leo,

> I've read through the plugin proposal you wrote way back, and I find no
> real issue with it. Why wasn't it implemented? Was it just lack of
> resources, or am I missing something?

lack of resources and some unresolved issues with IPOs. The problem was 
that I thought it was a good idea to have the plugin register IPO curves 
(which isn't that easy using the 2.49 IPO curve system), and I thought 
those IPOs should be calculated in advance before calling the plugin. 
(Also: not very nice with the old system, and still interesting with the 
new anim system, since the sequencer has its own cfra(!)).

And the most serious problem: I myself don't actually use the plugin 
interface, so there was no real need to get things done on a deadline.

Long story short: ignore most of the above (since those are 2.49 issues); we 
should add an RNA interface, and then things could get rolling pretty easily :)

(and: the UI interface should be removed, since Blender 2.5 does that 
completely in python.)

Cheers,
Peter

P.S.: Are you attending blender conference? We could discuss details at
   the conference, if you are there.



Re: [Bf-committers] FFMPEG properties from Blender presets overridden by defaults?

2010-11-21 Thread Peter Schlaile
Hi Randall,

please file a bug report, since ffmpeg custom properties were working 
in Blender 2.49.

I have to take a deeper look into the code to see how they should 
reappear in Blender 2.5 (the UI in 2.49 was, aehm, somewhat strange :) ), 
and filing a bug report is a way to make sure I don't forget :)

Cheers,
Peter

> My goal is to encode H264 Quicktime files that are compatible with
> Apple's Quicktime Player (should not be too much to ask, right?). Before
> 2.5, I could do this by tweaking the individual ffmpeg options that were
> exposed in the UI after selecting the H264 preset. That preset was
> clearly designed for making H264 AVI files, but it would work for
> Quicktime after changing the container format and tweaking ffmpeg
> options, especially getting rid of "flags:loop".
>
> 2.5 doesn't expose those options in the UI. They seem to be hardcoded in
> source/blender/blenkernel/intern/writeffmpeg.c under case
> FFMPEG_PRESET_H264. They mirror the options in libx264-default.preset
> that is distributed with ffmpeg. So I'm editing the settings in
> writeffmpeg.c to see if I can get Blender to write a QT
> Player-compatible Quicktime movie. No matter what I do to those
> settings, Blender's stderr stream shows that it is still using the
> defaults, not the settings I modified.
>
> The settings seem to be stored in a RenderData->ffcodecdata.properties,
> but I get lost (I'm not a programmer) as I try to figure out where they
> come from, since they obviously don't come from the FFMPEG_PRESET_H264
> settings I edited. Any help?
>
> Thanks,
> Randall
>
> p.s.: Please don't suggest rendering an image sequence, then encoding
> from the command line! That's what I do for 3D renders, but I want to
> encode output from the VSE. It seems stupid to render an image sequence
> from the VSE, because it would just duplicate the same frames I already
> have on disk but with different frame numbers, wasting a lot of disk
> space.
>
>


Re: [Bf-committers] S-DNA and Changes

2010-11-25 Thread Peter Schlaile
Hi Leo,

>  1. Write a "VSE 2" and create all-new structures?

this will break compatibility with older versions of blender. Should only 
be done as a last resort and if you *really* know, what you are doing.

>  2. Create some kind of "compatibility loader" or data import filter
> that converts the old data to the new, as far as possible? That is, we
> read old and new formats, but only write new format.

that is *always* necessary; otherwise you can't open old files or make 
sure that, on load of an old file, your new structure elements are 
initialized properly. This is done in do_versions() in readfile.c.
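
The do_versions() pattern amounts to: compare the version that wrote the file 
against the version that introduced each field, and fill in sane defaults. 
A heavily simplified sketch with hypothetical struct, field, and version 
numbers (not the actual Blender code):

```c
#include <assert.h>

typedef struct FileData {
    int versionfile;    /* version of the Blender that wrote the file */
    int tc_index_type;  /* newer field: reads as 0 from old files */
} FileData;

enum { TC_RECORD_RUN = 1 };

/* run once on load, before the data reaches the rest of the program */
static void do_versions_sketch(FileData *fd)
{
    if (fd->versionfile < 256) {
        /* field didn't exist yet: give it the intended default */
        if (fd->tc_index_type == 0)
            fd->tc_index_type = TC_RECORD_RUN;
    }
}
```

Files written by a current version pass through untouched, which is why the 
check is keyed on versionfile rather than on the field value alone.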

>  3. Convert the old structs to the new in-memory? That is, we read and
> write the old format, maybe with some back-compatible changes, but use a
> new format internally?

nope. After each editing operation, DNA data has to be in sync, since DNA 
load/save is also used for undo operations(!).

I'd also suggest that you first try to make sure that you *have* to 
change something, and why. Since, you guessed it, you will most likely 
make some people unhappy who want to open their new .blend file with an 
old version and see things broken all over the place.

So I'm wondering a bit what you want to change.

I tried to understand the blog post you linked some days ago.

To quote your blog: (disadvantages of the current system)

> 1.1. Disadvantages

> The disadvantages come when we wish to perform any kind of processing 
> that requires access to more than just the current frame. The frames in the 
> sequencer aren't just random jumbles of image data, but sequences of 
> images that have a lot of inter-frame information: Object motion, for 
> example, is something that is impossible to compute given a single frame, 
> but possible with two or more frames.

> Another use case that is very difficult with frame-wise rendering is 
> adjusting the frame rate of a clip. When mixing video from different 
> sources one can't always assume that they were filmed at the same frame 
> rate. If we wish to re-sample a video clip along the time axis, we need 
> access to more than one frame in order to handle the output frames that 
> fall in-between input frames - which usually is most of them.

To be honest: you can't calculate the necessary optical flow data on the 
fly, and most likely people want to have some control over the 
generation process. (Maybe they just want to use externally generated 
OFLOW files from icarus?)

To make a long story short: we should really just add a separate 
background rendering job to add optical-flow tracks to video tracks, just 
like we did with proxies, only in the background with the new job system, 
and everything should be fine. (For scene tracks or OpenEXR sequences with 
a vector pass, there is even already optical flow information available 
for free(!))

In-between frames should be handled with float cfras (the code is already 
adapted in most places for that) and the new additional mblur parameters.

That has the additional advantage that you can actually *calculate* real 
in-between frames in scene tracks.

For other jobs, like image stabilisation, you should just add similar 
builder jobs. (Which most likely don't have to write out full frames, but 
just generate the necessary animation fcurves.)

The implications of your track rendering idea are scary.
Either you end up with a non-realtime system (since you have to calculate 
optical flow information on the fly in some way, which is, to my 
knowledge, not possible with current hardware) or you have to render 
everything to disk, always.

I, as a user, want to have control over my disk space (which is very 
valuable, since my timelines are 3 hours long, and rendering every 
intermediate result to disk is *impossible*!).

Or, to put it another way: please show me a case that *doesn't* work 
with a simple "background builder job" system, where you can add arbitrary 
intermediate data to video, meta or effect tracks. Having to access 
multiple frames at once during playback *and* doing heavy 
calculation on them doesn't sound realtime to me by definition, and that 
is what Ton told me the sequencer should be: realtime. For everything 
else, the Compositor should be used.

You could still use RenderMode: (CHUNKED, SEQUENTIAL and FULL) to make 
that background render job run in the most efficient way. But it is still 
a background render job, which is seperated from the rest of the pipeline.

As always, feel free to prove me wrong. If I got it correctly, your 
interface idea looks like a good starting point for a background builder 
system interface. 
So probably, if you convince everyone that this is the best thing to do for 
playback, too, we might end up promoting your builder interface to the 
preview renderer, who knows?

BTW:
I'm currently rewriting the sequencer render pipeline using a generic 
Imbuf render pipeline system, which will move some things around, 
especially all those little prefiltering

Re: [Bf-committers] S-DNA and Changes

2010-11-25 Thread Peter Schlaile
Hi Leo,

> On 2010-11-25 19:45, Peter Schlaile wrote:
>> Or, to put it another way: please show a case to me, that *doesn't*
>> work, with a simple "background builder job" system,
>
> That's why I need to involve the main rendering pipeline. I wish I
> didn't have to.

hmm, that's maybe a misunderstanding. I was talking about using the job 
system of blender 2.5 (which just forks another job within blender, not 
an external system/program!). So: still complete access to the UI.

> You've asked this once before (way back), I replied in:
> http://lists.blender.org/pipermail/bf-committers/2010-September/028825.html
>
> and you thought:
> "that sounds indeed usefull"
> http://lists.blender.org/pipermail/bf-committers/2010-September/028826.html

again, maybe a misunderstanding here. In that old post, I was more 
thinking of rendering separate tracks in advance for prefetch rendering.
Not: making CPU-heavy stuff run in the previews.
(But I didn't make that point really clear, that's true.)

> Short-short summary: The system need not do it the naive way and write
> out every frame. We can also optimize away a lot of frames being held
> concurrently in memory by using smart algorithms.
>
> The current state of the prototype code is that *nothing* is being
> written to disk, and it never renders more frames than absolutely
> needed. Eventually I might need to include the option of writing out
> some things to disk but I consider that a last resort, and only for
> operations that the user *knows* will cost a lot of disk. This, however,
> is not a consequence of the strip-wise rendering algorithm itself, but
> of the processing we want to do.

ok, point taken. I'm still a bit unsure about the CPU-heavy stuff, but I 
have to admit that you could always add a proxy if things start to get 
slow.

Regarding your interface prototype: your iterators should take a float 
increment parameter. (There isn't really a well-defined "next" frame in 
float precision scene-rendering...)

Otherwise: this could really work.

Cheers,
Peter


Peter Schlaile
___
Bf-committers mailing list
Bf-committers@blender.org
http://lists.blender.org/mailman/listinfo/bf-committers


Re: [Bf-committers] S-DNA and Changes

2010-11-25 Thread Peter Schlaile
Hi Leo,

>> Regarding your interface prototype: your iterators should take a float
>> increment parameter. (There isn't really a well-defined "next" frame in
>> float precision scene-rendering...)

>I decided against that due to the complications it resulted in - mostly
>because it became very difficult to get all frames to align in time when
>round-off errors may affect the float cfra parameter depending on how it
>was calculated. (It was also difficult for movie clip sources, again due
>to rounding errors, where you could end up on the wrong frame.) It was
>easier to just pretend, in the VSE, that each sequence was continuous,
>but sampled at a fixed rate. (So the "frame rate" should really be a
>"field rate".) That way, the only time we risk that the frames don't
>line up is when we do framerate conversion - and everyone kinda expects
>them to not line up then.

uhm, ouch. OK, do it differently: tell the next()-iterator the 
absolute next cfra, not the increment, but please make it a float, because...

> I'm just guessing regarding the float cfra parameter: Is it for motion blur?
... it's not about motion blur, it's about retiming.

If CFRA is float, you can retime a scene strip afterwards and have it 
render subframe-precision frames (read: extreme slowdowns), which are 
done the real way, not using fake interpolation.

Cheers,
Peter




Re: [Bf-committers] S-DNA and Changes

2010-11-25 Thread Peter Schlaile
Hi Leo,

> Ah, ok.
>
> I'd still try to stick with integer frames and a parameterless next().
> Allowing the client to specify the advance in the next() method makes it
> too much of a random-access method (there is no guarantee that the
> advance is to the "next frame", which is the whole purpose of the
> iterator interface).
>
> I'd do it this way:
>
> Suppose we have a scene where we want normal speed for frames 1-10, an
> extreme slowdown for 11-12 and normal speed from 13-20.
>
> Strip 1: Scene, frames 1-10. This strip covers frames 1-10 in the VSE.
> Strip 2: Scene, frames 11-12, with a framerate of 1/100th of the output
> framerate. This strip covers frames 11-211 in the VSE.
> Strip 3: Scene, frames 13-20. This strip covers frames 212-219 in the VSE.

sorry, that won't work. One can retime using fcurves. Which brings me to 
the point: what was the sense in dropping the random access interface 
again?

The imbuf reader also has a random access interface, but internally it keeps 
track of the last fetched frame and only reseeks on demand.

So you get fast output if you fetch the next frame and a slow reseek if 
you don't, which will work nicely in all relevant cases, since your code 
will be optimized to work on consecutive frames as much as it can.

Cheers,
Peter

Just do it internally like this on a movie strip, which has a fixed 
framerate:

class movie_iterator : public iterator {
public:
    Frame fetch(float cfra) {
        /* fast path: consecutive access, no reseek needed */
        if (cfra == last_cfra + 1) {
            return next();
        }

        /* slow path: reseek, then decode preseek frames to resync */
        seek(cfra - preseek);

        for (int i = 0; i < preseek; i++) {
            next();
        }

        return next();
    }
private:
    Frame next() {
        /* return next frame, using and updating last_frame_context */
    }
    void seek(float cfra) {
        ...
    }

    float last_cfra;
    context last_frame_context;
};

whereas a scene strip does:

class scene_iterator : public iterator {
public:
    Frame fetch(float cfra) {
        setup_render(cfra);
        return render();
    }
};





Re: [Bf-committers] S-DNA and Changes

2010-11-26 Thread Peter Schlaile
Hi Leo,

> Then you can have one strip with an fcurve to map from VSE output frames
> to scene frames:
>
> Strip 1: Scene, frames 1-20, with an fcurve that maps from VSE frames
> (iterated over using next()) to scene frames. It covers frames 1-219 in
> the VSE.
>
> If I may modify your code a little:
>
> class scene_iterator : public iterator {
> public:
>     Frame next() {
>         // fcurves go in map_vse_frame_to_cfra
>         float cfra = map_vse_frame_to_cfra(nextFrame);
>         setup_render(cfra);
>         ++nextFrame;
>         return render();
>     }
>     int nextFrame;
> }

and again, that doesn't work with fcurves and gets really nasty 
with stacked speed effects.

map_vse_frame_to_cfra() is *really* a non-trivial function!

> VSE only sees a sequence of discrete frames

uhm, why is that exactly the case? In fact, currently it renders 
internally with floats.

> - which is precisely what its domain model should look like,
> because video editing is about time-discrete sequences, not continuous.

again, why? The point behind making cfra continuous was that the *input* 
strip can make its best effort to do inter-frame interpolation or do 
in-between rendering.

How that is done best depends heavily on the *input* strip.

So: no, I *strongly* disagree with your opinion that the "VSE sees a 
sequence of discrete frames". In fact, it doesn't!

> The Blender scene is a  continuous simulation - having a float cfra 
> makes sense, because time is continuous there. In the VSE the domain 
> objects are discrete in time. Having a float cfra makes no sense.

as stated above, I disagree.

>> Which brings me to the point: what was the sense in dropping the random
>> access interface again?
>>
>> The imbuf reader has also a random access interface, but internally keeps
>> track of the last fetched frame and only reseeks on demand.
>
> It is always possible to wrap a sequential interface in a random-access
> interface, or vice versa. The purpose of dropping the random access
> interface was to be able to write code that didn't have to deal with two
> cases - you'd know that you'll always be expected to produce the next
> frame, and can code for that. Less to write, less to go wrong.

uhm, you will always need a fetch() and a seek() function, so where 
exactly does your idea make things simpler?

> Clients of the code know that they can iterate over *every* frame in the
> sequence just by calling next(). With a random access interface -
> especially one that uses floats for indexing - you'll always worry if
> you missed a frame, or got a duplicate, thanks to rounding errors.

uhm, as stated above, next() isn't really defined in your sense.

You can define a version that does CFRA + 1 if you like. Whether that is 
really helpful is another question.

In fact, the next cfra for a given track will be defined by the topmost 
speed effect fcurve and then calculated down the stack. That won't break 
your initial idea of changing the order in which frame calculation takes 
place, it only reflects the fact that next() isn't that easy to calculate 
if you do retiming in a creative way.

So: yes, next() won't be easy to calculate in advance for a given track, 
but yes: that is a fundamental problem if we allow stacked retiming with 
curves. Even if you do retiming with simple factors, you will run into the 
problem that if the user speeds up a track with, say, a factor of 100, you 
probably don't want to blend *all* input frames into the output frame but 
limit the sampling to, say, 10 in-between frames. (That's the way Blender 
2.49 does it and Blender 2.5 will do it soon using the new SeqRenderData 
parameters.)

Cheers,
Peter

