I report here on two experiments with video, both of which call for enhancements.
First, I wrote a slideshow script. It displays images from a playlist
whose lines have the form "annotate:duration=3:filename". The files can
be jpg, png, gif, etc. There are two problems:
(1) I have to give the font path explicitly; the default doesn't work,
and the error message doesn't help at all. You'll have to change the
path to one that exists on your system.
(2) It's slow: the scrolling text and the sound freeze from time to
time. It probably also still segfaults sometimes...
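For reference, here is what a minimal images.lst could look like (the
paths are made up for the example):

```
annotate:duration=3:/path/to/photos/beach.jpg
annotate:duration=5:/path/to/photos/mountain.png
annotate:duration=3:/path/to/photos/cat.gif
```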
<<<<<
def images =
  video.add_text(metadata="filename","<no filename>",
                 font="/home/dbaelde/lang/ml/sdl/font.ttf",size=12,
                 # speed=0, x=5, y=5,
                 playlist("images.lst"))
end
images = video.fade.in(duration=1.,video.fade.out(duration=1.,images))
def sound =
  playlist("~/media/audio/ToP")
end
mksafe = fun (s) -> fallback(track_sensitive=false,[s,blank()])
# Don't use mksafe on the video; I think RGB.blank is dubious (no
# blocking_section).
# Use a separate clock to avoid super slow sound (SDL is too slow)
# When you close the SDL window, the output.dummy keeps streaming images, but
# the audio doesn't freeze anymore.
clock(output.dummy(fallible=true,output.sdl(fallible=true,images)))
output.ao(fallible=false,mksafe(sound))
>>>>>
The second experiment shows the slowness of SDL output quite clearly.
If you try it before my recent commit, it's even worse; SDL is now at
least better than Graphics. This one-liner plays an image just once,
with a duration of 10 seconds. You can then read in the logs the time
elapsed between the FOO message and the shutdown. You can change the
output from output.sdl(..) to output.graphics, output.dummy, or
output.file(%theora,"foo.ogv",...).
src/liquidsoap scripts/utils.liq 'set("root.sync",false)
id.video_only(output.sdl(fallible=true,on_stop=shutdown,on_track(fun(_)->log("FOO"),once(single(argv(1))))))' \
  -- annotate:duration=10:/home/dbaelde/media/images/bg/whealan_bestof_004.jpg
It used to take 12 seconds to play this 10-second "video", with both
Graphics and SDL. Now it's down to 3 or 4 seconds, slightly more
without ANYFORMAT in the SDL output. With output.dummy it takes 2
seconds (i.e., the time needed to decode the image and produce the raw
stream).
We might be able to gain even more by moving the sdl_utils conversion
loops to C code. But there seems to be a deeper problem... how do we
get anywhere close to mplayer's efficient use of SDL?
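To make the sdl_utils idea concrete: I don't know the exact shape of
the current OCaml loops, but what I have in mind is just a tight
packing loop like the following sketch (the function name and pixel
layout are mine, for illustration, not the actual sdl_utils API):

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical C version of a conversion loop: pack interleaved 8-bit
 * RGB samples into 32-bit 0x00RRGGBB pixels, the layout a 32bpp SDL
 * surface typically uses. Names and layout are illustrative only. */
void rgb24_to_pixels32(const uint8_t *rgb, uint32_t *dst, size_t npixels)
{
  size_t i;
  for (i = 0; i < npixels; i++) {
    dst[i] = ((uint32_t)rgb[3*i]   << 16)  /* R */
           | ((uint32_t)rgb[3*i+1] << 8)   /* G */
           |  (uint32_t)rgb[3*i+2];        /* B */
  }
}
```

Doing this per frame in C rather than OCaml would at least remove the
bounds checks from the inner loop, but it probably won't explain the
whole gap with mplayer, which (as far as I understand) blits through
SDL YUV overlays rather than converting each frame in software.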
If anyone wants to investigate this with me, it'd be cool. These
simple examples show that our operators are not ready for reasonable
use :\
--
David
_______________________________________________
Savonet-devl mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/savonet-devl