pbuilder performance

2012-11-17 Thread Enrico Weigelt
Hi folks,

I'm regularly building some quite large packages with many dependencies,
e.g. libreoffice, using git-buildpackage. And it's really slow.

Is there any way to speed up the builds?

I'm already using cowbuilder, but it only seems to be able to reuse
an existing base system tree, while it still needs to install all
the dependencies one by one.

Is it possible to apply similar logic to the dependencies?
(Something like a tweaked dpkg that fetches everything from
per-package directories instead of *.deb files and just hardlinks
instead of copying?)


cu
-- 
Mit freundlichen Grüßen / Kind regards 

Enrico Weigelt 
VNC - Virtual Network Consult GmbH 
Head Of Development 

Pariser Platz 4a, D-10117 Berlin
Tel.: +49 (30) 3464615-20
Fax: +49 (30) 3464615-59

enrico.weig...@vnc.biz; www.vnc.de 

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: pbuilder performance

2012-11-17 Thread Dmitrijs Ledkovs
On 17 November 2012 18:33, Enrico Weigelt enrico.weig...@vnc.biz wrote:
 Hi folks,

 I'm regularly building some quite large packages with many dependencies,
 e.g. libreoffice, using git-buildpackage. And it's really slow.

 Is there any way to speed up the builds?

 I'm already using cowbuilder, but it only seems to be able to reuse
 an existing base system tree, while it still needs to install all
 the dependencies one by one.

 Is it possible to apply similar logic to the dependencies?
 (Something like a tweaked dpkg that fetches everything from
 per-package directories instead of *.deb files and just hardlinks
 instead of copying?)


* use eatmydata
* use local caching proxy (apt-cacher-ng)

eatmydata - reduces I/O by faking fsync, which speeds up dpkg installs a
lot (note this may result in e.g. test-suite failures in tests that
rely on fsync actually syncing)
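As a rough sketch of how eatmydata is used (it LD_PRELOADs a small library that turns fsync()/fdatasync() into no-ops), you can wrap the whole build command; the .dsc path and cowbuilder invocation here are just illustrative:

```shell
# Install the wrapper on the build host.
sudo apt-get install eatmydata

# Wrap the build so every fsync inside becomes a no-op; dpkg issues
# many small syncs while installing build-dependencies, and skipping
# them is where most of the speedup comes from.
sudo eatmydata cowbuilder --build ../libreoffice_*.dsc
```

Obviously this trades durability for speed: a crash mid-build can leave the chroot inconsistent, which is usually acceptable for throwaway build trees.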

apt-cacher-ng starts a local proxy on your machine, which can be used
as an apt proxy or even as a full mirror; if it doesn't have a package
cached, it simply fetches it over the network. For a common set of
regular builds that greatly speeds things up.
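As a sketch, pointing pbuilder/cowbuilder at a local apt-cacher-ng instance is a one-line change in ~/.pbuilderrc (3142 is apt-cacher-ng's default port; the archive path is an example):

```shell
# ~/.pbuilderrc -- fetch packages through the local apt-cacher-ng cache.
# The first request for a package goes out to the network; repeats are
# served from the local cache.
MIRRORSITE="http://localhost:3142/archive.ubuntu.com/ubuntu"
# Alternative: leave MIRRORSITE as-is and route apt through the proxy:
# export http_proxy="http://localhost:3142/"
```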

Use sbuild, it's faster. There is a handy mk-sbuild utility in
ubuntu-dev-tools that can create schroots for you (it even has a handy
eatmydata option).
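A minimal sketch of that workflow (the release name is only an example; --eatmydata is the mk-sbuild option mentioned above):

```shell
# Create a build schroot once, with eatmydata baked in.
mk-sbuild --eatmydata quantal

# Then build packages in it; the chroot is reset between builds,
# so each build still starts from a clean environment.
sbuild -d quantal ../libreoffice_*.dsc
```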

You either want a clean environment or you don't ;-) so you do have
to pay for the clean room.

Regards,

Dmitrijs.



Re: pbuilder performance

2012-11-17 Thread Paul Graydon

On 11/17/2012 09:10 AM, Dmitrijs Ledkovs wrote:

[full quote of Enrico's question and Dmitrijs's reply trimmed]

If you're routinely rebuilding packages, you may see some benefit in
using ccache as well. There are instructions on the sbuild wiki page on
how to utilise it: http://wiki.debian.org/sbuild
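The approach described on that wiki page boils down to sharing a ccache directory with the build chroot; a sketch, with illustrative paths:

```shell
# Host side: bind-mount a shared cache into the schroot by adding a
# line like this to /etc/schroot/sbuild/fstab:
#   /var/cache/ccache-sbuild /var/cache/ccache-sbuild none rw,bind 0 0

# Build environment: put the ccache compiler wrappers first in PATH so
# repeated gcc/g++ invocations hit the cache across rebuilds.
export CCACHE_DIR=/var/cache/ccache-sbuild
export PATH="/usr/lib/ccache:$PATH"
```

For large, frequently rebuilt packages like libreoffice, a warm compiler cache can cut rebuild times substantially.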


Paul



Re: RTMP/HLS in Nginx for 13.04

2012-11-17 Thread Reinhard Tartler
On Wed, Nov 14, 2012 at 1:20 PM, John Moser john.r.mo...@gmail.com wrote:

 Nod. I'll have to work this out. However it crashes in libavformat.so when
 using HLS, causing a segfault. The developer (Roman) suggests using a newer
 version of ffmpeg (and NOT using libav); I'll have to try with a newer libav.
 I've tried with the ffmpeg in this PPA:

   https://launchpad.net/~jon-severinsson/+archive/ffmpeg

 but 0.10 is still too old (March?!).  Was hoping to test with ffmpeg 1.0
 and/or the latest libav.

See http://launchpad.net/~motumedia/+archive/libav9-raring/
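If you want to test against that PPA, the usual routine is (exact package names in the PPA may differ):

```shell
# Add the libav9 staging PPA and pull in its newer libav packages.
sudo add-apt-repository ppa:motumedia/libav9-raring
sudo apt-get update
sudo apt-get dist-upgrade
```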

Does your crash happen with that version of libavcodec as well? If
yes, please come to #libav-devel and let's discuss it there.

PS: As you can see, there is still quite some work to do until we can
have libav9 in raring. Help with that is more than welcome; most of the
packages are rather easy to fix (missing #includes, updates to use the
newer API, etc.).


-- 
regards,
Reinhard
