On Mon, 2 Jan 2012 12:55:09 -0600,
Dennis Gilmore <den...@ausil.us> wrote:
> On Sat, 24 Dec 2011 23:25:56 -0600,
> Dennis Gilmore <den...@ausil.us> wrote:
> > When we were testing, builds were happening really fast; once we
> > loaded up the build jobs, things became really slow.
> > 
> > http://arm.koji.fedoraproject.org/koji/taskinfo?taskID=238672
> > started at 1:19 UTC, and at 5:18 UTC, four hours later, the
> > BuildRequires are still being installed. australia is completely IO
> > bound. I think we need to see how we can spread the IO load: ~30
> > hosts reading and writing to the 4 spindles just saturates the disk
> > IO. I guess the options are to find a way to add more spindles,
> > move /var/lib/mock to SD card, see if we can get some kind of SAN
> > that has a lot of smaller, faster disks, or get some 2.5" USB
> > drives, one for each builder. Some other idea? Is there any way we
> > could add 4-8 more disks to australia? The size of the disks
> > matters little; gaining more IOPS by adding more spindles would help.
> > 
> > Just throwing out what I'm seeing; let's see what ideas we can come
> > up with. Performance is better than before, and Seneca has done a
> > great job and put a lot of hard work into the reorg, and I'm
> > thankful for that. We just have another bottleneck to address.
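One way to confirm the spindle saturation described above would be to
watch the per-device stats on the NFS server while builds are running;
a rough sketch (device names and intervals are just an example, not
output from australia):

# sample extended per-device stats every 5 seconds on the nfs server
iostat -dxk 5
# %util pinned near 100 and high await on the data disks would point
# at the spindles rather than the network

# the "wa" column here is the io-wait percentage
vmstat 5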
> 
> http://arm.koji.fedoraproject.org/koji/taskinfo?taskID=245410 has been
> running for 10 hours now. It has inited the chroot and is installing
> the SRPM. It is a big SRPM, but still. Performance on the hfp
> builders, while not great, is better. Looking at a sfp builder, the
> mock process is using 100% CPU and is in a D state. There is one of
> two issues here: the hfp builders are running a slightly older
> kernel, so it could be a kernel, ext3, NFS, or loop device issue
> (upgrading a hfp builder for testing would confirm it), or it is some
> issue with decompression of the RPMs. I do think that disk IO on the
> NFS server is partly to blame; the box sits at a load of 9 all the
> time, with 30% or so IO wait. I'd like to take a layer or two of the
> complexity out; switching over to NFSv4, we should be able to run
> mock on it. I'm going to do some compression tests with the latest
> 2.6.41.6 kernel.
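For the NFSv4 switch, the moving parts would look roughly like this
(the export path, network, and hostname below are placeholders, not
our actual layout):

# /etc/exports on the nfs server; fsid=0 marks the nfsv4 pseudo-root
/srv/koji  10.0.0.0/24(rw,sync,no_root_squash,fsid=0)

# reload the export table
exportfs -ra

# on a builder, mount the v4 root instead of the old v3 export
mount -t nfs4 nfs-server:/ /mnt/koji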

Further followup: I've inited chroots on my Pandaboard. I used two SD
cards, both 32GB Transcend Class 10 cards, both with the same mock
configs with caching disabled, like Koji.
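For reference, "caching disabled" here just means the mock cache
plugins are turned off in the config; the relevant knobs are roughly:

# in the mock config file, with the cache plugins switched off
config_opts['plugin_conf']['root_cache_enable'] = False
config_opts['plugin_conf']['yum_cache_enable'] = False
config_opts['plugin_conf']['ccache_enable'] = False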

[dennis@fedora-armhfp ~]$ time mock -r fedora-15-armhfp --init
INFO: mock.py version 1.1.18 starting...
State Changed: init plugins
INFO: selinux enabled
State Changed: start
State Changed: lock buildroot
State Changed: clean
INFO: chroot (/var/lib/mock/fedora-15-arm) unlocked and deleted
State Changed: unlock buildroot
State Changed: init
State Changed: lock buildroot
Mock Version: 1.1.18
INFO: Mock Version: 1.1.18
INFO: calling preinit hooks
State Changed: running yum
State Changed: unlock buildroot
INFO: Installed packages:
State Changed: end

real    8m14.626s
user    2m8.930s
sys     0m21.234s


[dennis@panda01-sfp ~]$ time mock -r fedora-15-arm --init
INFO: mock.py version 1.1.18 starting...
State Changed: init plugins
INFO: selinux enabled
State Changed: start
State Changed: lock buildroot
State Changed: clean
INFO: chroot (/var/lib/mock/fedora-15-arm) unlocked and deleted
State Changed: unlock buildroot
State Changed: init
State Changed: lock buildroot
Mock Version: 1.1.18
INFO: Mock Version: 1.1.18
INFO: calling preinit hooks
State Changed: running yum
State Changed: unlock buildroot
INFO: Installed packages:
State Changed: end

real    10m37.586s
user    2m13.969s
sys     0m20.406s

So I think it's pretty safe to assume that compression is working OK.
It comes down to ext3, the loopback device, the kernel, or something
else.
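One way to split those apart would be to repeat the same --init on a
loop-mounted ext3 image sitting on one of the cards and compare the
times; a sketch (image size, paths, and the basedir override are just
an example):

# build a file-backed ext3 image on the sd card and loop-mount it
dd if=/dev/zero of=/root/mock.img bs=1M count=4096
mkfs.ext3 -F /root/mock.img
mkdir -p /mnt/mockloop
mount -o loop /root/mock.img /mnt/mockloop

# point mock at it (config_opts['basedir'] = '/mnt/mockloop') and rerun
time mock -r fedora-15-armhfp --init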


Dennis
_______________________________________________
arm mailing list
arm@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/arm
