[ticket] not enough disk space on slaves for building node appliance

2015-06-16 Thread Sandro Bonazzola
http://jenkins.ovirt.org/job/ovirt-appliance-engine_3.5_merged/66/console

15:05:33 Max needed: 9.8G.  Free: 9.0G.  May need another 748.2M.
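A guard like the following at the start of the job could fail fast instead of dying mid-build. This is only a sketch, not the actual job script; the 10G threshold is an assumption based on the "Max needed: 9.8G" figure above.

```shell
# Hypothetical pre-build space check (not the real job code).
# need_gb is assumed from the "Max needed: 9.8G" line in the log.
need_gb=10

check_space() {
    # usage: check_space <required GiB> <directory>
    avail_kb=$(df --output=avail -k "$2" | tail -n 1)
    [ "$avail_kb" -ge $(( $1 * 1024 * 1024 )) ]
}

if check_space "$need_gb" "$PWD"; then
    echo "enough free space, starting build"
else
    echo "not enough disk space for the appliance build" >&2
fi
```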


-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: [ticket] not enough disk space on slaves for building node appliance

2015-06-16 Thread Eyal Edri
Looking at the slave, it has 12G free:

[eedri@fc20-vm06 ~]$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda3        18G  4.9G   12G  30% /
devtmpfs        4.4G     0  4.4G   0% /dev
tmpfs           4.4G     0  4.4G   0% /dev/shm
tmpfs           4.4G  368K  4.4G   1% /run
tmpfs           4.4G     0  4.4G   0% /sys/fs/cgroup
tmpfs           4.4G  424K  4.4G   1% /tmp
/dev/vda1        93M   71M   16M  83% /boot

I can't tell which slave it used, since build 66 is no longer there.

e.


- Original Message -
> From: "Sandro Bonazzola" 
> To: "Fabian Deutsch" , "infra" 
> Sent: Tuesday, June 16, 2015 11:43:26 AM
> Subject: [ticket] not enough disk space on slaves for building node appliance
> 
> http://jenkins.ovirt.org/job/ovirt-appliance-engine_3.5_merged/66/console
> 
> 15:05:33 Max needed: 9.8G.  Free: 9.0G.  May need another 748.2M.
> 
> 

-- 
Eyal Edri
Supervisor, RHEV CI
EMEA ENG Virtualization R&D
Red Hat Israel

phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ)


Re: [ticket] not enough disk space on slaves for building node appliance

2015-06-16 Thread Fabian Deutsch
- Original Message -
> Looking at the slave, it has 12G free:
> 
> [eedri@fc20-vm06 ~]$ df -h
> Filesystem      Size  Used Avail Use% Mounted on
> /dev/vda3        18G  4.9G   12G  30% /
> devtmpfs        4.4G     0  4.4G   0% /dev
> tmpfs           4.4G     0  4.4G   0% /dev/shm
> tmpfs           4.4G  368K  4.4G   1% /run
> tmpfs           4.4G     0  4.4G   0% /sys/fs/cgroup
> tmpfs           4.4G  424K  4.4G   1% /tmp
> /dev/vda1        93M   71M   16M  83% /boot
> 
> I can't tell which slave it used, since build 66 is no longer there.

I can tell that this happens with several slaves currently.
In the last 10 days or so we have had only a few successful builds.

Do we have a label for slaves with big disks?
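(If such a label existed, restricting the appliance job to it in Jenkins Job Builder would look roughly like the fragment below; the `big-disk` label name is hypothetical.)

```yaml
# Hypothetical: pin the job to slaves carrying a "big-disk" label.
- job:
    name: ovirt-appliance-engine_3.5_merged
    node: big-disk
```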

- fabian

> 
> --
> Eyal Edri
> Supervisor, RHEV CI
> EMEA ENG Virtualization R&D
> Red Hat Israel
> 
> phone: +972-9-7692018
> irc: eedri (on #tlv #rhev-dev #rhev-integ)
> 


Re: [ticket] not enough disk space on slaves for building node appliance

2015-06-16 Thread David Caro

Have the requirements changed or something?

In the cleanup I see that the slave has 13GB free prior to starting to do
anything (as usual)

10:40:09 /dev/vda3        18G  4.6G   13G  28% /

On 06/16, Sandro Bonazzola wrote:
> http://jenkins.ovirt.org/job/ovirt-appliance-engine_3.5_merged/66/console
> 
> 15:05:33 Max needed: 9.8G.  Free: 9.0G.  May need another 748.2M.
> 
> 

-- 
David Caro

Red Hat S.L.
Continuous Integration Engineer - EMEA ENG Virtualization R&D

Tel.: +420 532 294 605
Email: dc...@redhat.com
Web: www.redhat.com
RHT Global #: 82-62605




Re: [ticket] not enough disk space on slaves for building node appliance

2015-06-16 Thread Fabian Deutsch
- Original Message -
> 
> Have the requirements changed or something?

No, the requirements did not change.

But as discussed previously, the build process involves several intermediate images (because
they need to be converted from qcow2 to OVA), and this can require a lot of
space.

We are currently being very optimistic that 9 GB of free space is enough to build an
image that is 5 GB (unsparsified) in size.

To have some headroom we need 20 GB of free space.
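As a back-of-the-envelope check: during the qcow2-to-OVA conversion the source image and the resulting OVA coexist on disk, so peak usage is roughly their sum plus scratch space. The image size is the one discussed in this thread; the scratch figure is an assumption.

```shell
# Illustrative space model for the qcow2 -> OVA step; not measured numbers.
image_gb=5        # unsparsified image size mentioned in this thread
ova_gb=$image_gb  # the OVA is a tar of the disk plus metadata, about the same size
scratch_gb=2      # temp files, caches, logs (assumed)

peak_gb=$(( image_gb + ova_gb + scratch_gb ))
echo "estimated peak usage: ${peak_gb}G"   # prints "estimated peak usage: 12G"
```

Even with these rough numbers the peak is above the 9 GB that was free, which matches the 748.2M shortfall reported in the log.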

- fabian



Re: [ticket] not enough disk space on slaves for building node appliance

2015-06-16 Thread David Caro

A major rebuild of all the slaves is pending; the new ones should have 40G
disks, which would solve this. But that has been planned for so long that I'm not
sure it's worth waiting for if it's blocking you.

Adding a tag to some slaves that 'should' have more space will create
maintenance overhead in the future (and we already have too much), so it should be
avoided whenever possible.

How many slaves would you need for the jobs to run normally? (how many jobs do
you run daily?)

On 06/16, Fabian Deutsch wrote:
> - Original Message -
> > 
> > Have the requirements changed or something?
> 
> No, the requirements did not change.
> 
> But as discussed previously, the build process involves several images 
> (because they need to be converted from qcow2 to ova), and this can require a 
> lot of space.
> 
> We are currently very optimistic that 9 GB of free space work to build an 
> image of (unspare) 5GB in size.
> 
> To have freedom we need 20GB of free space.
> 
> - fabian
> 


