Re: [ovirt-devel] [vdsm] strange network test failure on FC23
> On 29 Nov 2015, at 17:34, Nir Soffer wrote:
>
> On Sun, Nov 29, 2015 at 6:01 PM, Yaniv Kaul wrote:
> > On Sun, Nov 29, 2015 at 5:37 PM, Nir Soffer wrote:
> >>
> >> On Sun, Nov 29, 2015 at 10:37 AM, Yaniv Kaul wrote:
> >> >
> >> > On Fri, Nov 27, 2015 at 6:55 PM, Francesco Romani wrote:
> >> >>
> >> >> Using taskset, the ip command now takes a little longer to complete.

I fail to find the original reference for this.
Why does it take longer? Is it purely the additional taskset executable
invocation? On a busy system we do have these issues all the time, with lvm,
etc… so I don’t think it’s significant.

> >> > Since we always use the same set of CPUs, I assume using a mask (for 0 & 1,
> >> > just use 0x3, as the man suggests) might be a tiny of a fraction faster to
> >> > execute taskset with, instead of the need to translate the numeric CPU
> >> > list.
> >>
> >> Creating the string "0-" is one line in vdsm. The code handling this in
> >> taskset is written in C, so the parsing time is practically zero. Even
> >> if it was non-zero, this code runs once when we run a child process, so
> >> the cost is insignificant.
> >
> > I think it's easier to just have it as a mask in a config item somewhere,
> > without need to create it or parse it anywhere.
> > For us and for the user.
>
> We have this option in /etc/vdsm/vdsm.conf:
>
> # Comma separated whitelist of CPU cores on which VDSM is allowed to
> # run. The default is "", meaning VDSM can be scheduled by the OS to
> # run on any core. Valid examples: "1", "0,1", "0,2,3"
> # cpu_affinity = 1
>
> I think this is the easiest option for users.

+1

> >> > However, the real concern is making sure CPUs 0 & 1 are not really too busy
> >> > with stuff (including interrupt handling, etc.)
> >>
> >> This code is used when we run a child process, to allow the child
> >> process to run on all cpus (in this case, cpu 0 and cpu 1).
> >> So I think there is no concern here.
> >>
> >> Vdsm itself is running by default on cpu 1, which should be less busy
> >> than cpu 0.
> >
> > I assume those are cores, which probably in a multi-socket will be in the
> > first socket only.
> > There's a good chance that the FC and/or network cards will also bind their
> > interrupts to core 0 & core 1 (check /proc/interrupts) on the same socket.
> > From my poor laptop (1s, 4c):
> >
> > 42: 1487104 9329 4042 3598 IR-PCI-MSI 512000-edge :00:1f.2
> > (my SATA controller)
> >
> > 43: 14664923 34 18 13 IR-PCI-MSI 327680-edge xhci_hcd
> > (my dock station connector)
> >
> > 45: 6754579 4437 2501 2419 IR-PCI-MSI 32768-edge i915
> > (GPU)
> >
> > 47: 187409 11627 1235 1259 IR-PCI-MSI 2097152-edge iwlwifi
> > (NIC, wifi)
>
> Interesting, here an example from an 8-core machine running my vms:
>
> [nsoffer@jumbo ~]$ cat /proc/interrupts
>        CPU0  CPU1  CPU2  CPU3  CPU4  CPU5  CPU6  CPU7
>   0:     31     0     0     0     0     0     0     0  IR-IO-APIC-edge  timer
>   1:      2     0     0     1     0     0     0     0  IR-IO-APIC-edge  i8042
>   8:      0     0     0     0     0     0     0     1  IR-IO-APIC-edge  rtc0
>   9:      0     0     0     0     0     0     0     0  IR-IO-APIC-fasteoi  acpi
>  12:      3     0     0     0     0     0     1     0  IR-IO-APIC-edge  i8042
>  16:      4     4     9     0     9     1     1     3  IR-IO-APIC  16-fasteoi  ehci_hcd:usb3
>  23:     13     1     5     0    12     1     1     0  IR-IO-APIC  23-fasteoi  ehci_hcd:usb4
>  24:      0     0     0     0     0     0     0     0  DMAR_MSI-edge  dmar0
>  25:      0     0     0     0     0     0     0     0  DMAR_MSI-edge  dmar1
>  26: 36703542159062370491124 169 54  IR-PCI-MSI-edge  :00:1f.2
>  27:      0     0     0     0     0     0     0     0  IR-PCI-MSI-edge  xhci_hcd
>  28: 166285414 0 3 0 4 0 0 0  IR-PCI-MSI-edge  em1
>  29:     18     0     0     0     4     3     0     0  IR-PCI-MSI-edge  mei_me
>  30: 1151 17 0 3169 26 94
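The mask-vs-list translation debated above is small enough to show. A rough sketch (hypothetical helper, not actual vdsm or taskset code) of what converting a vdsm-style cpu_affinity list into the bitmask form taskset also accepts involves:

```python
def cpulist_to_mask(cpulist):
    """Convert a taskset-style CPU list ("0,1", "0,2-3") into the
    equivalent hex bitmask string ("0x3", "0xd")."""
    mask = 0
    for part in cpulist.split(","):
        lo, _, hi = part.partition("-")
        for cpu in range(int(lo), int(hi or lo) + 1):
            mask |= 1 << cpu
    return hex(mask)

# "0,1" (vdsm.conf cpu_affinity style) equals mask 0x3 (taskset man page style)
print(cpulist_to_mask("0,1"))  # 0x3
```

Either form describes the same CPU set, which is why the parsing cost argument above favors whichever is more readable in the config file.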
Re: [ovirt-devel] Improving our github presense
I don't think that's a good idea, we are already maintaining such a list in
the gerrit groups, doing so also on github would duplicate the effort to
maintain such a list in sync and there's no real usage of it anywhere.

On 11/29 10:15, Barak Korren wrote:
> Now that we have members, we could also create teams to provide some
> transparency to who does what in oVirt.
>
> I've created a team of oVirt infra:
> https://github.com/orgs/oVirt/teams/ovirt-infra and added the members
> I could find. I suggest others follow suit and create their own teams.
>
> On 21 November 2015 at 01:40, Nir Soffer wrote:
> > We have now 39 members - but only 13 are public.
> >
> > To make yourself public, visit
> > https://github.com/orgs/oVirt/people
> >
> > ___
> > Devel mailing list
> > de...@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/devel
>
> --
> Barak Korren
> bkor...@redhat.com
> RHEV-CI Team

--
David Caro

Red Hat S.L.
Continuous Integration Engineer - EMEA ENG Virtualization R&D

Tel.: +420 532 294 605
Email: dc...@redhat.com
IRC: dcaro|dcaroest@{freenode|oftc|redhat}
Web: www.redhat.com
RHT Global #: 82-62605

___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra
Missing package on one of the Fedora 23 slaves?
Hey guys,
It seems that one package is missing on this slave, if I remember correctly
it happened in the past in one of the other slaves, can you check?

http://jenkins.ovirt.org/job/vdsm_3.6_check-patch-fc23-x86_64/78/console
Re: Missing package on one of the Fedora 23 slaves?
Hello Tal.

As I see this is a mock build, so it uses a chroot and installs the packages
based on the specs and what is available in the repos. So most likely there
is something wrong in that part, will have a more detailed look into this.

Anton.

On Mon, Nov 30, 2015 at 5:27 PM, Tal Nisan wrote:
> Hey guys,
> It seems that one package is missing on this slave, if I remember
> correctly it happened in the past in one of the other slaves, can you
> check?
>
> http://jenkins.ovirt.org/job/vdsm_3.6_check-patch-fc23-x86_64/78/console

--
Anton Marchukov
Senior Software Engineer - RHEV CI - Red Hat
Re: Missing package on one of the Fedora 23 slaves?
Another instance:

17:37:47 Error: nothing provides ovirt-vmconsole >= 1.0.0-0 needed by
vdsm-4.17.11-9.gitdcd50d8.fc23.noarch.
17:37:47 nothing provides ovirt-vmconsole >= 1.0.0-0 needed by
vdsm-4.17.11-9.gitdcd50d8.fc23.noarch.

http://jenkins.ovirt.org/job/vdsm_3.6_check-patch-fc23-x86_64/80/console

Nir

On Mon, Nov 30, 2015 at 6:33 PM, Anton Marchukov wrote:
> Hello Tal.
>
> As I see this is mock build, so it uses chroot and install the packages
> based on specs and what is available in the repos. So most likely there is
> something wrong in that part, will have a more detailed look into this.
>
> Anton.
>
> On Mon, Nov 30, 2015 at 5:27 PM, Tal Nisan wrote:
>> Hey guys,
>> It seems that one package is missing on this slave, if I remember
>> correctly it happened in the past in one of the other slaves, can you
>> check?
>>
>> http://jenkins.ovirt.org/job/vdsm_3.6_check-patch-fc23-x86_64/78/console
>
> --
> Anton Marchukov
> Senior Software Engineer - RHEV CI - Red Hat
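As an aside, errors of this shape are easy to tally across console logs. A quick sketch (assuming only the error format quoted above, not any mock/dnf internals) that extracts the missing requirements:

```python
import re

def missing_requirements(log):
    """Extract (missing package, version, requiring package) tuples from
    dnf/mock "nothing provides" depsolve errors in a console log."""
    pattern = r"nothing provides (\S+) >= (\S+) needed by (\S+)"
    return sorted({(pkg, ver, req.rstrip("."))
                   for pkg, ver, req in re.findall(pattern, log)})

log = ("17:37:47 Error: nothing provides ovirt-vmconsole >= 1.0.0-0 "
       "needed by vdsm-4.17.11-9.gitdcd50d8.fc23.noarch.")
print(missing_requirements(log))
# [('ovirt-vmconsole', '1.0.0-0', 'vdsm-4.17.11-9.gitdcd50d8.fc23.noarch')]
```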
Re: Missing package on one of the Fedora 23 slaves?
On 11/30 20:09, Nir Soffer wrote:
> Another instance:
>
> 17:37:47 Error: nothing provides ovirt-vmconsole >= 1.0.0-0 needed by
> vdsm-4.17.11-9.gitdcd50d8.fc23.noarch.
> 17:37:47 nothing provides ovirt-vmconsole >= 1.0.0-0 needed by
> vdsm-4.17.11-9.gitdcd50d8.fc23.noarch.

Has the ovirt-vmconsole dependency changed lately? Which repo should it be
coming from?

The repos available during the build are declared in the
automation/check-patch.repos file in the vdsm git repo, maybe something is
missing there for fc23?

> http://jenkins.ovirt.org/job/vdsm_3.6_check-patch-fc23-x86_64/80/console
>
> Nir

--
David Caro

Red Hat S.L.
Continuous Integration Engineer - EMEA ENG Virtualization R&D

Tel.: +420 532 294 605
Email: dc...@redhat.com
IRC: dcaro|dcaroest@{freenode|oftc|redhat}
Web: www.redhat.com
RHT Global #: 82-62605
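For anyone unfamiliar with the file David mentions: automation/check-patch.repos lists the extra yum repos the CI tooling makes available inside the mock chroot. I don't have the actual vdsm copy at hand, so the lines below are only a hypothetical illustration of the one-repo-per-line, name-comma-URL format, not the real contents:

```
# automation/check-patch.repos (illustrative only, hypothetical URLs)
ovirt-snapshot,http://resources.ovirt.org/pub/ovirt-3.6-snapshot/rpm/fc23/
ovirt-snapshot-static,http://resources.ovirt.org/pub/ovirt-3.6-snapshot-static/rpm/fc23/
```

If no fc23 entry there points at a repo carrying ovirt-vmconsole, the chroot has nothing that provides it, which would explain the depsolve error above.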
Re: Missing package on one of the Fedora 23 slaves?
Adding Francesco

On Mon, Nov 30, 2015 at 8:12 PM, David Caro wrote:
> On 11/30 20:09, Nir Soffer wrote:
>> Another instance:
>>
>> 17:37:47 Error: nothing provides ovirt-vmconsole >= 1.0.0-0 needed by
>> vdsm-4.17.11-9.gitdcd50d8.fc23.noarch.
>
> Has the ovirt-vmconsole dependency changed lately? Which repo should it be
> coming from?
>
> The repos available during the build are declared in the
> automation/check-patch.repos file in the vdsm git repo, maybe something is
> missing there for fc23?
>
>> http://jenkins.ovirt.org/job/vdsm_3.6_check-patch-fc23-x86_64/80/console
>>
>> Nir
[oVirt Jenkins] ovirt-engine_3.6_upgrade-from-3.6_el6_merged - Build # 538 - Still Failing!
Project: http://jenkins.ovirt.org/job/ovirt-engine_3.6_upgrade-from-3.6_el6_merged/
Build: http://jenkins.ovirt.org/job/ovirt-engine_3.6_upgrade-from-3.6_el6_merged/538/
Build Number: 538
Build Status: Still Failing
Triggered By: Triggered by Gerrit: https://gerrit.ovirt.org/49403

Changes Since Last Success:

Changes for Build #537
[Yedidyah Bar David] packaging: setup: pki: Do not fail if pkcs12 unreadable

Changes for Build #538
[Moti Asayag] core: Ignore network exceptions during maintenance

Failed Tests:
No tests ran.
[oVirt Jenkins] ovirt-engine_3.6_upgrade-from-3.5_el6_merged - Build # 548 - Failure!
Project: http://jenkins.ovirt.org/job/ovirt-engine_3.6_upgrade-from-3.5_el6_merged/
Build: http://jenkins.ovirt.org/job/ovirt-engine_3.6_upgrade-from-3.5_el6_merged/548/
Build Number: 548
Build Status: Failure
Triggered By: Triggered by Gerrit: https://gerrit.ovirt.org/49408

Changes Since Last Success:

Changes for Build #548
[Yedidyah Bar David] packaging: setup: pki: Do not fail if pkcs12 unreadable

Failed Tests:
No tests ran.
[oVirt Jenkins] ovirt-engine_3.6_upgrade-from-3.6_el6_merged - Build # 537 - Failure!
Project: http://jenkins.ovirt.org/job/ovirt-engine_3.6_upgrade-from-3.6_el6_merged/
Build: http://jenkins.ovirt.org/job/ovirt-engine_3.6_upgrade-from-3.6_el6_merged/537/
Build Number: 537
Build Status: Failure
Triggered By: Triggered by Gerrit: https://gerrit.ovirt.org/49408

Changes Since Last Success:

Changes for Build #537
[Yedidyah Bar David] packaging: setup: pki: Do not fail if pkcs12 unreadable

Failed Tests:
No tests ran.
Re: [RFC] Proposal for dropping FC22 jenkins tests on master branch
On Thu, Nov 12, 2015 at 9:34 AM, Sandro Bonazzola wrote:
> Hi,
> can we drop FC22 testing in jenkins now that FC23 jobs are up and running?
> It will reduce jenkins load. If needed we can keep FC22 builds, just
> dropping the check jobs.
> Comments?

This morning the queue is up to 233 jobs, can we drop the fc22 build on
master?

--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
Logwatch for linode01.ovirt.org (Linux)
### Logwatch 7.3.6 (05/19/07)
Processing Initiated: Mon Nov 30 03:35:09 2015
Date Range Processed: yesterday ( 2015-Nov-29 )
Period is day.
Detail Level of Output: 0
Type of Output: unformatted
Logfiles for Host: linode01.ovirt.org

--------------------- Dovecot Begin ------------------------

Dovecot disconnects:
    Logged out: 1435 Time(s)

---------------------- Dovecot End -------------------------

--------------------- httpd Begin ------------------------

A total of 1 sites probed the server
    176.9.150.110

Requests with error response codes
    404 Not Found
       /: 254 Time(s)
       //admin/categories.php/login.php?cPath= ... product_preview: 3 Time(s)
       /__: 1 Time(s)
       /admin.php: 10 Time(s)
       /admin/: 10 Time(s)
       /admin/board: 3 Time(s)
       /admin/login.php: 10 Time(s)
       /administrator/index.php: 11 Time(s)
       /bitrix/admin/index.php?lang=en: 10 Time(s)
       /blog/: 1 Time(s)
       /blog/robots.txt: 1 Time(s)
       /blog/wp-admin/: 9 Time(s)
       /board: 6 Time(s)
       /category/news/feed: 1 Time(s)
       /category/news/feed/: 25 Time(s)
       /cgi-bin/webproc: 1 Time(s)
       /data/mail/css.php?key=90sec:: 1 Time(s)
       /favicon.ico: 276 Time(s)
       /icons/administrator/index.php: 1 Time(s)
       /icons/wp-login.php: 1 Time(s)
       /index.php?gf_page=upload: 1 Time(s)
       /index.php?option=com_adsmanager=upload=component: 1 Time(s)
       /listinfo/board: 3 Time(s)
       /mailman/administrator/index.php: 1 Time(s)
       /mailman/list: 1 Time(s)
       /mailman/wp-login.php: 1 Time(s)
       /news-and-events/workshop-1-to-3-november-2011/: 1 Time(s)
       /old/wp-admin/: 12 Time(s)
       /pipermail/2015-May/author.html: 1 Time(s)
       /pipermail/commits: 1 Time(s)
       /pipermail/devel/2012-january/000483.html: 1 Time(s)
       /pipermail/engine-commits/2012-august.txt.gz: 1 Time(s)
       /pipermail/engine-commits/2012-august/author.html: 1 Time(s)
       /pipermail/engine-commits/2012-august/date.html: 1 Time(s)
       /pipermail/engine-commits/2012-august/subject.html: 1 Time(s)
       /pipermail/engine-commits/2012-august/thread.html: 1 Time(s)
       /pipermail/engine-commits/2012-december.txt.gz: 1 Time(s)
       /pipermail/engine-commits/2012-december/author.html: 1 Time(s)
       /pipermail/engine-commits/2012-december/date.html: 1 Time(s)
       /pipermail/engine-commits/2012-december/subject.html: 1 Time(s)
       /pipermail/engine-commits/2012-december/thread.html: 1 Time(s)
       /pipermail/engine-commits/2012-july.txt.gz: 1 Time(s)
       /pipermail/engine-commits/2012-july/author.html: 1 Time(s)
       /pipermail/engine-commits/2012-july/date.html: 1 Time(s)
       /pipermail/engine-commits/2012-july/subject.html: 1 Time(s)
       /pipermail/engine-commits/2012-july/thread.html: 1 Time(s)
       /pipermail/engine-commits/2012-june.txt.gz: 1 Time(s)
       /pipermail/engine-commits/2012-june/author.html: 1 Time(s)
       /pipermail/engine-commits/2012-june/date.html: 1 Time(s)
       /pipermail/engine-commits/2012-june/subject.html: 1 Time(s)
       /pipermail/engine-commits/2012-june/thread.html: 1 Time(s)
       /pipermail/engine-commits/2012-may.txt.gz: 1 Time(s)
       /pipermail/engine-commits/2012-may/author.html: 1 Time(s)
       /pipermail/engine-commits/2012-may/date.html: 1 Time(s)
       /pipermail/engine-commits/2012-may/subject.html: 1 Time(s)
       /pipermail/engine-commits/2012-may/thread.html: 1 Time(s)
       /pipermail/engine-commits/2012-november.txt.gz: 1 Time(s)
       /pipermail/engine-commits/2012-november/author.html: 1 Time(s)
       /pipermail/engine-commits/2012-november/date.html: 1 Time(s)
       /pipermail/engine-commits/2012-november/subject.html: 1 Time(s)
       /pipermail/engine-commits/2012-november/thread.html: 1 Time(s)
       /pipermail/engine-commits/2012-october.txt.gz: 1 Time(s)
       /pipermail/engine-commits/2012-october/author.html: 1 Time(s)
       /pipermail/engine-commits/2012-october/date.html: 1 Time(s)
       /pipermail/engine-commits/2012-october/subject.html: 1 Time(s)
       /pipermail/engine-commits/2012-october/thread.html: 1 Time(s)
       /pipermail/engine-commits/2012-september.txt.gz: 1 Time(s)
       /pipermail/engine-commits/2012-september/author.html: 1 Time(s)
       /pipermail/engine-commits/2012-september/date.html: 1 Time(s)
       /pipermail/engine-commits/2012-september/subject.html: 1 Time(s)
       /pipermail/engine-commits/2012-september/thread.html: 1 Time(s)
       /pipermail/engine-commits/2013-april.txt.gz: 1 Time(s)
       /pipermail/engine-commits/2013-april/author.html: 1 Time(s)
       /pipermail/engine-commits/2013-april/date.html: 1 Time(s)
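The "path: N Time(s)" entries above lend themselves to quick triage. A small sketch (not part of Logwatch) that pulls the most-probed paths out of such a section:

```python
import re

def top_404s(report, n=3):
    """Tally "path: N Time(s)" entries from a Logwatch httpd section and
    return the n most frequently hit paths as (count, path) pairs."""
    hits = re.findall(r"^\s*(\S+): (\d+) Time\(s\)$", report, re.MULTILINE)
    return sorted(((int(count), path) for path, count in hits), reverse=True)[:n]

sample = """\
   /favicon.ico: 276 Time(s)
   /: 254 Time(s)
   /admin.php: 10 Time(s)
   /old/wp-admin/: 12 Time(s)
"""
print(top_404s(sample))
# [(276, '/favicon.ico'), (254, '/'), (12, '/old/wp-admin/')]
```

On the report above this would surface /favicon.ico and / first, with the admin/wp-login probes trailing behind.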
Re: [ovirt-devel] [vdsm] strange network test failure on FC23
----- Original Message -----
> From: "Michal Skrivanek"
> To: "Nir Soffer" , "Francesco Romani"
> Cc: "Yaniv Kaul" , "infra" , "devel"
> Sent: Monday, November 30, 2015 9:52:59 AM
> Subject: Re: [ovirt-devel] [vdsm] strange network test failure on FC23
>
> > >> >> Using taskset, the ip command now takes a little longer to complete.
>
> I fail to find the original reference for this.
> Why does it take longer? is it purely the additional taskset executable
> invocation? On busy system we do have these issues all the time, with lvm,
> etc…so I don’t think it’s significant

Yep, that's only the overhead of the taskset executable.

> > >> > Since we always use the same set of CPUs, I assume using a mask (for
> > >> > 0 & 1, just use 0x3, as the man suggests) might be a tiny of a
> > >> > fraction faster to execute taskset with, instead of the need to
> > >> > translate the numeric CPU list.
> > >>
> > >> Creating the string "0-" is one line in vdsm. The code handling this
> > >> in taskset is written in C, so the parsing time is practically zero.
> > >> Even if it was non-zero, this code runs once when we run a child
> > >> process, so the cost is insignificant.
> > >
> > > I think it's easier to just have it as a mask in a config item
> > > somewhere, without need to create it or parse it anywhere.
> > > For us and for the user.
> >
> > We have this option in /etc/vdsm/vdsm.conf:
> >
> > # Comma separated whitelist of CPU cores on which VDSM is allowed to
> > # run. The default is "", meaning VDSM can be scheduled by the OS to
> > # run on any core. Valid examples: "1", "0,1", "0,2,3"
> > # cpu_affinity = 1
> >
> > I think this is the easiest option for users.
>
> +1

+1, modulo the changes we need to fix
https://bugzilla.redhat.com/show_bug.cgi?id=1286462 (patch is coming)

> > > I assume those are cores, which probably in a multi-socket will be in
> > > the first socket only.
> > > There's a good chance that the FC and/or network cards will also bind
> > > their interrupts to core 0 & core 1 (check /proc/interrupts) on the
> > > same socket. From my poor laptop (1s, 4c):

Yes, especially core0 (since 0 is a nice default). This was the rationale
behind the choice of cpu #1 in the first place.

> > It seems that our default (CPU1) is fine.
>
> I think it’s safe enough.
> Numbers above (and I checked the same on ppc with similar pattern) are for
> a reasonably empty system. We can get a different picture when vdsm is
> busy. In general I think it’s indeed best to use the second online CPU for
> vdsm and all CPUs for child processes

Agreed - except for cases like bz1286462 - but let's discuss this on
gerrit/bz

> regarding exposing to users in UI - I think that’s way too low level.
> vdsm.conf is good enough

Agreed. This is one thing that "just works".

Bests,

--
Francesco Romani
RedHat Engineering Virtualization R&D
Phone: 8261328
IRC: fromani
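Since checking /proc/interrupts comes up twice in this thread, here is a rough sketch (not vdsm code) of summing its per-CPU columns, e.g. to confirm that cpu 1 really is quieter than cpu 0 before pinning vdsm to it:

```python
def per_cpu_irq_totals(interrupts_text):
    """Sum the per-CPU count columns of `cat /proc/interrupts` output to
    see which cores service the most interrupts."""
    lines = [ln for ln in interrupts_text.splitlines() if ln.strip()]
    ncpu = len(lines[0].split())           # header row: CPU0 CPU1 ...
    totals = [0] * ncpu
    for line in lines[1:]:
        fields = line.split()[1:]          # drop the "NN:" label column
        for i in range(min(ncpu, len(fields))):
            if fields[i].isdigit():        # trailing fields are device names
                totals[i] += int(fields[i])
    return totals

sample = """\
       CPU0  CPU1
  0:     31     0   IR-IO-APIC-edge  timer
 28:    166     3   IR-PCI-MSI-edge  em1
"""
print(per_cpu_irq_totals(sample))  # [197, 3]
```

Running it against a real host's /proc/interrupts gives a quick per-core busyness ranking like the laptop and 8-core examples quoted in this thread.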
Re: [ovirt-devel] Improving our github presense
On 11/30 20:46, John Hunter wrote:
> Definitely this is a good idea, lots of people hang out on github, this
> will attract more contributors :)

You mean having the github org with people added, or having to maintain
teams also on github?
Imo having people added to the org on github is enough visibility, adding
teams there gives no extra visibility.

> Cheers,
> Zhao
>
> On Mon, Nov 30, 2015 at 6:08 PM, David Caro wrote:
> > I don't think that's a good idea, we are already maintaining such a list
> > in the gerrit groups, doing so also on github would duplicate the effort
> > to maintain such a list in sync and there's no real usage of it anywhere.
>
> --
> Best regards
> Junwang Zhao
> Department of Computer Science
> Peking University
> Beijing, 100871, PRC

--
David Caro

Red Hat S.L.
Continuous Integration Engineer - EMEA ENG Virtualization R&D

Tel.: +420 532 294 605
Email: dc...@redhat.com
IRC: dcaro|dcaroest@{freenode|oftc|redhat}
Web: www.redhat.com
RHT Global #: 82-62605
Re: [RFC] Proposal for dropping FC22 jenkins tests on master branch
On Mon, Nov 30, 2015 at 01:11:25PM +0100, Sandro Bonazzola wrote:
> On Thu, Nov 12, 2015 at 9:34 AM, Sandro Bonazzola wrote:
> > Hi,
> > can we drop FC22 testing in jenkins now that FC23 jobs are up and
> > running? It will reduce jenkins load. If needed we can keep FC22 builds,
> > just dropping the check jobs.
> > Comments?
>
> This morning queue is up to 233 jobs, can we drop fc22 build on master?

+1.

http://jenkins.ovirt.org/job/vdsm_master_check-patch-fc23-x86_64/ and
http://jenkins.ovirt.org/view/All/job/vdsm_master_install-rpm-sanity-fc23_created/
seem good.

Dan.