Re: [ovirt-devel] XML benchmarks

2014-06-29 Thread Nir Soffer
- Original Message -
 From: Francesco Romani from...@redhat.com
 To: devel@ovirt.org
 Sent: Friday, June 27, 2014 3:30:14 PM
 Subject: [ovirt-devel] XML benchmarks
 
 Hi,
 
 Due to the recent discussion (http://gerrit.ovirt.org/#/c/28712/), and as part
 of the ongoing focus on scalability and performance
 (http://gerrit.ovirt.org/#/c/17694/ and many others),
 
 I took the chance to do a very quick and dirty benchmark to see how much it
 really costs to do XML processing in the sampling threads (thanks to Nir for
 the kickstart!) and, in general, how much XML processing costs.
 
 Please find attached the test script and the example XML
 (real one made by VDSM master on my RHEL6.5 box).
 
 On my laptop:
 
 $ lscpu
 Architecture:  x86_64
 CPU op-mode(s):32-bit, 64-bit
 Byte Order:Little Endian
 CPU(s):4
 On-line CPU(s) list:   0-3
 Thread(s) per core:2
 Core(s) per socket:2
 Socket(s): 1
 NUMA node(s):  1
 Vendor ID: GenuineIntel
 CPU family:6
 Model: 58
 Model name:Intel(R) Core(TM) i7-3520M CPU @ 2.90GHz
 Stepping:  9
 CPU MHz:   1359.375
 CPU max MHz:   3600.
 CPU min MHz:   1200.
 BogoMIPS:  5786.91
 Virtualization:VT-x
 L1d cache: 32K
 L1i cache: 32K
 L2 cache:  256K
 L3 cache:  4096K
 NUMA node0 CPU(s): 0-3
 
 8 GiBs of RAM, running GNOME desktop and the usual development stuff
 
 xmlbench.py linuxvm1.xml MODE 300
 
 MODE is either 'md' (minidom) or 'cet' (cElementTree).
 This will run $NUMTHREADS threads fast and loose, without synchronization.
 We can actually get this behaviour if a customer just mass-starts VMs.
 In general I expect some clustering of the sampling activity, not a nice,
 evenly interleaved time sequence.
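 
 The attached xmlbench.py did not make it into this archive; what follows is a
 minimal sketch of what such a benchmark could look like, assuming one sampling
 thread per simulated VM that re-parses the same domain XML at a fixed interval
 (only the file name, the 'md'/'cet' modes and the argument order come from this
 message; everything else is guesswork):
 
 #!/usr/bin/python
 # Hypothetical reconstruction -- not the original attachment.
 # Usage: xmlbench.py <domain.xml> <md|cet> <num_threads>
 import sys
 import threading
 import time
 import xml.dom.minidom
 import xml.etree.cElementTree as cet
 
 INTERVAL = 15   # assumed sampling period, in seconds
 ROUNDS = 4      # sampling rounds each simulated VM performs
 
 PARSERS = {
     'md': xml.dom.minidom.parseString,   # builds a full DOM tree
     'cet': cet.fromstring,               # builds an ElementTree
 }
 
 def sample(parse, data):
     # mimic one per-VM sampling thread: parse the domain XML, then sleep
     for _ in range(ROUNDS):
         parse(data)
         time.sleep(INTERVAL)
 
 def main():
     path, mode, num_threads = sys.argv[1], sys.argv[2], int(sys.argv[3])
     with open(path) as f:
         data = f.read()
     threads = [threading.Thread(target=sample, args=(PARSERS[mode], data))
                for _ in range(num_threads)]
     for t in threads:
         t.start()
     for t in threads:
         t.join()
 
 if __name__ == '__main__':
     main()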
 
 CPU measurement: just opened a terminal and ran 'htop' in it.
 CPU profile: clustered around the sampling interval. Usage is negligible most
 of the time, with a peak on sampling as shown below.
 
 300 VMs
 minidom: ~38% CPU
 cElementTree: ~5% CPU

What is 38%? (38% of one core? How many cores are on the machine?)

 
 500 VMs
 minidom: ~48% CPU
 cElementTree: ~6% CPU
 
 1000 VMs
 python thread error :)
 
   File "/usr/lib64/python2.7/threading.py", line 746, in start
 _start_new_thread(self.__bootstrap, ())
 thread.error: can't start new thread
 
 
 I think this is another proof (if we need more of them) that
 * we _really need_ to move away from the 1-thread-per-VM model -
 http://gerrit.ovirt.org/#/c/29189/ and friends! Let's fire up the
 discussion!
 * we should move to cElementTree anyway in the near future: faster
 processing, scales better, nicer API (see the small comparison below).
   It is also a pet peeve of mine; I do have some patches floating around, but
   we still need some preparation work in the virt package.
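 
 For reference, a hypothetical side-by-side of the two APIs on a typical
 sampling task -- pulling the disk target names out of the domain XML. This
 snippet is illustrative only and is not taken from the attached script:
 
 import sys
 import xml.dom.minidom
 import xml.etree.cElementTree as cet
 
 data = open(sys.argv[1]).read()   # e.g. the attached linuxvm1.xml
 
 # minidom: manual tree walking, attribute access via methods
 dom = xml.dom.minidom.parseString(data)
 devs_md = [target.getAttribute('dev')
            for disk in dom.getElementsByTagName('disk')
            for target in disk.getElementsByTagName('target')]
 
 # cElementTree: one XPath-style lookup
 tree = cet.fromstring(data)
 devs_cet = [target.get('dev')
             for target in tree.findall('./devices/disk/target')]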

Seeing the load created by parsing libvirt XML every 15 seconds, I think
we should consider decreasing the sample rate suggested in
http://gerrit.ovirt.org/28712, or collecting the data in another way.

Nir
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel


Re: [ovirt-devel] Failed to run make rpm on VDSM latest master

2014-06-29 Thread Saggi Mizrahi
We have ioprocess in EPEL and brew builds for el6\7, and builds for f19\f20
in bodhi.

There is no such thing as pushing to stable. In order to be deemed stable,
a package has to sit in updates-testing for 30 days without negative karma,
or get enough (3) positive karma.

We have notified the mailing list that ioprocess is in need of
karma. In any case, you should start getting used to fetching ioprocess
builds from updates-testing when working on the master branch as
we are not going to wait for a month before adding capabilities to VDSM
after adding a feature to ioprocess (or any other external dependency).

There is no reason for VDSM's master branch to only use RPMs from stable
repos. Quite the contrary, the reason that updates-testing exists is so
that consumers of the RPM can test the impending release and make sure it
works before it is delivered to the general public.

Furthermore, after using a build from updates-testing with VDSM and seeing
that it works, it would be most appreciated if you give karma
to accelerate the process of getting a release to the stable repo.

For more information:
https://fedoraproject.org/wiki/QA:Updates_Testing

Links to builds:
https://brewweb.devel.redhat.com/packageinfo?packageID=47301
https://admin.fedoraproject.org/updates/search/ioprocess?_csrf_token=adc891f4d62106b08829fe32def78b296e168eed

 
 On Thu, Jun 26, 2014 at 01:38:45PM -0400, Nir Soffer wrote:
  - Original Message -
   From: Dan Kenigsberg dan...@redhat.com
   To: Nir Soffer nsof...@redhat.com
   Cc: ybronhei ybron...@redhat.com, Oved Ourfali ov...@redhat.com,
   Eli Mesika emes...@redhat.com, Yeela
   Kaplan ykap...@redhat.com, Saggi Mizrahi smizr...@redhat.com
   Sent: Thursday, June 26, 2014 8:11:59 PM
   Subject: Re: Failed to run make rpm on VDSM latest master
   
   On Thu, Jun 26, 2014 at 11:59:13AM -0400, Nir Soffer wrote:
- Original Message -
 From: ybronhei ybron...@redhat.com
 To: Oved Ourfali ov...@redhat.com, Eli Mesika
 emes...@redhat.com
 Cc: Nir Soffer nsof...@redhat.com, Yeela Kaplan
 ykap...@redhat.com, Saggi Mizrahi smizr...@redhat.com
 Sent: Thursday, June 26, 2014 2:22:16 PM
 Subject: Re: Failed to run make rpm on VDSM latest master
 
 On 06/26/2014 07:53 AM, Oved Ourfali wrote:
  cc-ing also Yeela and Saggi.
 
  Oved
 
  - Original Message -
  From: Eli Mesika emes...@redhat.com
  To: Yaniv Bronheim ybron...@redhat.com, Nir Soffer
  nsof...@redhat.com
  Cc: Oved Ourfalli oourf...@redhat.com
  Sent: Wednesday, June 25, 2014 11:34:24 PM
  Subject: Failed to run make rpm on VDSM latest master
 
 
  Hi, I applied Nir's patch and tried to create RPMs, and this is
  what I got:
 
  error: Failed build dependencies:
   python-ioprocess = 0.5-1 is needed by
   vdsm-4.15.0-185.git5b501bf.el6.x86_64
 
 
  Then I switched to the master branch and got the same error.
 
  How can I install this new dependency?
 
 
  Thanks
  Eli Mesika
 
 
 Solved?
 Just: yum install python-ioprocess
 I will update the vdsm_Developers wiki page.

# yum install python-ioprocess
No package python-ioprocess available.

Looks like broken (again) master to me
   
   # yum --enablerepo=epel-testing install python-ioprocess
   
   Fixed this for me. Saggi/Yeela, could you push it to stable and produce an
   f19 build? That would help plenty of us.
  
  This patch should fix the situation for everyone, allowing easy testing.
  http://gerrit.ovirt.org/29304
  
  Nir
 
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel


Re: [ovirt-devel] Failed to run make rpm on VDSM latest master

2014-06-29 Thread Saggi Mizrahi


- Original Message -
 From: Nir Soffer nsof...@redhat.com
 To: Saggi Mizrahi smizr...@redhat.com
 Cc: devel@ovirt.org, Yeela Kaplan ykap...@redhat.com, bazu...@redhat.com, 
 Dan Kenigsberg dan...@redhat.com
 Sent: Sunday, June 29, 2014 2:48:49 PM
 Subject: Re: Failed to run make rpm on VDSM latest master
 
 - Original Message -
  From: Saggi Mizrahi smizr...@redhat.com
  To: Nir Soffer nsof...@redhat.com, devel@ovirt.org
  Cc: Yeela Kaplan ykap...@redhat.com, bazu...@redhat.com, Dan
  Kenigsberg dan...@redhat.com
  Sent: Sunday, June 29, 2014 12:29:35 PM
  Subject: Re: Failed to run make rpm on VDSM latest master
  
  We have ioprocess in EPEL and brew builds for el6\7, and builds for f19\f20
  in bodhi.
 
 Which repository provides these packages on RHEL 6.5?
I linked the brew\bodhi pages which have that information.
 
 We have notified the mailing list that ioprocess is in need of
  karma. In any case, you should start getting used to fetching ioprocess
  builds from updates-testing when working on the master branch as
  we are not going to wait for a month before adding capabilities to VDSM
  after adding a feature to ioprocess (or any other external dependency).
 
 I checked and it is available now from the updates-testing repo, but it
 was not available last week when the requirement was merged.
It sometimes takes time for the fedora mirrors to sync. Nothing I
can do about it. In general, you can always search bodhi\brew if
yum can't find it. You know there will always be a build since we can't
pass CI if we can't supply a koji\EPEL build to the CI team to install
on the slaves.
 
   On Thu, Jun 26, 2014 at 01:38:45PM -0400, Nir Soffer wrote:
- Original Message -
 From: Dan Kenigsberg dan...@redhat.com
 To: Nir Soffer nsof...@redhat.com
 Cc: ybronhei ybron...@redhat.com, Oved Ourfali
 ov...@redhat.com,
 Eli Mesika emes...@redhat.com, Yeela
 Kaplan ykap...@redhat.com, Saggi Mizrahi smizr...@redhat.com
 Sent: Thursday, June 26, 2014 8:11:59 PM
 Subject: Re: Failed to run make rpm on VDSM latest master
 
 On Thu, Jun 26, 2014 at 11:59:13AM -0400, Nir Soffer wrote:
  - Original Message -
   From: ybronhei ybron...@redhat.com
   To: Oved Ourfali ov...@redhat.com, Eli Mesika
   emes...@redhat.com
   Cc: Nir Soffer nsof...@redhat.com, Yeela Kaplan
   ykap...@redhat.com, Saggi Mizrahi smizr...@redhat.com
   Sent: Thursday, June 26, 2014 2:22:16 PM
   Subject: Re: Failed to run make rpm on VDSM latest master
   
   On 06/26/2014 07:53 AM, Oved Ourfali wrote:
cc-ing also Yeela and Saggi.
   
Oved
   
- Original Message -
From: Eli Mesika emes...@redhat.com
To: Yaniv Bronheim ybron...@redhat.com, Nir Soffer
nsof...@redhat.com
Cc: Oved Ourfalli oourf...@redhat.com
Sent: Wednesday, June 25, 2014 11:34:24 PM
Subject: Failed to run make rpm on VDSM latest master
   
   
Hi, I applied Nir's patch and tried to create RPMs, and this is
what I got:
   
error: Failed build dependencies:
 python-ioprocess = 0.5-1 is needed by
 vdsm-4.15.0-185.git5b501bf.el6.x86_64
   
   
Then I switched to the master branch and got the same error.

How can I install this new dependency?
   
   
Thanks
Eli Mesika
   
   
   Solved?
   Just: yum install python-ioprocess
   I will update the vdsm_Developers wiki page.
  
  # yum install python-ioprocess
  No package python-ioprocess available.
  
  Looks like broken (again) master to me
 
 # yum --enablerepo=epel-testing install python-ioprocess
 
 Fixed this for me. Saggi/Yeela, could you push it to stable and produce an
 f19 build? That would help plenty of us.

This patch should fix the situation for everyone, allowing easy
testing.
http://gerrit.ovirt.org/29304

Nir
   
  
 
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel


Re: [ovirt-devel] XML benchmarks

2014-06-29 Thread Saggi Mizrahi
It's good to see us moving away from minidom.
I do think there is a place, though, to abstract out
common use cases so that we are not tied to an API and
we do the optimal thing for most use cases.

- Original Message -
 From: Francesco Romani from...@redhat.com
 To: devel@ovirt.org
 Sent: Friday, June 27, 2014 3:30:14 PM
 Subject: [ovirt-devel] XML benchmarks
 
 Hi,
 
 Due to the recent discussion (http://gerrit.ovirt.org/#/c/28712/), and as part
 of the ongoing focus on scalability and performance
 (http://gerrit.ovirt.org/#/c/17694/ and many others),
 
 I took the chance to do a very quick and dirty benchmark to see how much it
 really costs to do XML processing in the sampling threads (thanks to Nir for
 the kickstart!) and, in general, how much XML processing costs.
 
 Please find attached the test script and the example XML
 (real one made by VDSM master on my RHEL6.5 box).
 
 On my laptop:
 
 $ lscpu
 Architecture:  x86_64
 CPU op-mode(s):32-bit, 64-bit
 Byte Order:Little Endian
 CPU(s):4
 On-line CPU(s) list:   0-3
 Thread(s) per core:2
 Core(s) per socket:2
 Socket(s): 1
 NUMA node(s):  1
 Vendor ID: GenuineIntel
 CPU family:6
 Model: 58
 Model name:Intel(R) Core(TM) i7-3520M CPU @ 2.90GHz
 Stepping:  9
 CPU MHz:   1359.375
 CPU max MHz:   3600.
 CPU min MHz:   1200.
 BogoMIPS:  5786.91
 Virtualization:VT-x
 L1d cache: 32K
 L1i cache: 32K
 L2 cache:  256K
 L3 cache:  4096K
 NUMA node0 CPU(s): 0-3
 
 8 GiBs of RAM, running GNOME desktop and the usual development stuff
 
 xmlbench.py linuxvm1.xml MODE 300
 
 MODE is either 'md' (minidom) or 'cet' (cElementTree).
 This will run $NUMTHREADS threads fast and loose, without synchronization.
 We can actually get this behaviour if a customer just mass-starts VMs.
 In general I expect some clustering of the sampling activity, not a nice,
 evenly interleaved time sequence.
 
 CPU measurement: just opened a terminal and ran 'htop' in it.
 CPU profile: clustered around the sampling interval. Usage is negligible most
 of the time, with a peak on sampling as shown below.
 
 300 VMs
 minidom: ~38% CPU
 cElementTree: ~5% CPU
 
 500 VMs
 minidom: ~48% CPU
 cElementTree: ~6% CPU
 
 1000 VMs
 python thread error :)
 
   File "/usr/lib64/python2.7/threading.py", line 746, in start
 _start_new_thread(self.__bootstrap, ())
 thread.error: can't start new thread
 
 
 I think this is another proof (if we need more of them) that
 * we _really need_ to move away from the 1-thread-per-VM model -
 http://gerrit.ovirt.org/#/c/29189/ and friends! Let's fire up the
 discussion!
 * we should move to cElementTree anyway in the near future: faster
 processing, scales better, nicer API.
   It is also a pet peeve of mine; I do have some patches floating around, but
   we still need some preparation work in the virt package.
 
 
 --
 Francesco Romani
 Red Hat Engineering Virtualization R&D
 Phone: 8261328
 IRC: fromani
 
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel


Re: [ovirt-devel] [vdsm] Infrastructure design for node (host) devices

2014-06-29 Thread Saggi Mizrahi


- Original Message -
 From: Martin Polednik mpole...@redhat.com
 To: devel@ovirt.org
 Sent: Tuesday, June 24, 2014 1:26:17 PM
 Subject: [ovirt-devel] [vdsm] Infrastructure design for node (host) devices
 
 Hello,
 
 I'm actively working on getting host device passthrough (pci, usb and scsi)
 exposed in VDSM, but I've encountered the growing complexity of this feature.
 
 The devices are currently created in the same manner as virtual devices, and
 their reporting is done via the hostDevices list in getCaps. As I implemented
 usb and scsi devices, the size of this list almost doubled - and that is
 on a laptop.
There should be a separate verb with the ability to filter by type.
 
 A similar problem exists with the devices themselves: they are closely tied to
 the host, and currently engine would have to keep their mapping to VMs,
 reattach loose devices, and handle all of this in case of migration.
Migration sounds very complicated, especially at the phase where the VM actually
starts running on the target host. The hardware state is completely different,
but the guest OS wouldn't have any idea that this happened.
So detaching before migration and then reattaching on the destination is a must,
but that could cause issues in the guest. I'd imagine this would also be an
issue when hibernating on one host and waking up on another.
 
 I would like to hear your opinion on building something like a host device pool
 in VDSM. The pool would be populated and periodically updated (to handle
 hot(un)plugs), and VMs/engine could query it for free/assigned/possibly
 problematic devices (which could be reattached by the pool). This has the added
 benefit of requiring fewer libvirt calls, but a bit more complexity and
 possibly one more thread. The persistence of the pool across VDSM restarts
 could be kept in config or reconstructed from XML.
I'd much rather VDSM not cache state unless it is absolutely necessary.
This sounds like something that doesn't need to be queried every 3 seconds,
so it's best if we just ask libvirt.

I do wonder how that kind of thing can be configured in the VM creation
phase, as you would sometimes want to just specify a type of device and
sometimes a specific one. Also, I'd assume there will be a fallback policy
stating whether the VM should run if said resource is unavailable.
 
 I'd need new API verbs to allow engine to communicate with the pool,
 possibly leaving caps as they are; engine could detect the presence of a
 newer vdsm by the presence of these API verbs.
Again, I think that getting a list of devices filterable by kind\type might
be better than a real pool. We might want to return whether a device is in use
(it could also be in use by the host operating system and not just by VMs).
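
For illustration, a minimal sketch of what such a type-filterable listing could
look like on top of the libvirt python bindings (listAllDevices() and the
VIR_CONNECT_LIST_NODE_DEVICES_CAP_* flags are existing libvirt API; the wrapper
function and its name are made up here):

import libvirt

# map an optional device type to the matching libvirt listing flag;
# no filter (flags=0) means "list every node device"
_CAP_FLAGS = {
    'pci': libvirt.VIR_CONNECT_LIST_NODE_DEVICES_CAP_PCI_DEV,
    'usb': libvirt.VIR_CONNECT_LIST_NODE_DEVICES_CAP_USB_DEV,
    'scsi': libvirt.VIR_CONNECT_LIST_NODE_DEVICES_CAP_SCSI,
}

def list_host_devices(dev_type=None):
    conn = libvirt.openReadOnly(None)   # default hypervisor URI
    try:
        flags = _CAP_FLAGS.get(dev_type, 0)
        return [dev.name() for dev in conn.listAllDevices(flags)]
    finally:
        conn.close()

In VDSM itself this would presumably reuse the existing libvirt connection
rather than opening a new one, and could also report whether each device is
already in use.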
 The vmCreate call would remain almost the same, only with the addition of a
 new device for VMs (where the detach and tracking routine would be handled in
 coordination with the pool).
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel


Re: [ovirt-devel] Project vdsm-jsonrpc-java backup maintainer

2014-06-29 Thread Eli Mesika


- Original Message -
 From: Sandro Bonazzola sbona...@redhat.com
 To: infra in...@ovirt.org, devel@ovirt.org
 Cc: Piotr Kliczewski pklic...@redhat.com
 Sent: Friday, June 27, 2014 5:30:34 PM
 Subject: [ovirt-devel] Project vdsm-jsonrpc-java backup maintainer
 
 Hi,
 looks like only one person has +2 / merge rights on vdsm-jsonrpc-java.
 I think every project should have a backup maintainer.
 Since it seems pkliczewski is the only committer there, I would like to
 propose him as maintainer too.

+1

 
 
 --
 Sandro Bonazzola
 Better technology. Faster innovation. Powered by community collaboration.
 See how it works at redhat.com
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel


Re: [ovirt-devel] Project vdsm-jsonrpc-java backup maintainer

2014-06-29 Thread Alon Bar-Lev


- Original Message -
 From: Sandro Bonazzola sbona...@redhat.com
 To: infra in...@ovirt.org, devel@ovirt.org
 Cc: Piotr Kliczewski pklic...@redhat.com
 Sent: Friday, June 27, 2014 5:30:34 PM
 Subject: Project vdsm-jsonrpc-java backup maintainer
 
 Hi,
 looks like only one person has +2 / merge rights on vdsm-jsonrpc-java.
 I think every project should have a backup maintainer.
 Since it seems pkliczewski is the only committer there, I would like to
 propose him as maintainer too.
 

My usual statement:

The term 'backup maintainer' is wrong in my opinion.

1. There are no emergencies in an upstream project.

2. A downstream maintainer can always add patches on top of an upstream release if
there is a downstream emergency; this is the reason for having a downstream.

3. For small and medium projects there should be only one maintainer, to keep the
project consistent.

4. Large projects should be modular (components with their own release cycles), so
a single maintainer is assigned to each module. vdsm-jsonrpc-java is a good example
of taking a component out of the monolithic engine into a module with its own
release cycle and its own maintainer.

5. If the maintainer of a component is unresponsive or not productive, maintainership
should be assigned to a different person.

Regards,
Alon
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel