Re: [Openstack] Need help to find code which populates compute_nodes table in nova database

2015-09-21 Thread Sylvain Bauza



On 21/09/2015 08:21, Mahendra Ladhe wrote:

Hi,
Could someone please tell me which code (file name and/or function name) actually populates the compute_nodes table in the 'nova' database?
I am unable to find it on my own.



https://github.com/openstack/nova/blob/master/nova/compute/resource_tracker.py#L365-L402
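For readers skimming the archive: the linked resource tracker code effectively does a create-or-update of one row per hypervisor on each periodic run. A toy sqlite sketch of that behaviour (the column set is heavily simplified and is not Nova's actual schema):

```python
import sqlite3

# Toy model of what the resource tracker's update path effectively does:
# one row per hypervisor, created on the first run, refreshed on later runs.
conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE compute_nodes ("
             "hypervisor_hostname TEXT PRIMARY KEY, vcpus INT, memory_mb INT)")

def update_available_resource(host, vcpus, memory_mb):
    # create-or-update, as the periodic task would on each pass
    conn.execute("INSERT OR REPLACE INTO compute_nodes VALUES (?, ?, ?)",
                 (host, vcpus, memory_mb))

update_available_resource('node-1', 8, 16384)
update_available_resource('node-1', 8, 32768)   # a later pass sees more RAM
print(conn.execute("SELECT * FROM compute_nodes").fetchall())
# -> [('node-1', 8, 32768)]
```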

HTH,
-Sylvain


Thank you
Mahendra



___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack




Re: [Openstack] Energy/Kwapi metering using ceilometer in OpenStack Icehouse

2014-10-14 Thread Sylvain Bauza


On 14/10/2014 08:31, Vivek Varghese Cherian wrote:




On Fri, Oct 10, 2014 at 6:38 PM, Sylvain Bauza <sba...@redhat.com> wrote:




On 10/10/2014 14:49, Vivek Varghese Cherian wrote:



The answer is quite simple: the version of pbr pinned by Kwapi is really
old, and Kwapi is not following the OpenStack requirements file [1].

So I proposed a quick fix updating pbr; the patch is here:
https://review.openstack.org/127218


If you want to try it yourself, just clone the Kwapi repository and
run this command at the Kwapi root:

git fetch https://review.openstack.org/stackforge/kwapi refs/changes/18/127218/2 && git cherry-pick FETCH_HEAD


-Sylvain

[1]

https://github.com/openstack/requirements/blob/master/global-requirements.txt


Hi Sylvain,

I applied the patch, tried the install, and I am getting the
following error.


root@ice14:~/kwapi# ./setup.py install
running install
Downloading/unpacking pbr>=0.6,!=0.7,<1.0
  Cannot fetch index base URL https://pypi.python.org/simple/
  Could not find any downloads that satisfy the requirement pbr>=0.6,!=0.7,<1.0

Cleaning up...
No distributions at all found for pbr>=0.6,!=0.7,<1.0
Storing debug log for failure in /home/ppm/.pip/pip.log
error: ['/usr/bin/python', u'-m', u'pip.__init__', u'install', u'pbr>=0.6,!=0.7,<1.0', u'd2to1>=0.2.10,<0.3', u'eventlet', u'flask', u'iso8601', u'kombu', u'oslo.config', u'pyserial', u'pysnmp', u'python-keystoneclient', u'pyzmq', u'python-rrdtool', u'webob'] returned 1

root@ice14:~/kwapi#




First, my patch has been updated; the command is now:

git fetch https://review.openstack.org/stackforge/kwapi refs/changes/18/127218/3 && git cherry-pick FETCH_HEAD



Then please run 'pip install -r requirements.txt', and don't run
./setup.py install yet.


If you have trouble downloading pbr using the command I mentioned
above, you could have a proxy issue or a connection issue; if not,
that should install all the requirements.


Once you're done, just run './setup.py install'; that should deploy
the source files into the corresponding Python path.


-Sylvain


Thanks,
--
Vivek Varghese Cherian




Re: [Openstack] Energy/Kwapi metering using ceilometer in OpenStack Icehouse

2014-10-10 Thread Sylvain Bauza


On 10/10/2014 14:49, Vivek Varghese Cherian wrote:



On Fri, Oct 10, 2014 at 6:05 PM, Bruno Grazioli <bruno.graz...@gmail.com> wrote:


Hi,

To be honest, this one is strange to me; the error is very broad
and there are many possible solutions for it.
From a quick look on the internet, it seems to be related to your
pip package. Which version of pip are you using?
Check out this link [1]; it may have the answer to your question.

BR,
Bruno.


Hi,

The pip version is as follows,

root@ice14:~/kwapi# pip --version
pip 1.5.4 from /usr/lib/python2.7/dist-packages (python 2.7)
root@ice14:~/kwapi#




The answer is quite simple: the version of pbr pinned by Kwapi is really
old, and Kwapi is not following the OpenStack requirements file [1].


So I proposed a quick fix updating pbr; the patch is here:
https://review.openstack.org/127218


If you want to try it yourself, just clone the Kwapi repository and
run this command at the Kwapi root:

git fetch https://review.openstack.org/stackforge/kwapi refs/changes/18/127218/2 && git cherry-pick FETCH_HEAD



-Sylvain

[1] 
https://github.com/openstack/requirements/blob/master/global-requirements.txt


--
Vivek Varghese Cherian




Re: [Openstack] [energy] How to enable kwapi plugin in Ceilometer ?

2014-08-08 Thread Sylvain Bauza


On 08/08/2014 11:14, Deepthi Dharwar wrote:

Hi all,

I am running devstack with Ceilometer enabled. I am looking to gather
energy and power stats. I have installed the kwapi plugin and am able to
retrieve power numbers via the kwapi-driver.

I need some help with how to enable gathering of these power stats in
Ceilometer, and with what config changes are needed on the Ceilometer
side for the same.

Regards,
Deepthi


Redirecting to Francois Rossigneux, who is the main contributor...

-Sylvain




Re: [Openstack] Custom Nova Scheduler Weigher

2014-08-01 Thread Sylvain Bauza


On 01/08/2014 18:46, Danny Beutler wrote:
I am in the process of implementing a custom weigher class. I have
created a weigher that prefers hosts which do not have other instances
in the same group (think GroupAntiAffinityFilter, but for weight).


Here is the code for the class:

# Copyright (c) 2011 OpenStack Foundation
# All Rights Reserved.
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

"""
AntiAffinityWeigher. Weigh hosts by whether or not they have another
instance in the same group.
"""

from oslo.config import cfg

from nova.openstack.common.gettextutils import _
from nova.openstack.common import log as logging
from nova.scheduler import weights

LOG = logging.getLogger(__name__)

anti_affinity_weight_opts = [
cfg.FloatOpt('antiAffinityWeigher_Multiplier',
 default=1000.0,
 help='Multiplier used for weighing hosts.  Negative '
  'numbers mean to stack vs spread.'),
]

CONF = cfg.CONF
CONF.register_opts(anti_affinity_weight_opts)


class AntiAffinityWeigher(weights.BaseHostWeigher):
    def _weight_multiplier(self):
        """Override the weight multiplier."""
        return CONF.antiAffinityWeigher_Multiplier

    def _weigh_object(self, host_state, weight_properties):
        group_hosts = weight_properties.get('group_hosts') or []
        LOG.debug(_("Group anti affinity Weigher: check if %(host)s not "
                    "in %(configured)s"), {'host': host_state.host,
                                           'configured': group_hosts})
        if group_hosts:
            # group_hosts is a list, so use len() (lists have no amount())
            return len(group_hosts) * 10

        # No groups configured
        return 0
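Stripped of the Nova plumbing, the intended behaviour can be sketched in plain Python (host names, numbers, and the selection rule below are invented for illustration, not Nova's scheduler code):

```python
# Toy sketch of group anti-affinity weighing, outside Nova.
# Each host's score counts how many group members it already holds;
# the host with the fewest is then preferred (spreading the group).
MULTIPLIER = 1000.0

def weigh_host(host, group_hosts):
    # group_hosts lists the hosts already running a member of the group
    return MULTIPLIER * group_hosts.count(host)

candidates = ['node-1', 'node-2', 'node-3']
group_hosts = ['node-1', 'node-1', 'node-2']
scores = {h: weigh_host(h, group_hosts) for h in candidates}
best = min(scores, key=scores.get)   # fewest group members wins
print(best)   # -> node-3
```

Flipping the sign of the multiplier would flip the preference to stacking, which is what the `antiAffinityWeigher_Multiplier` option's help text hints at.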


I know the Python is at least close to correct because the scheduler
service wouldn't even restart until it was. After I got the bugs
worked out of the module, I modified the /etc/nova/nova.conf
file to add the custom weigher like so:

scheduler_weight_classes=nova.scheduler.weights.all_weighers,nova.scheduler.AntiAffinityWeigher

After restarting the scheduler service I get the following error in
the nova logs:

<178>Aug  1 16:46:11 node-25 nova-nova CRITICAL: Class AntiAffinityWeigher cannot be found (['Traceback (most recent call last):\n', '  File "/usr/lib/python2.6/site-packages/nova/openstack/common/importutils.py", line 31, in import_class\n    return getattr(sys.modules[mod_str], class_str)\n', "AttributeError: 'module' object has no attribute 'AntiAffinityWeigher'\n"])

Traceback (most recent call last):
  File "/usr/bin/nova-scheduler", line 10, in <module>
    sys.exit(main())
  File "/usr/lib/python2.6/site-packages/nova/cmd/scheduler.py", line 39, in main
    topic=CONF.scheduler_topic)
  File "/usr/lib/python2.6/site-packages/nova/service.py", line 257, in create
    db_allowed=db_allowed)
  File "/usr/lib/python2.6/site-packages/nova/service.py", line 139, in __init__
    self.manager = manager_class(host=self.host, *args, **kwargs)
  File "/usr/lib/python2.6/site-packages/nova/scheduler/manager.py", line 65, in __init__
    self.driver = importutils.import_object(scheduler_driver)
  File "/usr/lib/python2.6/site-packages/nova/openstack/common/importutils.py", line 40, in import_object
    return import_class(import_str)(*args, **kwargs)
  File "/usr/lib/python2.6/site-packages/nova/scheduler/filter_scheduler.py", line 59, in __init__
    super(FilterScheduler, self).__init__(*args, **kwargs)
  File "/usr/lib/python2.6/site-packages/nova/scheduler/driver.py", line 103, in __init__
    CONF.scheduler_host_manager)
  File "/usr/lib/python2.6/site-packages/nova/openstack/common/importutils.py", line 40, in import_object
    return import_class(import_str)(*args, **kwargs)
  File "/usr/lib/python2.6/site-packages/nova/scheduler/host_manager.py", line 297, in __init__
    CONF.scheduler_weight_classes)
  File "/usr/lib/python2.6/site-packages/nova/loadables.py", line 105, in get_matching_classes
    obj = importutils.import_class(cls_name)
  File "/usr/lib/python2.6/site-packages/nova/openstack/common/importutils.py", line 35, in import_class
    traceback.format_exception(*sys.exc_info(
ImportError: Class AntiAffinityWeigher cannot be found (['Traceback (most recent call last):\n', '  File "/usr/lib/python2.6/site-packages/nova/openstack/common/importutils.py", line 31, in import_class\n    return getattr(sys.modules[mod_str], class_str)\n', "AttributeError: 'module' object has no attribute 'AntiAf
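[The archived reply is cut off here. For what it's worth, the AttributeError suggests the configured dotted path stops at the `nova.scheduler` package rather than naming the module file that actually defines the class. A minimal reproduction of that failure mode, using a simplified importlib-based stand-in for the `import_class` helper from the traceback, with stdlib names purely for illustration:]

```python
import importlib

def import_class(import_str):
    # simplified stand-in for the helper shown in the traceback:
    # split "some.module.ClassName", import the module, getattr the class
    mod_str, _, class_str = import_str.rpartition('.')
    return getattr(importlib.import_module(mod_str), class_str)

# resolves: JSONDecoder is an attribute of the json package
cls = import_class('json.JSONDecoder')
print(cls.__name__)   # -> JSONDecoder

# fails the same way the scheduler did: the last dotted segment is not
# an attribute of the package, so getattr raises AttributeError
try:
    import_class('json.AntiAffinityWeigher')
except AttributeError as exc:
    print(type(exc).__name__)   # -> AttributeError
```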

Re: [Openstack] Ironic release date?

2013-09-03 Thread Sylvain Bauza

Hi Jake,

Come to #openstack-ironic; I'll try to help you as much as I can. I
tested the latest baremetal driver using devstack and real baremetal
hosts (using IPMI); it works like a charm.

Btw, I'm planning to write a blog post about how to run devstack with
the baremetal driver. Hope it will help you.


-Sylvain

On 03/09/2013 07:13, Jake G. wrote:

Hi all,

I have been unable to get the nova baremetal driver to work to save
my life, so I was wondering when Ironic is expected to be released?
This year, next year sometime?

Thanks!




Re: [Openstack] Live Migration with Gluster Storage

2013-08-20 Thread Sylvain Bauza
Ergh, for some reason the pointer was lost. Here it is : 
http://blog.flaper87.org/post/520b7ff00f06d37b7a766dc0/

[1] :
On 20/08/2013 10:49, Sylvain Bauza wrote:
Please note that there is a huge performance improvement if you
choose to cherry-pick the libgfapi driver, which has recently been
implemented in Nova [1].

That assumes you use Cinder bootable volumes instead of classical
QCOW2 instances, but the improvement is worth it.


-Sylvain


On 20/08/2013 09:29, Marco CONSONNI wrote:

Hello Guilherme and all,

I was able to deploy live migration with gluster: I originally tried
NFS like you did, but I ran into problems.
Gluster, on the contrary, works perfectly and is quite easy to
install and configure.


This is what you need to do for a basic installation, assuming you
have one node working as a gluster server with 2 disks, and a set of
compute nodes working as gluster clients that use the gluster shared
directory for saving the running images.


-- On the gluster server --

1) Prepare the volumes

Assuming that you have two disks (/dev/sdb and /dev/sdc), create a
primary partition on both of them using the fdisk command.
Format the volumes with: sudo mkfs.xfs -i size=512 /dev/sdb1 and sudo mkfs.xfs -i size=512 /dev/sdc1
Prepare two directories for mounting the volumes: sudo mkdir -p /export/brick1 and sudo mkdir -p /export/brick2
Configure /etc/fstab for mounting the volumes by adding the following
lines:

/dev/sdb1  /export/brick1  xfs  defaults  0  2
/dev/sdc1  /export/brick2  xfs  defaults  0  2

Mount the two volumes with: sudo mount -a

2) Install and configure the gluster server

sudo apt-get install glusterfs-server

sudo gluster volume create openstack stripe 2 <gluster server>:/export/brick1 <gluster server>:/export/brick2

sudo gluster volume start openstack

-- On the gluster clients / compute nodes --

1) Install the gluster client with: sudo apt-get install glusterfs-client

2) In /etc/fstab, configure a gluster filesystem mounted at
/var/lib/nova/instances by adding the following line:

<gluster server>:/openstack  /var/lib/nova/instances  glusterfs  defaults,_netdev  0  0

Note that if you already have a /var/lib/nova/instances directory on
the compute node, this fstab entry simply 'hides' it, but the
contents are still there.
This configuration is needed to force the compute node to store
instances on the gluster shared directory.


Hope it helps,
Marco.



On 2013/8/7, Guilherme Russi <luisguilherme...@gmail.com> wrote:


Hello guys,

I've been trying to deploy live migration on my cloud using NFS,
but without success. I'd like to know if somebody has tried live
migration with Gluster Storage: does it work? Any problems when
installing it? Is it easy to install following the documentation
on its website?

The only thing left before my cloud works 100% is live migration.

Thank you all.

Guilherme.











Re: [Openstack] Live Migration with Gluster Storage

2013-08-20 Thread Sylvain Bauza
Please note that there is a huge performance improvement if you
choose to cherry-pick the libgfapi driver, which has recently been
implemented in Nova [1].

That assumes you use Cinder bootable volumes instead of classical
QCOW2 instances, but the improvement is worth it.


-Sylvain

[1] : http://blog.flaper87.org/post/520b7ff00f06d37b7a766dc0/

On 20/08/2013 09:29, Marco CONSONNI wrote:

Hello Guilherme and all,

I was able to deploy live migration with gluster: I originally tried
NFS like you did, but I ran into problems.
Gluster, on the contrary, works perfectly and is quite easy to
install and configure.


This is what you need to do for a basic installation, assuming you
have one node working as a gluster server with 2 disks, and a set of
compute nodes working as gluster clients that use the gluster shared
directory for saving the running images.


-- On the gluster server --

1) Prepare the volumes

Assuming that you have two disks (/dev/sdb and /dev/sdc), create a
primary partition on both of them using the fdisk command.
Format the volumes with: sudo mkfs.xfs -i size=512 /dev/sdb1 and sudo mkfs.xfs -i size=512 /dev/sdc1
Prepare two directories for mounting the volumes: sudo mkdir -p /export/brick1 and sudo mkdir -p /export/brick2
Configure /etc/fstab for mounting the volumes by adding the following
lines:

/dev/sdb1  /export/brick1  xfs  defaults  0  2
/dev/sdc1  /export/brick2  xfs  defaults  0  2

Mount the two volumes with: sudo mount -a

2) Install and configure the gluster server

sudo apt-get install glusterfs-server

sudo gluster volume create openstack stripe 2 <gluster server>:/export/brick1 <gluster server>:/export/brick2

sudo gluster volume start openstack

-- On the gluster clients / compute nodes --

1) Install the gluster client with: sudo apt-get install glusterfs-client

2) In /etc/fstab, configure a gluster filesystem mounted at
/var/lib/nova/instances by adding the following line:

<gluster server>:/openstack  /var/lib/nova/instances  glusterfs  defaults,_netdev  0  0

Note that if you already have a /var/lib/nova/instances directory on
the compute node, this fstab entry simply 'hides' it, but the
contents are still there.
This configuration is needed to force the compute node to store
instances on the gluster shared directory.


Hope it helps,
Marco.



On 2013/8/7, Guilherme Russi wrote:


Hello guys,

I've been trying to deploy live migration on my cloud using NFS,
but without success. I'd like to know if somebody has tried live
migration with Gluster Storage: does it work? Any problems when
installing it? Is it easy to install following the documentation
on its website?

The only thing left before my cloud works 100% is live migration.

Thank you all.

Guilherme.








