Re: [Openstack] keystone installed by devstack redirect http request

2012-08-27 Thread Lu, Lianhao
You're right. The 301 is returned by my HTTP proxy server. The reason is that 
the httplib2 Python module used by the keystone client picks up the proxy server 
from the http_proxy environment variable, but the no_proxy environment variable 
is not actually consulted when establishing the connection.
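
As a hedged workaround sketch (not part of the original thread): because the
httplib2 used by python-keystoneclient reads the proxy from the environment but
ignores no_proxy, one option is simply to drop the proxy variables before the
client connects to a local endpoint, e.g. in Python:

    import os

    # strip proxy settings so the client connects to 127.0.0.1:5000 directly;
    # these are the usual environment variable names, nothing nova-specific
    for var in ('http_proxy', 'HTTP_PROXY', 'https_proxy', 'HTTPS_PROXY'):
        os.environ.pop(var, None)

Unsetting http_proxy in the shell before running the keystone client achieves
the same thing.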

Best Regards,
Lianhao

From: anti...@gmail.com [mailto:anti...@gmail.com] On Behalf Of Dolph Mathews
Sent: Friday, August 24, 2012 8:58 PM
To: Lu, Lianhao
Cc: openstack@lists.launchpad.net
Subject: Re: [Openstack] keystone installed by devstack redirect http request

Keystone doesn't return 301's (ever). However, your 301 response headers show:

Server: BlueCoat-Security-Appliance

I'm guessing that wasn't installed by devstack :)

-Dolph
On Fri, Aug 24, 2012 at 3:03 AM, Lu, Lianhao lianhao...@intel.com wrote:
Hi gang,

I used devstack to install an all-in-one development environment, but the 
keystone service does not seem to be working for me.

The host OS is Ubuntu 12.04 with a statically assigned IP address 
192.168.79.201. Since this host is on an internal network, I have to use a 
gateway (with 2 NICs, IP addresses 192.168.79.1 and 10.239.48.224) to log in 
to the 192.168.79.201 host from the 10.239.48.0/24 network to run devstack.

After running devstack successfully, I found that the keystone service was not 
usable. It mysteriously redirected HTTP requests to the gateway 10.239.48.224 
(see below for the HTTP response and keystone configuration). Does anyone know 
why I see the redirect here? Thanks!

Best Regards,
-Lianhao

$ keystone --debug tenant-list
connect: (127.0.0.1, 5000)
send: 'POST /v2.0/tokens HTTP/1.1\r\nHost: 127.0.0.1:5000\r\nContent-Length: 100\r\ncontent-type: application/json\r\naccept-encoding: gzip, deflate\r\nuser-agent: python-keystoneclient\r\n\r\n{"auth": {"tenantName": "demo", "passwordCredentials": {"username": "admin", "password": "123456"}}}'
reply: 'HTTP/1.1 301 Moved Permanently\r\n'
header: Server: BlueCoat-Security-Appliance
header: Location: http://10.239.48.224
header: Connection: Close
connect: (10.239.48.224, 80)
send: 'POST / HTTP/1.1\r\nHost: 10.239.48.224\r\nContent-Length: 100\r\ncontent-type: application/json\r\naccept-encoding: gzip, deflate\r\nuser-agent: python-keystoneclient\r\n\r\n{"auth": {"tenantName": "demo", "passwordCredentials": {"username": "admin", "password": "123456"}}}'

--
-Dolph


Re: [Openstack] multiple interfaces for floating IPs

2012-08-27 Thread Jay Pipes
No, not that I'm aware of -- at least not on the same compute node...
You can only specify public_interface=XXX for a single interface (or
bridge) used for all floating IPs for the VMs on a compute node.
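
For illustration (hedged; the interface name is a placeholder, not from this
thread), that single flag lives in nova.conf on each compute node, e.g.:

public_interface=eth2

and every floating IP for the VMs on that node is then configured on that one
interface or bridge.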

Best,
-jay

On 08/20/2012 12:13 PM, Juris wrote:
 Greetings everyone,
 
 Just a quick question.
 Is it possible to assign floating IPs to multiple nova node interfaces?
 For instance if I have a server with 4 NICs and I'd like to use NIC1
 for private network, NIC2 for data and management, NIC3 for one of my
 public IP subnets and NIC4 for another public IP subnet?
 
 Best wishes,
 Juris
 


Re: [Openstack] Cannot create snapshots of instances running not on the controller

2012-08-27 Thread Alessandro Tagliapietra
Btw, using

nova image-create --poll 4dcd5bb6-c65b-47dd-9c87-ba3fed624e22 Instance

works fine (the nova command is run on the 1st node), except that it creates a 
new image and not a snapshot.

Best

-- 
Alessandro Tagliapietra | VISup srl
piazza 4 novembre 7
20124 Milano

http://www.visup.it

On 26 Aug 2012, at 18:49, Alessandro Tagliapietra 
tagliapietra.alessan...@gmail.com wrote:

 
 On 25 Aug 2012, at 01:15, Vishvananda Ishaya 
 vishvana...@gmail.com wrote:
 
 Actually it looks like a different error. For some reason container format 
 is being sent in as none on the second node.
 
 Is it possible the original image that you launched the vm from has been 
 deleted? For some reason it can't determine the container format.
 
 Nope, the image from which the instance has been created is still there.
 
 
 If not, can you also make sure that your versions of glance and 
 python-glanceclient are the same on both nodes?
 
 you should be able to do `pip freeze` to see the installed versions.
 
 I'm using the latest version from the Ubuntu 12.04 repo; btw, I can see only:
 
 glance==2012.1
 
 from pip freeze, no python-glanceclient there.
 
 
 
 Vish
 
 On Aug 24, 2012, at 12:10 AM, Alessandro Tagliapietra 
 tagliapietra.alessan...@gmail.com wrote:
 
 Hi Vish,
 
 I already had the setting:
 
 glance_api_servers=10.0.0.1:9292
 
 I've also tried adding
 
 glance_host=10.0.0.1
 
 but I got the same error. Also, after changing the configuration and 
 restarting nova-compute, all the instances are restarted; is that normal?
 
 Best
 
 Alessandro
 
 On 23 Aug 2012, at 20:24, Vishvananda Ishaya 
 vishvana...@gmail.com wrote:
 
 looks like the compute node has a bad setting for glance_api_servers on 
 the second node.
 
 because glance_api_servers defaults to $glance_host:$glance_port, you 
 should be able to fix it by setting:
 
 glance_host = ip where glance is running
 
 in your nova.conf on the second node.
 
 Vish
 
 On Aug 23, 2012, at 10:15 AM, Alessandro Tagliapietra 
 tagliapietra.alessan...@gmail.com wrote:
 
 Hi all,
 
 I have a controller which is running all services and a secondary controller 
 which is in multi_host, so it's running compute, network and api-metadata. 
 From the dashboard I can successfully create snapshots of instances running 
 on the controller, but when I try to create a snapshot of an instance on a 
 compute node I get the following in its logs:
 
 == /var/log/nova/nova-compute.log ==
 2012-08-23 19:08:14 ERROR nova.rpc.amqp [req-66389a04-b071-4641-949b-3df04da85d08 a63f5293c5454a979bddff1415a216f6 e8c3367ff91d44b1ab1b14eb63f48bf7] Exception during message handling
 2012-08-23 19:08:14 TRACE nova.rpc.amqp Traceback (most recent call last):
 2012-08-23 19:08:14 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/rpc/amqp.py", line 253, in _process_data
 2012-08-23 19:08:14 TRACE nova.rpc.amqp     rval = node_func(context=ctxt, **node_args)
 2012-08-23 19:08:14 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 114, in wrapped
 2012-08-23 19:08:14 TRACE nova.rpc.amqp     return f(*args, **kw)
 2012-08-23 19:08:14 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 183, in decorated_function
 2012-08-23 19:08:14 TRACE nova.rpc.amqp     sys.exc_info())
 2012-08-23 19:08:14 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
 2012-08-23 19:08:14 TRACE nova.rpc.amqp     self.gen.next()
 2012-08-23 19:08:14 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 177, in decorated_function
 2012-08-23 19:08:14 TRACE nova.rpc.amqp     return function(self, context, instance_uuid, *args, **kwargs)
 2012-08-23 19:08:14 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 952, in snapshot_instance
 2012-08-23 19:08:14 TRACE nova.rpc.amqp     self.driver.snapshot(context, instance_ref, image_id)
 2012-08-23 19:08:14 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 114, in wrapped
 2012-08-23 19:08:14 TRACE nova.rpc.amqp     return f(*args, **kw)
 2012-08-23 19:08:14 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py", line 714, in snapshot
 2012-08-23 19:08:14 TRACE nova.rpc.amqp     image_file)
 2012-08-23 19:08:14 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/image/glance.py", line 306, in update
 2012-08-23 19:08:14 TRACE nova.rpc.amqp     _reraise_translated_image_exception(image_id)
 2012-08-23 19:08:14 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/image/glance.py", line 304, in update
 2012-08-23 19:08:14 TRACE nova.rpc.amqp     image_meta = client.update_image(image_id, image_meta, data)
 2012-08-23 19:08:14 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/glance/client.py", line 195, in update_image
 2012-08-23 19:08:14 TRACE 

[Openstack] keystone questions

2012-08-27 Thread pat
Hello,

I have two questions regarding OpenStack Keystone:

Q1) The Folsom release supports domains. A domain can contain multiple tenants,
and a tenant cannot be shared between domains. Is this right? I think so, but I
want to be sure.

Q2) Is it possible to have a “cluster” of Keystones to keep Keystone from
becoming a bottleneck? If so, could you point me to a “tutorial”? Or did I miss
something important?

Thanks a lot

 Pat






Re: [Openstack] Quantum vs. Nova-network in Folsom

2012-08-27 Thread Chris Wright
* rob_hirschf...@dell.com (rob_hirschf...@dell.com) wrote:
 We've been discussing using Open vSwitch as the basis for non-Quantum Nova 
 Networking deployments in Folsom.  While not Quantum, it feels like we're 
 bringing Nova Networking a step closer to some of the core technologies that 
 Quantum uses.

To what end?

 I'm interested in hearing what others in the community think about this 
 approach.

I don't think legacy nova networking should get new features while we are
working to stabilize and improve quantum and nova/quantum integration.

thanks,
-chris



Re: [Openstack] [openstack-dev] Discussion about where to put database for bare-metal provisioning (review 10726)

2012-08-27 Thread David Kang

 Vish,

 I think I don't understand your statement fully.
Unless we use different hostnames, (hostname, hypervisor_hostname) must be the 
same for all bare-metal nodes under a bare-metal nova-compute.

 Could you elaborate the following statement a little bit more?

 You would just have to use a little more than hostname. Perhaps
 (hostname, hypervisor_hostname) could be used to update the entry?
 

 Thanks,
 David



- Original Message -
 I would investigate changing the capabilities to key off of something
 other than hostname. It looks from the table structure like
 compute_nodes could have a many-to-one relationship with services.
 You would just have to use a little more than hostname. Perhaps
 (hostname, hypervisor_hostname) could be used to update the entry?
 
 Vish
 
 On Aug 24, 2012, at 11:23 AM, David Kang dk...@isi.edu wrote:
 
 
   Vish,
 
   I've tested your code and did more testing.
  There are a couple of problems.
  1. host name should be unique. If not, any repetitive updates of new
  capabilities with the same host name are simply overwritten.
  2. We cannot generate arbitrary host names on the fly.
    The scheduler (I tested filter scheduler) gets host names from db.
    So, if a host name is not in the 'services' table, it is not
    considered by the scheduler at all.
 
  So, to make your suggestions possible, nova-compute should register
  N different host names in 'services' table,
  and N corresponding entries in 'compute_nodes' table.
  Here is an example:
 
 mysql> select id, host, binary, topic, report_count, disabled, availability_zone from services;
 +----+-------------+----------------+-----------+--------------+----------+-------------------+
 | id | host        | binary         | topic     | report_count | disabled | availability_zone |
 +----+-------------+----------------+-----------+--------------+----------+-------------------+
 |  1 | bespin101   | nova-scheduler | scheduler |        17145 |        0 | nova              |
 |  2 | bespin101   | nova-network   | network   |        16819 |        0 | nova              |
 |  3 | bespin101-0 | nova-compute   | compute   |        16405 |        0 | nova              |
 |  4 | bespin101-1 | nova-compute   | compute   |            1 |        0 | nova              |
 +----+-------------+----------------+-----------+--------------+----------+-------------------+

 mysql> select id, service_id, hypervisor_hostname from compute_nodes;
 +----+------------+------------------------+
 | id | service_id | hypervisor_hostname    |
 +----+------------+------------------------+
 |  1 |          3 | bespin101.east.isi.edu |
 |  2 |          4 | bespin101.east.isi.edu |
 +----+------------+------------------------+
 
  Then, the nova db (compute_nodes table) has entries for all bare-metal
  nodes.
 What do you think of this approach?
 Do you have a better approach?
 
   Thanks,
   David
 
 
 
  - Original Message -
 To elaborate, something like the below. I'm not absolutely sure you need to
 be able to set service_name and host, but this gives you the option to
 do so if needed.
 
 diff --git a/nova/manager.py b/nova/manager.py
 index c6711aa..c0f4669 100644
 --- a/nova/manager.py
 +++ b/nova/manager.py
 @@ -217,6 +217,8 @@ class SchedulerDependentManager(Manager):
 
      def update_service_capabilities(self, capabilities):
          """Remember these capabilities to send on next periodic update."""
 +        if not isinstance(capabilities, list):
 +            capabilities = [capabilities]
          self.last_capabilities = capabilities
 
      @periodic_task
 @@ -224,5 +226,8 @@ class SchedulerDependentManager(Manager):
          """Pass data back to the scheduler at a periodic interval."""
          if self.last_capabilities:
              LOG.debug(_('Notifying Schedulers of capabilities ...'))
 -            self.scheduler_rpcapi.update_service_capabilities(context,
 -                    self.service_name, self.host, self.last_capabilities)
 +            for capability_item in self.last_capabilities:
 +                name = capability_item.get('service_name', self.service_name)
 +                host = capability_item.get('host', self.host)
 +                self.scheduler_rpcapi.update_service_capabilities(context,
 +                        name, host, capability_item)
 
  On Aug 21, 2012, at 1:28 PM, David Kang dk...@isi.edu wrote:
 
 
   Hi Vish,
 
   We are trying to change our code according to your comment.
  I want to ask a question.
 
  a) modify driver.get_host_stats to be able to return a list of
  host
  stats instead of just one. Report the whole list back to the
  scheduler. We could modify the receiving end to accept a list
  as
  well
  or just make multiple calls to
  self.update_service_capabilities(capabilities)
 
   Modifying driver.get_host_stats to return a list of host stats is
   easy.
 Making multiple calls to
 self.update_service_capabilities(capabilities) doesn't seem to work,
 because 'capabilities' is overwritten each time.
 
  Modifying the receiving end to accept a list seems to be easy.
 However, since 'capabilities' is assumed to be a dictionary by all other
 scheduler routines, it looks like we would have to change all of them to
 handle 'capabilities' as a list of dictionaries.
 
   If my 

Re: [Openstack] keystone questions

2012-08-27 Thread Joseph Heck
Hi Pat,

On Aug 27, 2012, at 8:09 AM, pat p...@xvalheru.org wrote:
 I have two questions regarding OpenStack Keystone:
 
 Q1) The Folsom release supports domains. A domain can contain multiple tenants,
 and a tenant cannot be shared between domains. Is this right? I think so, but I
 want to be sure.

I'm afraid it doesn't. We didn't make sufficient progress with the V3 API 
(which is what incorporates domains) to include that with the Folsom release. 
We expect this to be available with the grizzly release.

 Q2) Is it possible to have a “cluster” of Keystones to keep Keystone from
 becoming a bottleneck? If so, could you point me to a “tutorial”? Or did I miss
 something important?

If by cluster you mean multiple instances to handle requests, then absolutely 
- yes. For this particular response, I'll assume you're using a SQL backend for 
Keystone. Generally you maintain a single database - whether that's an HA 
cluster or a single instance - and any number of Keystone service instances can 
point to and use it.
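
For illustration only (a hedged sketch, not from the original thread), each
Keystone instance's keystone.conf would point at that same database, e.g.:

[sql]
connection = mysql://keystone:KEYSTONE_DBPASS@db-host/keystone

with a load balancer or round-robin DNS in front of the identity API endpoints
to spread requests across the Keystone instances.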

-joe




Re: [Openstack] [openstack-dev] Discussion about where to put database for bare-metal provisioning (review 10726)

2012-08-27 Thread Vishvananda Ishaya
Hi David,

I just checked out the code more extensively and I don't see why you need to 
create a new service entry for each compute_node entry. The code in 
host_manager to get all host states explicitly gets all compute_node entries. I 
don't see any reason why multiple compute_node entries can't share the same 
service. I don't see any place in the scheduler that is grabbing records by 
service instead of by compute node, but if there is one that I missed, it 
should be fairly easy to change it.

The compute_node record is created in the compute/resource_tracker.py as of a 
recent commit, so I think the path forward would be to make sure that one of 
the records is created for each bare metal node by the bare metal compute, 
perhaps by having multiple resource_trackers. 
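
A minimal sketch (not the actual nova code) of that idea: the bare-metal
nova-compute keeps one tracker per bare-metal node, so one compute_nodes record
gets created and updated for each machine. The class and method names below are
illustrative assumptions, not names from the tree.

    class BareMetalProxyCompute(object):
        def __init__(self, node_ids, tracker_factory):
            # one resource tracker per managed bare-metal node
            self._trackers = dict((node_id, tracker_factory(node_id))
                                  for node_id in node_ids)

        def update_available_resource(self, context):
            # each tracker writes/updates its own compute_nodes entry
            for tracker in self._trackers.values():
                tracker.update_available_resource(context)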

Vish

On Aug 27, 2012, at 9:40 AM, David Kang dk...@isi.edu wrote:

 
  Vish,
 
  I think I don't understand your statement fully.
 Unless we use different hostnames, (hostname, hypervisor_hostname) must be 
 the 
 same for all bare-metal nodes under a bare-metal nova-compute.
 
  Could you elaborate the following statement a little bit more?
 
 You would just have to use a little more than hostname. Perhaps
 (hostname, hypervisor_hostname) could be used to update the entry?
 
 
  Thanks,
  David
 
 
 
 - Original Message -
 I would investigate changing the capabilities to key off of something
 other than hostname. It looks from the table structure like
 compute_nodes could have a many-to-one relationship with services.
 You would just have to use a little more than hostname. Perhaps
 (hostname, hypervisor_hostname) could be used to update the entry?
 
 Vish
 
 On Aug 24, 2012, at 11:23 AM, David Kang dk...@isi.edu wrote:
 
 
  Vish,
 
  I've tested your code and did more testing.
 There are a couple of problems.
 1. host name should be unique. If not, any repetitive updates of new
 capabilities with the same host name are simply overwritten.
 2. We cannot generate arbitrary host names on the fly.
   The scheduler (I tested filter scheduler) gets host names from db.
   So, if a host name is not in the 'services' table, it is not
   considered by the scheduler at all.
 
 So, to make your suggestions possible, nova-compute should register
 N different host names in 'services' table,
 and N corresponding entries in 'compute_nodes' table.
 Here is an example:
 
 mysql> select id, host, binary, topic, report_count, disabled, availability_zone from services;
 +----+-------------+----------------+-----------+--------------+----------+-------------------+
 | id | host        | binary         | topic     | report_count | disabled | availability_zone |
 +----+-------------+----------------+-----------+--------------+----------+-------------------+
 |  1 | bespin101   | nova-scheduler | scheduler |        17145 |        0 | nova              |
 |  2 | bespin101   | nova-network   | network   |        16819 |        0 | nova              |
 |  3 | bespin101-0 | nova-compute   | compute   |        16405 |        0 | nova              |
 |  4 | bespin101-1 | nova-compute   | compute   |            1 |        0 | nova              |
 +----+-------------+----------------+-----------+--------------+----------+-------------------+

 mysql> select id, service_id, hypervisor_hostname from compute_nodes;
 +----+------------+------------------------+
 | id | service_id | hypervisor_hostname    |
 +----+------------+------------------------+
 |  1 |          3 | bespin101.east.isi.edu |
 |  2 |          4 | bespin101.east.isi.edu |
 +----+------------+------------------------+
 
  Then, the nova db (compute_nodes table) has entries for all bare-metal
  nodes.
 What do you think of this approach?
 Do you have a better approach?
 
  Thanks,
  David
 
 
 
 - Original Message -
 To elaborate, something like the below. I'm not absolutely sure you need to
 be able to set service_name and host, but this gives you the option to
 do so if needed.
 
 diff --git a/nova/manager.py b/nova/manager.py
 index c6711aa..c0f4669 100644
 --- a/nova/manager.py
 +++ b/nova/manager.py
 @@ -217,6 +217,8 @@ class SchedulerDependentManager(Manager):
 
      def update_service_capabilities(self, capabilities):
          """Remember these capabilities to send on next periodic update."""
 +        if not isinstance(capabilities, list):
 +            capabilities = [capabilities]
          self.last_capabilities = capabilities
 
      @periodic_task
 @@ -224,5 +226,8 @@ class SchedulerDependentManager(Manager):
          """Pass data back to the scheduler at a periodic interval."""
          if self.last_capabilities:
              LOG.debug(_('Notifying Schedulers of capabilities ...'))
 -            self.scheduler_rpcapi.update_service_capabilities(context,
 -                    self.service_name, self.host, self.last_capabilities)
 +            for capability_item in self.last_capabilities:
 +                name = capability_item.get('service_name', self.service_name)
 +                host = capability_item.get('host', self.host)
 +                self.scheduler_rpcapi.update_service_capabilities(context,
 +                        name, host, capability_item)
 
 On Aug 21, 2012, at 1:28 PM, David Kang dk...@isi.edu wrote:
 
 
  Hi Vish,
 
  We are trying to change our code according to your comment.
 I want to ask a 

Re: [Openstack] Cannot create snapshots of instances running not on the controller

2012-08-27 Thread Vishvananda Ishaya
A snapshot and an image are the same thing. The only difference is a piece of 
metadata saying which instance the snapshot came from.
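
A hedged sketch (not from this thread) of one way to see that: list the images
and look at their metadata. It assumes the python-novaclient of that era; the
credentials and URL are placeholders.

    from novaclient.v1_1 import client

    nova = client.Client('USER', 'PASSWORD', 'TENANT',
                         'http://127.0.0.1:5000/v2.0/')
    for image in nova.images.list():
        meta = getattr(image, 'metadata', {}) or {}
        # snapshots typically carry image_type='snapshot' plus the source
        # instance_uuid, while plain uploaded images do not
        print('%s %s %s %s' % (image.id, image.name,
                               meta.get('image_type'),
                               meta.get('instance_uuid')))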

Vish

On Aug 27, 2012, at 6:06 AM, Alessandro Tagliapietra 
tagliapietra.alessan...@gmail.com wrote:

 Btw, using
 
 nova image-create --poll 4dcd5bb6-c65b-47dd-9c87-ba3fed624e22 Instance
 
 works fine (the nova command is run on the 1st node), except that it creates a 
 new image and not a snapshot.
 
 Best
 
 -- 
 Alessandro Tagliapietra | VISup srl
 piazza 4 novembre 7
 20124 Milano
 
 http://www.visup.it
 
 On 26 Aug 2012, at 18:49, Alessandro Tagliapietra 
 tagliapietra.alessan...@gmail.com wrote:
 
 
 On 25 Aug 2012, at 01:15, Vishvananda Ishaya 
 vishvana...@gmail.com wrote:
 
 Actually it looks like a different error. For some reason container format 
 is being sent in as none on the second node.
 
 Is it possible the original image that you launched the vm from has been 
 deleted? For some reason it can't determine the container format.
 
 Nope, the image from which the instance has been created is still there.
 
 
 If not, can you also make sure that your versions of glance and 
 python-glanceclient are the same on both nodes?
 
 you should be able to do `pip freeze` to see the installed versions.
 
 I'm using the latest version from the Ubuntu 12.04 repo; btw, I can see only:
 
 glance==2012.1
 
 from pip freeze, no python-glanceclient there.
 
 
 
 Vish
 
 On Aug 24, 2012, at 12:10 AM, Alessandro Tagliapietra 
 tagliapietra.alessan...@gmail.com wrote:
 
 Hi Vish,
 
 I already had the setting:
 
 glance_api_servers=10.0.0.1:9292
 
 I've also tried adding
 
 glance_host=10.0.0.1
 
 but I got the same error. Also, after changing the configuration and 
 restarting nova-compute, all the instances are restarted; is that normal?
 
 Best
 
 Alessandro
 
 On 23 Aug 2012, at 20:24, Vishvananda Ishaya 
 vishvana...@gmail.com wrote:
 
 looks like the compute node has a bad setting for glance_api_servers on 
 the second node.
 
 because glance_api_servers defaults to $glance_host:$glance_port, you 
 should be able to fix it by setting:
 
 glance_host = ip where glance is running
 
 in your nova.conf on the second node.
 
 Vish
 
 On Aug 23, 2012, at 10:15 AM, Alessandro Tagliapietra 
 tagliapietra.alessan...@gmail.com wrote:
 
 Hi all,
 
 I have a controller which is running all services and a secondary 
 controller which is in multi_host, so it's running compute, network and 
 api-metadata. From the dashboard I can successfully create snapshots of 
 instances running on the controller, but when I try to create a snapshot 
 of an instance on a compute node I get the following in its logs:
 
 == /var/log/nova/nova-compute.log ==
 2012-08-23 19:08:14 ERROR nova.rpc.amqp [req-66389a04-b071-4641-949b-3df04da85d08 a63f5293c5454a979bddff1415a216f6 e8c3367ff91d44b1ab1b14eb63f48bf7] Exception during message handling
 2012-08-23 19:08:14 TRACE nova.rpc.amqp Traceback (most recent call last):
 2012-08-23 19:08:14 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/rpc/amqp.py", line 253, in _process_data
 2012-08-23 19:08:14 TRACE nova.rpc.amqp     rval = node_func(context=ctxt, **node_args)
 2012-08-23 19:08:14 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 114, in wrapped
 2012-08-23 19:08:14 TRACE nova.rpc.amqp     return f(*args, **kw)
 2012-08-23 19:08:14 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 183, in decorated_function
 2012-08-23 19:08:14 TRACE nova.rpc.amqp     sys.exc_info())
 2012-08-23 19:08:14 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
 2012-08-23 19:08:14 TRACE nova.rpc.amqp     self.gen.next()
 2012-08-23 19:08:14 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 177, in decorated_function
 2012-08-23 19:08:14 TRACE nova.rpc.amqp     return function(self, context, instance_uuid, *args, **kwargs)
 2012-08-23 19:08:14 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 952, in snapshot_instance
 2012-08-23 19:08:14 TRACE nova.rpc.amqp     self.driver.snapshot(context, instance_ref, image_id)
 2012-08-23 19:08:14 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 114, in wrapped
 2012-08-23 19:08:14 TRACE nova.rpc.amqp     return f(*args, **kw)
 2012-08-23 19:08:14 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py", line 714, in snapshot
 2012-08-23 19:08:14 TRACE nova.rpc.amqp     image_file)
 2012-08-23 19:08:14 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/image/glance.py", line 306, in update
 2012-08-23 19:08:14 TRACE nova.rpc.amqp     _reraise_translated_image_exception(image_id)
 2012-08-23 19:08:14 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/image/glance.py", line 304, in update
 2012-08-23 

Re: [Openstack] Quantum vs. Nova-network in Folsom

2012-08-27 Thread Dan Wendlandt
On Sun, Aug 26, 2012 at 12:39 PM,  rob_hirschf...@dell.com wrote:
 Stackers,

 I think this is a reasonable approach and appreciate the clarification of 
 use-cases.

 We've been discussing using Open vSwitch as the basis for non-Quantum Nova 
 Networking deployments in Folsom.  While not Quantum, it feels like we're 
 bringing Nova Networking a step closer to some of the core technologies that 
 Quantum uses.

 I'm interested in hearing what others in the community think about this 
 approach.

One of the main reasons we introduced Quantum was to support
alternative switching technologies like Open vSwitch.  I'd like to
hear more about your thoughts, but at first glance, I'm not sure
there's a good way to leverage Open vSwitch in a meaningful way with
existing nova-network managers, since those network managers are so
tightly tied to using the basic linux bridge + vlans.

Dan


 Rob

 -Original Message-
 From: openstack-bounces+rob_hirschfeld=dell@lists.launchpad.net 
 [mailto:openstack-bounces+rob_hirschfeld=dell@lists.launchpad.net] On 
 Behalf Of Dan Wendlandt
 Sent: Friday, August 24, 2012 5:39 PM
 To: openstack@lists.launchpad.net; OpenStack Development Mailing List
 Subject: [Openstack] Quantum vs. Nova-network in Folsom

 tl;dr  both Quantum and nova-network will be core and fully supported in 
 Folsom.

 Hi folks,

 Thierry, Vish and I have been spending some time talking about OpenStack 
 networking in Folsom, and in particular the availability of nova-network now 
 that Quantum is a core project.  We wanted to share our current thinking with 
 the community to avoid confusion.

 With a project like OpenStack, there's a fundamental trade-off between the 
 rate of introducing new capabilities and the desire for stability and 
 backward compatibility.  We agreed that OpenStack is at a point in its growth 
 cycle where the cost of disruptive changes is high.  As a result, we've 
 decided that even with Quantum being core in Folsom, we will also continue to 
 support nova-network as it currently exists in Folsom.  There is, of course, 
 overhead to this approach, but we think it is worth it.

 With this in mind, a key question becomes: how do we direct users to the 
 networking option that is right for them?  We have the following guidelines:

 1) For users who require only very basic networking (e.g., nova-network Flat, 
 FlatDHCP) there's little difference between Quantum and nova-network in such 
 basic use cases, so using nova's built-in networking for these basic use 
 cases makes sense.

 2) There are many use cases (e.g., tenant API for defined topologies and 
 addresses) and advanced network technologies (e.g., tunneling rather than 
 VLANs) that Quantum enables that are simply not possible with nova-network, 
 so if these advanced capabilities are important to someone deploying 
 OpenStack, they clearly need to use Quantum.

 3) There are a few things that are possible in nova-network, but not in 
 Quantum.  Multi-host is the most significant one, but there are bound to be 
 other gaps, some of which we will uncover only when people try their 
 particular use case with Quantum.  For these, users will have to use 
 nova-network, with the gaps being covered in Quantum during Grizzly.

 As a result, we plan to structure the docs so that you can do a basic 
 functionality Nova setup with flat networking without requiring Quantum.  For 
 anything beyond that, we will have an advanced networking section, which 
 describes the different advanced use of OpenStack networking with Quantum, 
 and also highlight reasons that a user may still want to use nova-networking 
 over Quantum.

 Moving beyond Folsom, we expect to fully freeze the addition of new 
 functionality to nova-network, and likely deprecate at least some portions of 
 the existing nova-network functionality.  Likely this will leave the basic 
 flat and flat + dhcp nova networking intact, but reduce complexity in the 
 nova codebase by removing more advanced networking scenarios that can also be 
 achieved via Quantum.  This means that even those using nova-network in 
 Folsom should still be evaluating Quantum if they have networking needs beyond 
 flat networking, such that this feedback can be incorporated into the Grizzly 
 deliverable of Quantum.

 Thanks,

 Dan


 --
 ~~~
 Dan Wendlandt
 Nicira, Inc: www.nicira.com
 twitter: danwendlandt
 ~~~




-- 
~~~
Dan Wendlandt
Nicira, Inc: www.nicira.com
twitter: danwendlandt
~~~


Re: [Openstack] [openstack-dev] Discussion about where to put database for bare-metal provisioning (review 10726)

2012-08-27 Thread David Kang

 Hi Vish,

 I think I understand your idea.
One service entry with multiple bare-metal compute_node entries is registered 
at the start of the bare-metal nova-compute.
'hypervisor_hostname' must be different for each bare-metal machine (such as 
'bare-metal-0001.xxx.com', 'bare-metal-0002.xxx.com', etc.).
But their IP addresses must be the IP address of the bare-metal nova-compute, 
such that an instance is cast not to the bare-metal machine directly but to 
the bare-metal nova-compute.

 One extension we need to make on the scheduler side is to use (host, 
hypervisor_hostname) instead of (host) only in host_manager.py.
'HostManager.service_state' is { host : { service : { cap k : v }}}.
It needs to be changed to { host : { service : { hypervisor_name : { cap k : v }}}}.

Most functions of HostState need to be changed to use the (host, hypervisor_name) 
pair to identify a compute node. 
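
As an illustrative sketch of that keying change (plain Python, not actual nova
code; the hostnames are taken from the examples in this thread):

    service_states = {
        'bare-metal-proxy-host': {
            'compute': {
                'bare-metal-0001.xxx.com': {'free_ram_mb': 2048, 'free_disk_mb': 10240},
                'bare-metal-0002.xxx.com': {'free_ram_mb': 4096, 'free_disk_mb': 20480},
            },
        },
    }

    # a compute node is then identified by the (host, hypervisor_name) pair
    caps = service_states['bare-metal-proxy-host']['compute']['bare-metal-0001.xxx.com']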

 Are we on the same page, now?

 Thanks,
 David

- Original Message -
 Hi David,
 
 I just checked out the code more extensively and I don't see why you
 need to create a new service entry for each compute_node entry. The
 code in host_manager to get all host states explicitly gets all
 compute_node entries. I don't see any reason why multiple compute_node
 entries can't share the same service. I don't see any place in the
 scheduler that is grabbing records by service instead of by compute
 node, but if there is one that I missed, it should be fairly easy to
 change it.
 
 The compute_node record is created in the compute/resource_tracker.py
 as of a recent commit, so I think the path forward would be to make
 sure that one of the records is created for each bare metal node by
 the bare metal compute, perhaps by having multiple resource_trackers.
 
 Vish
 
 On Aug 27, 2012, at 9:40 AM, David Kang dk...@isi.edu wrote:
 
 
   Vish,
 
   I think I don't understand your statement fully.
  Unless we use different hostnames, (hostname, hypervisor_hostname)
  must be the
  same for all bare-metal nodes under a bare-metal nova-compute.
 
   Could you elaborate the following statement a little bit more?
 
  You would just have to use a little more than hostname. Perhaps
  (hostname, hypervisor_hostname) could be used to update the entry?
 
 
   Thanks,
   David
 
 
 
  - Original Message -
  I would investigate changing the capabilities to key off of
  something
  other than hostname. It looks from the table structure like
  compute_nodes could have a many-to-one relationship with
  services.
  You would just have to use a little more than hostname. Perhaps
  (hostname, hypervisor_hostname) could be used to update the entry?
 
  Vish
 
  On Aug 24, 2012, at 11:23 AM, David Kang dk...@isi.edu wrote:
 
 
   Vish,
 
   I've tested your code and did more testing.
  There are a couple of problems.
  1. host name should be unique. If not, any repetitive updates of
  new
  capabilities with the same host name are simply overwritten.
  2. We cannot generate arbitrary host names on the fly.
The scheduler (I tested filter scheduler) gets host names from
db.
So, if a host name is not in the 'services' table, it is not
considered by the scheduler at all.
 
  So, to make your suggestions possible, nova-compute should
  register
  N different host names in 'services' table,
  and N corresponding entries in 'compute_nodes' table.
  Here is an example:
 
  mysql> select id, host, binary, topic, report_count, disabled, availability_zone from services;
  +----+-------------+----------------+-----------+--------------+----------+-------------------+
  | id | host        | binary         | topic     | report_count | disabled | availability_zone |
  +----+-------------+----------------+-----------+--------------+----------+-------------------+
  |  1 | bespin101   | nova-scheduler | scheduler |        17145 |        0 | nova              |
  |  2 | bespin101   | nova-network   | network   |        16819 |        0 | nova              |
  |  3 | bespin101-0 | nova-compute   | compute   |        16405 |        0 | nova              |
  |  4 | bespin101-1 | nova-compute   | compute   |            1 |        0 | nova              |
  +----+-------------+----------------+-----------+--------------+----------+-------------------+

  mysql> select id, service_id, hypervisor_hostname from compute_nodes;
  +----+------------+------------------------+
  | id | service_id | hypervisor_hostname    |
  +----+------------+------------------------+
  |  1 |          3 | bespin101.east.isi.edu |
  |  2 |          4 | bespin101.east.isi.edu |
  +----+------------+------------------------+
 
   Then, the nova db (compute_nodes table) has entries for all bare-metal
   nodes.
  What do you think of this approach?
  Do you have a better approach?
 
   Thanks,
   David
 
 
 
  - Original Message -
  To elaborate, something like the below. I'm not absolutely sure you need
  to be able to set service_name and host, but this gives you the option
  to do so if needed.
 
  diff --git a/nova/manager.py b/nova/manager.py
  index c6711aa..c0f4669 100644
  --- a/nova/manager.py
  +++ b/nova/manager.py
  @@ -217,6 

[Openstack] Fwd: nova-compute on VirtualBox with qemu

2012-08-27 Thread andi abes
-- Forwarded message --
From: andi abes andi.a...@gmail.com
Date: Mon, Aug 27, 2012 at 1:54 PM
Subject: nova-compute on VirtualBox with qemu
To: openstack-operat...@lists.openstack.org


I'm using Essex on VirtualBox, and am having some issues getting
nova-compute to not hate me that much.
The error I'm getting is: libvir: QEMU error : internal error Cannot
find suitable emulator for x86_64
Running the same steps as in [1] seems to reproduce the same behavior.

The VB guest is 12.04.

nova-compute.conf has:
[DEFAULT]
libvirt_type=qemu

I guess my question is - where do I supply libvirt / nova the magical
'disable accel' flags? (i.e. '-machine accel=kvm:tcg', which seems to
make qemu happy).


TIA,
a.


[1] https://lists.fedoraproject.org/pipermail/virt/2012-July/003358.html



(adding openstack, and some more details)

Versions:
qemu-system-x86_64 --version
QEMU emulator version 1.0.50 (qemu-kvm-devel), Copyright (c) 2003-2008
Fabrice Bellard

libvirtd --version
libvirtd (libvirt) 0.9.9

tia,
a



Re: [Openstack] [openstack-dev] Discussion about where to put database for bare-metal provisioning (review 10726)

2012-08-27 Thread Michael J Fork

openstack-bounces+mjfork=us.ibm@lists.launchpad.net wrote on 08/27/2012
02:58:56 PM:

 From: David Kang dk...@isi.edu
 To: Vishvananda Ishaya vishvana...@gmail.com,
 Cc: OpenStack Development Mailing List openstack-
 d...@lists.openstack.org, openstack@lists.launchpad.net \
 (openstack@lists.launchpad.net\) openstack@lists.launchpad.net
 Date: 08/27/2012 03:06 PM
 Subject: Re: [Openstack] [openstack-dev] Discussion about where to
 put database for bare-metal provisioning (review 10726)
 Sent by: openstack-bounces+mjfork=us.ibm@lists.launchpad.net


  Hi Vish,

  I think I understand your idea.
 One service entry with multiple bare-metal compute_node entries are
 registered at the start of bare-metal nova-compute.
 'hypervisor_hostname' must be different for each bare-metal machine,
 such as 'bare-metal-0001.xxx.com', 'bare-metal-0002.xxx.com', etc.)
 But their IP addresses must be the IP address of bare-metal nova-
 compute, such that an instance is casted
 not to bare-metal machine directly but to bare-metal nova-compute.

I believe the change here is to cast out the message to the
topic.service-hostname. Existing code sends it to the compute_node
hostname (see line 202 of nova/scheduler/filter_scheduler.py, specifically
host=weighted_host.host_state.host).  Changing that to cast to the service
hostname would send the message to the bare-metal proxy node and should not
have an effect on current deployments since the service hostname and the
host_state.host would always be equal.  This model will also let you keep
the bare-metal compute node IP in the compute node table.
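
A hedged sketch of that dispatch change (attribute names other than
weighted_host.host_state.host are assumptions, not the actual nova patch):

    host_state = weighted_host.host_state

    rpc_target = host_state.service['host']   # service hostname: the bare-metal proxy
    selected_node = host_state.host           # which bare-metal compute_node was chosen

    # the run_instance message is cast to topic.<rpc_target>, with selected_node
    # carried in the payload so the proxy knows which machine the request targets;
    # for non-bare-metal deployments rpc_target == selected_node, so behaviour
    # is unchanged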

  One extension we need to do at the scheduler side is using (host,
 hypervisor_hostname) instead of (host) only in host_manager.py.
 'HostManager.service_state' is { host : { service  : { cap k : v }}}.
 It needs to be changed to { host : { service : {
 hypervisor_name : { cap k : v .
 Most functions of HostState need to be changed to use (host,
 hypervisor_name) pair to identify a compute node.

Would an alternative here be to change the top level host to be the
hypervisor_hostname and enforce uniqueness?

  Are we on the same page, now?

  Thanks,
  David

 - Original Message -
  Hi David,
 
  I just checked out the code more extensively and I don't see why you
  need to create a new service entry for each compute_node entry. The
  code in host_manager to get all host states explicitly gets all
  compute_node entries. I don't see any reason why multiple compute_node
  entries can't share the same service. I don't see any place in the
  scheduler that is grabbing records by service instead of by compute
  node, but if there is one that I missed, it should be fairly easy to
  change it.
 
  The compute_node record is created in the compute/resource_tracker.py
  as of a recent commit, so I think the path forward would be to make
  sure that one of the records is created for each bare metal node by
  the bare metal compute, perhaps by having multiple resource_trackers.
 
  Vish
 
  On Aug 27, 2012, at 9:40 AM, David Kang dk...@isi.edu wrote:
 
  
Vish,
  
I think I don't understand your statement fully.
   Unless we use different hostnames, (hostname, hypervisor_hostname)
   must be the
   same for all bare-metal nodes under a bare-metal nova-compute.
  
Could you elaborate the following statement a little bit more?
  
   You would just have to use a little more than hostname. Perhaps
   (hostname, hypervisor_hostname) could be used to update the entry?
  
  
Thanks,
David
  
  
  
   - Original Message -
   I would investigate changing the capabilities to key off of
   something
   other than hostname. It looks from the table structure like
   compute_nodes could have a many-to-one relationship with
   services.
   You would just have to use a little more than hostname. Perhaps
   (hostname, hypervisor_hostname) could be used to update the entry?
  
   Vish
  
   On Aug 24, 2012, at 11:23 AM, David Kang dk...@isi.edu wrote:
  
  
Vish,
  
I've tested your code and did more testing.
   There are a couple of problems.
   1. host name should be unique. If not, any repetitive updates of
   new
   capabilities with the same host name are simply overwritten.
   2. We cannot generate arbitrary host names on the fly.
 The scheduler (I tested filter scheduler) gets host names from
 db.
 So, if a host name is not in the 'services' table, it is not
 considered by the scheduler at all.
  
   So, to make your suggestions possible, nova-compute should
   register
   N different host names in 'services' table,
   and N corresponding entries in 'compute_nodes' table.
   Here is an example:
  
   mysql> select id, host, binary, topic, report_count, disabled, availability_zone from services;
   +----+-------------+----------------+-----------+--------------+----------+-------------------+
   | id | host        | binary         | topic     | report_count | disabled | availability_zone |
   

Re: [Openstack] [openstack-dev] Discussion about where to put database for bare-metal provisioning (review 10726)

2012-08-27 Thread VTJ NOTSU Arata

Hello all,

It seems that the only requirement for the keys of HostManager.service_state is 
that they be unique; they do not have to be valid hostnames or queues (the 
existing code already casts messages to topic.service-hostname. Michael, 
doesn't it?). So I tried 'host/bm_node_id' as the 'host' of the capabilities. 
Then, HostManager.service_state is:
 { host/bm_node_id : { service : { cap k : v }}}.
So far it works fine. How about this way?
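
As a small illustration of that keying scheme (plain Python, not code from the
branch; the host name is borrowed from examples earlier in this thread and the
node ids are placeholders):

    service_states = {
        'bespin101/1': {'compute': {'free_ram_mb': 2048}},
        'bespin101/2': {'compute': {'free_ram_mb': 4096}},
    }

The scheduler only needs these keys to be unique; they are not used as hostnames
or queue names when dispatching.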

I paste the relevant code at the bottom of this mail just to make sure.
NOTE: I added a new column 'nodename' to compute_nodes to store bm_node_id,
but storing it in 'hypervisor_hostname' may be the right solution.

(The whole code is in our github (NTTdocomo-openstack/nova, branch 'multinode');
multiple resource_trackers are also implemented.)

Thanks,
Arata
 


diff --git a/nova/scheduler/host_manager.py b/nova/scheduler/host_manager.py
index 33ba2c1..567729f 100644
--- a/nova/scheduler/host_manager.py
+++ b/nova/scheduler/host_manager.py
@@ -98,9 +98,10 @@ class HostState(object):
     previously used and lock down access.
     """

-    def __init__(self, host, topic, capabilities=None, service=None):
+    def __init__(self, host, topic, capabilities=None, service=None,
+                 nodename=None):
         self.host = host
         self.topic = topic
+        self.nodename = nodename

         # Read-only capability dicts

@@ -175,8 +176,8 @@ class HostState(object):

         return True

     def __repr__(self):
-        return ("host '%s': free_ram_mb:%s free_disk_mb:%s" %
-                (self.host, self.free_ram_mb, self.free_disk_mb))
+        return ("host '%s' / nodename '%s': free_ram_mb:%s free_disk_mb:%s" %
+                (self.host, self.nodename, self.free_ram_mb,
+                 self.free_disk_mb))


 class HostManager(object):

@@ -268,11 +269,16 @@ class HostManager(object):
                 LOG.warn(_("No service for compute ID %s") % compute['id'])
                 continue
             host = service['host']
-            capabilities = self.service_states.get(host, None)
+            if compute['nodename']:
+                host_node = '%s/%s' % (host, compute['nodename'])
+            else:
+                host_node = host
+            capabilities = self.service_states.get(host_node, None)
             host_state = self.host_state_cls(host, topic,
                     capabilities=capabilities,
-                    service=dict(service.iteritems()))
+                    service=dict(service.iteritems()),
+                    nodename=compute['nodename'])
             host_state.update_from_compute_node(compute)
-            host_state_map[host] = host_state
+            host_state_map[host_node] = host_state

         return host_state_map


diff --git a/nova/virt/baremetal/driver.py b/nova/virt/baremetal/driver.py
index 087d1b6..dbcfbde 100644
--- a/nova/virt/baremetal/driver.py
+++ b/nova/virt/baremetal/driver.py
(skip...)
+    def _create_node_cap(self, node):
+        dic = self._node_resources(node)
+        dic['host'] = '%s/%s' % (FLAGS.host, node['id'])
+        dic['cpu_arch'] = self._extra_specs.get('cpu_arch')
+        dic['instance_type_extra_specs'] = self._extra_specs
+        dic['supported_instances'] = self._supported_instances
+        # TODO: put node's extra specs
+        return dic

     def get_host_stats(self, refresh=False):
-        return self._get_host_stats()
+        caps = []
+        context = nova_context.get_admin_context()
+        nodes = bmdb.bm_node_get_all(context,
+                                     service_host=FLAGS.host)
+        for node in nodes:
+            node_cap = self._create_node_cap(node)
+            caps.append(node_cap)
+        return caps


(2012/08/28 5:55), Michael J Fork wrote:

openstack-bounces+mjfork=us.ibm@lists.launchpad.net wrote on 08/27/2012 
02:58:56 PM:

  From: David Kang dk...@isi.edu
  To: Vishvananda Ishaya vishvana...@gmail.com,
  Cc: OpenStack Development Mailing List openstack-
  d...@lists.openstack.org, openstack@lists.launchpad.net \
  (openstack@lists.launchpad.net\) openstack@lists.launchpad.net
  Date: 08/27/2012 03:06 PM
  Subject: Re: [Openstack] [openstack-dev] Discussion about where to
  put database for bare-metal provisioning (review 10726)
  Sent by: openstack-bounces+mjfork=us.ibm@lists.launchpad.net
 
 
   Hi Vish,
 
   I think I understand your idea.
  One service entry with multiple bare-metal compute_node entries are
  registered at the start of bare-metal nova-compute.
  'hypervisor_hostname' must be different for each bare-metal machine,
  such as 'bare-metal-0001.xxx.com', 'bare-metal-0002.xxx.com', etc.)
  But their IP addresses must be the IP address of bare-metal nova-
  compute, such that an instance is casted
  not to bare-metal machine directly but to bare-metal nova-compute.

I believe the change here is to cast out the message to the 
topic.service-hostname. Existing code sends it to the compute_node hostname 
(see line 202 of nova/scheduler/filter_scheduler.py, 

Re: [Openstack] [openstack-dev] Discussion about where to put database for bare-metal provisioning (review 10726)

2012-08-27 Thread Michael J Fork

David Kang dk...@isi.edu wrote on 08/27/2012 05:22:37 PM:

 From: David Kang dk...@isi.edu
 To: Michael J Fork/Rochester/IBM@IBMUS,
 Cc: openstack@lists.launchpad.net (openstack@lists.launchpad.net)
 openstack@lists.launchpad.net, openstack-bounces+mjfork=us ibm com
 openstack-bounces+mjfork=us.ibm@lists.launchpad.net, OpenStack
 Development Mailing List openstack-...@lists.openstack.org,
 Vishvananda Ishaya vishvana...@gmail.com
 Date: 08/27/2012 05:22 PM
 Subject: Re: [Openstack] [openstack-dev] Discussion about where to
 put database for bare-metal provisioning (review 10726)


  Michael,

  I think by compute_node hostname you mean the 'hypervisor_hostname'
 field in the 'compute_node' table.

Yes.  This value would be part of the payload of the message cast to the
proxy node so that it knows who the request was directed to.

 What do you mean by service hostname?
 I don't see such a field in the 'service' table in the database.
 Is it in some other table?
 Or do you suggest adding a 'service_hostname' field to the 'service' table?

The host field in the services table.  This value would be used as the
target of the rpc cast so that the proxy node would receive the message.


  Thanks,
  David

 - Original Message -
  openstack-bounces+mjfork=us.ibm@lists.launchpad.net wrote on
  08/27/2012 02:58:56 PM:
 
   From: David Kang dk...@isi.edu
   To: Vishvananda Ishaya vishvana...@gmail.com,
   Cc: OpenStack Development Mailing List openstack-
   d...@lists.openstack.org, openstack@lists.launchpad.net \
   (openstack@lists.launchpad.net\) openstack@lists.launchpad.net
   Date: 08/27/2012 03:06 PM
   Subject: Re: [Openstack] [openstack-dev] Discussion about where to
   put database for bare-metal provisioning (review 10726)
   Sent by: openstack-bounces+mjfork=us.ibm@lists.launchpad.net
  
  
   Hi Vish,
  
   I think I understand your idea.
   One service entry with multiple bare-metal compute_node entries are
   registered at the start of bare-metal nova-compute.
   'hypervisor_hostname' must be different for each bare-metal machine,
   such as 'bare-metal-0001.xxx.com', 'bare-metal-0002.xxx.com', etc.)
   But their IP addresses must be the IP address of bare-metal nova-
   compute, such that an instance is casted
   not to bare-metal machine directly but to bare-metal nova-compute.
 
  I believe the change here is to cast out the message to the
  topic.service-hostname. Existing code sends it to the compute_node
  hostname (see line 202 of nova/scheduler/filter_scheduler.py,
  specifically host=weighted_host.host_state.host). Changing that to
  cast to the service hostname would send the message to the bare-metal
  proxy node and should not have an effect on current deployments since
  the service hostname and the host_state.host would always be equal.
  This model will also let you keep the bare-metal compute node IP in
  the compute node table.
 
   One extension we need to do at the scheduler side is using (host,
   hypervisor_hostname) instead of (host) only in host_manager.py.
   'HostManager.service_state' is { host : { service  : { cap k : v
   }}}.
   It needs to be changed to { host : { service : {
   hypervisor_name : { cap k : v .
   Most functions of HostState need to be changed to use (host,
   hypervisor_name) pair to identify a compute node.
 
  Would an alternative here be to change the top level host to be the
  hypervisor_hostname and enforce uniqueness?
 
   Are we on the same page, now?
  
   Thanks,
   David
  
   - Original Message -
Hi David,
   
I just checked out the code more extensively and I don't see why
you
need to create a new service entry for each compute_node entry.
The
code in host_manager to get all host states explicitly gets all
compute_node entries. I don't see any reason why multiple
compute_node
entries can't share the same service. I don't see any place in the
scheduler that is grabbing records by service instead of by
compute
node, but if there is one that I missed, it should be fairly easy
to
change it.
   
The compute_node record is created in the
compute/resource_tracker.py
as of a recent commit, so I think the path forward would be to
make
sure that one of the records is created for each bare metal node
by
the bare metal compute, perhaps by having multiple
resource_trackers.
   
Vish
   
On Aug 27, 2012, at 9:40 AM, David Kang dk...@isi.edu wrote:
   

 Vish,

 I think I don't understand your statement fully.
 Unless we use different hostnames, (hostname,
 hypervisor_hostname)
 must be the
 same for all bare-metal nodes under a bare-metal nova-compute.

 Could you elaborate the following statement a little bit more?

 You would just have to use a little more than hostname. Perhaps
 (hostname, hypervisor_hostname) could be used to update the
 entry?


 Thanks,
 David
   

Re: [Openstack] [openstack-dev] Discussion about where to put database for bare-metal provisioning (review 10726)

2012-08-27 Thread David Kang

 Michael,

 It is a little confusing without knowing the assumptions behind your suggestions.
First of all, I want to make sure that you agree on the following:
1. one entry per bare-metal machine in the 'compute_node' table.
2. one entry for the bare-metal nova-compute that manages N bare-metal machines 
in the 'service' table.

In addition to that, I think you suggest augmenting the 'host' field in the 
'service' table such that the 'host' field can be used for RPC.
(I don't think the current 'host' field can be used for that purpose now.)

 David

- Original Message -
 David Kang dk...@isi.edu wrote on 08/27/2012 05:22:37 PM:
 
  From: David Kang dk...@isi.edu
  To: Michael J Fork/Rochester/IBM@IBMUS,
  Cc: openstack@lists.launchpad.net (openstack@lists.launchpad.net)
  openstack@lists.launchpad.net, openstack-bounces+mjfork=us ibm com
  openstack-bounces+mjfork=us.ibm@lists.launchpad.net, OpenStack
  Development Mailing List openstack-...@lists.openstack.org,
  Vishvananda Ishaya vishvana...@gmail.com
  Date: 08/27/2012 05:22 PM
  Subject: Re: [Openstack] [openstack-dev] Discussion about where to
  put database for bare-metal provisioning (review 10726)
 
 
  Michael,
 
  I think you mean compute_node hostname as 'hypervisor_hostname'
  field in the 'compute_node' table.
 
 Yes. This value would be part of the payload of the message cast to
 the proxy node so that it knows who the request was directed to.
 
  What do you mean by service hostname?
  I don't see such field in the 'service' table in the database.
  Is it in some other table?
  Or do you suggest adding 'service_hostname' field in the 'service'
  table?
 
 The host field in the services table. This value would be used as
 the target of the rpc cast so that the proxy node would receive the
 message.
 
 
  Thanks,
  David
 
  - Original Message -
   openstack-bounces+mjfork=us.ibm@lists.launchpad.net wrote on
   08/27/2012 02:58:56 PM:
  
From: David Kang dk...@isi.edu
To: Vishvananda Ishaya vishvana...@gmail.com,
Cc: OpenStack Development Mailing List openstack-
d...@lists.openstack.org, openstack@lists.launchpad.net \
(openstack@lists.launchpad.net\)
openstack@lists.launchpad.net
Date: 08/27/2012 03:06 PM
Subject: Re: [Openstack] [openstack-dev] Discussion about where
to
put database for bare-metal provisioning (review 10726)
Sent by: openstack-bounces+mjfork=us.ibm@lists.launchpad.net
   
   
Hi Vish,
   
I think I understand your idea.
One service entry with multiple bare-metal compute_node entries
are
registered at the start of bare-metal nova-compute.
'hypervisor_hostname' must be different for each bare-metal
machine,
such as 'bare-metal-0001.xxx.com', 'bare-metal-0002.xxx.com',
etc.)
But their IP addresses must be the IP address of bare-metal
nova-
compute, such that an instance is casted
not to bare-metal machine directly but to bare-metal
nova-compute.
  
   I believe the change here is to cast out the message to the
   topic.service-hostname. Existing code sends it to the
   compute_node
   hostname (see line 202 of nova/scheduler/filter_scheduler.py,
   specifically host=weighted_host.host_state.host). Changing that to
   cast to the service hostname would send the message to the
   bare-metal
   proxy node and should not have an effect on current deployments
   since
   the service hostname and the host_state.host would always be
   equal.
   This model will also let you keep the bare-metal compute node IP
   in
   the compute node table.
  
One extension we need to do at the scheduler side is using
(host,
hypervisor_hostname) instead of (host) only in host_manager.py.
'HostManager.service_state' is { host : { service  : { cap k
: v
}}}.
It needs to be changed to { host : { service : {
hypervisor_name : { cap k : v .
Most functions of HostState need to be changed to use (host,
hypervisor_name) pair to identify a compute node.
  
   Would an alternative here be to change the top level host to be
   the
   hypervisor_hostname and enforce uniqueness?
  
Are we on the same page, now?
   
Thanks,
David
   
- Original Message -
 Hi David,

 I just checked out the code more extensively and I don't see
 why
 you
 need to create a new service entry for each compute_node
 entry.
 The
 code in host_manager to get all host states explicitly gets
 all
 compute_node entries. I don't see any reason why multiple
 compute_node
 entries can't share the same service. I don't see any place in
 the
 scheduler that is grabbing records by service instead of by
 compute
 node, but if there is one that I missed, it should be fairly
 easy
 to
 change it.

 The compute_node record is created in the
 compute/resource_tracker.py
 as of a recent commit, so I think the path forward 

[Openstack] Future of Launchpad OpenStack mailing list (this list)

2012-08-27 Thread Stefano Maffulli
Hello folks

picking up this comment on the Development mailing list:

On Mon 27 Aug 2012 02:08:48 PM PDT, Jason Kölker wrote:
 I've noticed that both this list and the old launchpad lists are being
 used. Which is the correct list?

I sent the following message, with questions at the end that are better
answered on this list.

The mailing list situation *at the moment* is summarized on
http://wiki.openstack.org/MailingLists

To try to answer your question, the mailing list for the developers of
OpenStack to discuss development issues and the roadmap is
openstack-...@lists.openstack.org. It is focused on the
next release of OpenStack: you should post on this list if you are a
contributor to OpenStack or are very familiar with OpenStack
development and want to discuss very specific topics, contribution ideas
and similar. Do not send support requests to this list.


The old Launchpad list (this list) should be closed so we don't rely on
Launchpad for mailing lists anymore. Last time we talked about this I
don't think we reached consensus on how to move things around and where
to land this General mailing list. A few people suggested using the
existing openstack-operators mailing list as the General list, therefore
not creating anything new.

Moving a group of over 4000 people from one list on Launchpad to
another on our mailman is scary. Unfortunately we can't export the list
of email addresses subscribed to Launchpad and invite them to another
list (LP doesn't allow that).  The first question is:

* where would people go for general openstack usage questions (is
'operators' the  best fit?)

* Then, what do we do with Launchpad mailing list archives?

If we find an agreement we can aim at closing the old LP-hosted mailing
list around the summit, where we will be able to announce the new list
destination to many people.

Thoughts?

/stef

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] VM can't ping self floating IP after a snapshot is taken

2012-08-27 Thread Sam Su
Hi,

Thank you so much for your help.
I replaced the file /usr/share/pyshared/nova/virt/libvirt/connection.py
with yours, but it looks like it did not work for me.
Do I need to do anything else?

Thanks,
Sam

On Sat, Aug 25, 2012 at 7:03 PM, heut2008 heut2...@gmail.com wrote:

 for stable/essex the patch is here
 https://review.openstack.org/#/c/11986/,

 2012/8/25 Sam Su susltd...@gmail.com:
  That's great, thank you for your efforts. Can you make a backport for
 essex?
 
  Sent from my iPhone
 
  On Aug 24, 2012, at 7:15 PM, heut2008 heut2...@gmail.com wrote:
 
  I have fixed it here  https://review.openstack.org/#/c/11925/
 
  2012/8/25 Sam Su susltd...@gmail.com:
  Hi,
 
  I also reported this bug:
  https://bugs.launchpad.net/nova/+bug/1040255
 
  If someone can combine your solutions into a complete fix for this
  bug, that would be great.
 
  BRs,
  Sam
 
 
  On Thu, Aug 23, 2012 at 9:27 PM, heut2008 heut2...@gmail.com wrote:
 
  this bug has been filed here
 https://bugs.launchpad.net/nova/+bug/1040537
 
  2012/8/24 Vishvananda Ishaya vishvana...@gmail.com:
  +1 to this. Evan, can you report a bug (if one hasn't been reported
 yet)
  and
  propose the fix? Or else I can find someone else to propose it.
 
  Vish
 
  On Aug 23, 2012, at 1:38 PM, Evan Callicoat diop...@gmail.com
 wrote:
 
  Hello all!
 
  I'm the original author of the hairpin patch, and things have
 changed a
  little bit in Essex and Folsom from the original Diablo target. I
  believe I
  can shed some light on what should be done here to solve the issue in
  either
  case.
 
  ---
  For Essex (stable/essex), in nova/virt/libvirt/connection.py:
  ---
 
  Currently _enable_hairpin() is only being called from spawn().
 However,
  spawn() is not the only place that vifs (veth#) get added to a bridge
  (which
  is when we need to enable hairpin_mode on them). The more relevant
  function
  is _create_new_domain(), which is called from spawn() and other
 places.
  Without changing the information that gets passed to
  _create_new_domain()
  (which is just 'xml' from to_xml()), we can easily rewrite the first
 2
  lines
  in _enable_hairpin(), as follows:
 
   def _enable_hairpin(self, xml):
       interfaces = self.get_interfaces(xml['name'])
 
  Then, we can move the self._enable_hairpin(instance) call from
 spawn()
  up
  into _create_new_domain(), and pass it xml as follows:
 
  [...]
  self._enable_hairpin(xml)
  return domain
 
  This will run the hairpin code every time a domain gets created,
 which
  is
  also when the domain's vif(s) gets inserted into the bridge with the
  default
  of hairpin_mode=0.
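
   (For context, enabling hairpin just means writing 1 to the bridge port's
   hairpin_mode attribute in sysfs for each of the instance's vifs, so the
   bridge reflects traffic back out the port it came in on; that is what lets
   a VM reach its own floating IP through NAT. A standalone sketch of the
   operation, not the exact nova helper, which resolves the interfaces from
   libvirt and writes the value via tee under rootwrap:)

       import glob

       def enable_hairpin():
           # Set hairpin_mode=1 on every bridge port backing a vnet* vif.
           # Needs root privileges to write to sysfs.
           for path in glob.glob('/sys/class/net/vnet*/brport/hairpin_mode'):
               with open(path, 'w') as f:
                   f.write('1')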
 
  ---
  For Folsom (trunk), in nova/virt/libvirt/driver.py:
  ---
 
  There've been a lot more changes made here, but the same strategy as
  above
  should work. Here, _create_new_domain() has been split into
  _create_domain()
  and _create_domain_and_network(), and _enable_hairpin() was moved
 from
  spawn() to _create_domain_and_network(), which seems like it'd be the
  right
  thing to do, but doesn't quite cover all of the cases of vif
  reinsertion,
  since _create_domain() is the only function which actually creates
 the
  domain (_create_domain_and_network() just calls it after doing some
  pre-work). The solution here is likewise fairly simple; make the
 same 2
  changes to _enable_hairpin():
 
   def _enable_hairpin(self, xml):
       interfaces = self.get_interfaces(xml['name'])
 
  And move it from _create_domain_and_network() to _create_domain(),
 like
  before:
 
  [...]
  self._enable_hairpin(xml)
  return domain
 
  I haven't yet tested this on my Essex clusters and I don't have a
 Folsom
  cluster handy at present, but the change is simple and makes sense.
  Looking
  at to_xml() and _prepare_xml_info(), it appears that the 'xml'
 variable
  _create_[new_]domain() gets is just a python dictionary, and
 xml['name']
  =
  instance['name'], exactly what _enable_hairpin() was using the
  'instance'
  variable for previously.
 
  Let me know if this works, or doesn't work, or doesn't make sense,
 or if
  you
  need an address to send gifts, etc. Hope it's solved!
 
  -Evan
 
  On Thu, Aug 23, 2012 at 11:20 AM, Sam Su susltd...@gmail.com
 wrote:
 
  Hi Oleg,
 
  Thank you for your investigation. Good luck!
 
  Can you let me know if you find out how to fix the bug?
 
  Thanks,
  Sam
 
  On Wed, Aug 22, 2012 at 12:50 PM, Oleg Gelbukh 
 ogelb...@mirantis.com
  wrote:
 
  Hello,
 
  Is it possible that, during snapshotting, libvirt just tears down the
  virtual interface at some point and then re-creates it, with hairpin_mode
  disabled again?
  This bugfix [https://bugs.launchpad.net/nova/+bug/933640] implies that
  the fix works on instance spawn. This means that upon resume after a
  snapshot, hairpin is not restored. Maybe if we insert the _enable_hairpin()
  call into the snapshot procedure, it will help.
  We're currently investigating this issue in one of our environments and
  hope to come up with an answer by tomorrow.
 
 

Re: [Openstack] Future of Launchpad OpenStack mailing list (this list)

2012-08-27 Thread Brian Schott
Stef,

It's pretty obvious to me that there should be a general list at 
openst...@lists.openstack.org.  The operators list is intended for operations 
people that host OpenStack deployments, not a general OpenStack user audience.  
I'd create the general openstack list, and set up a daily post to the LP list 
stating that the LP list will shut down on the last day of the summit, along 
with descriptions and links to the foundation mailing lists and their purpose. 
Make the exact same info available on the wiki and in the docs.  The sooner we 
end the "which list is the right one" ambiguity, the better.

In terms of the old archives, can you export the old LP-hosted mailing list 
archives?  If so, the mailman archive file format is brain dead simple and a 
grad student somewhere could perl script it (or python it or ruby it or 
whatever they use these days) in an hour or so.  If not, it is OK to just link 
the old archives in the description of the new lists.
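
For instance, if the export comes out as (or can be converted to) a plain mbox 
file, Python's standard mailbox module can already walk it; something like this 
(just a sketch, the file name is made up):

    import mailbox

    # Print one line per archived post: date, sender, subject.
    for msg in mailbox.mbox('openstack-launchpad-archive.mbox'):
        print('%s | %s | %s' % (msg.get('Date', '?'),
                                msg.get('From', '?'),
                                msg.get('Subject', '(no subject)')))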

Brian


On Aug 27, 2012, at 6:54 PM, Stefano Maffulli stef...@openstack.org wrote:

 Hello folks
 
 picking up this comment on the Development mailing list:
 
 On Mon 27 Aug 2012 02:08:48 PM PDT, Jason Kölker wrote:
 I've noticed that both this list and the old launchpad lists are being
 used. Which is the correct list?
 
 I sent the following message, with questions at the end that are better
 answered on this list.
 
 The mailing list situation *at the moment* is summarized on
 http://wiki.openstack.org/MailingLists
 
 To try to answer your question, the mailing list  for the developers of
 OpenStack to discuss development issues and roadmap is
 openstack-...@lists.openstack.org. It is focused on the
 next release of OpenStack: you should post on this list if you are a
 contributor to OpenStack or are very familiar with OpenStack
 development and want to discuss very precise topics, contribution ideas
 and similar. Do not ask support requests on this list.
 
 
 The old Launchpad list (this list) should be closed so we don't rely on
 Launchpad for mailing list anymore. Last time we talked about this I
 don't think we reached consensus on how to move things around and where
 to land this General mailing list. A few people suggested to use the
 existing openstack-operators mailing list as General list, therefore
 not creating anything new.
 
 Moving a group of over 4000 people from one list on Launchpad to
 another on our mailman is scary. Unfortunately we can't export the list
 of email addresses subscribed to Launchpad and invite them to another
 list (LP doesn't allow that).  The first question is:
 
 * where would people go for general openstack usage questions (is
 'operators' the  best fit?)
 
 * Then, what do we do with Launchpad mailing list archives?
 
 If we find an agreement we can aim at closing the old LP-hosted mailing
 list around the summit, where we will be able to announce the new list
 destination to many people.
 
 Thoughts?
 
 /stef
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] HELP: All instances automatically rebooted.

2012-08-27 Thread Sam Su
Hi,

I have an Essex cluster with 6 compute nodes and one control node. All
compute nodes are running without any interruption, yet for some reason all
instances in my cluster rebooted automatically. I have been trying to figure
out why this happened for the past couple of days, without success.

It's much appreciated if someone can give me some hints about how to deal
with this situation.

Logs in my /var/log/upstart/nova-compute.log:
http://pastebin.com/WYJtS5a5

Let me know if you need more info.

TIA,
Sam
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] HELP: All instances automatically rebooted.

2012-08-27 Thread Gabe Westmaas
Hey Sam,

Is it possible your hypervisors restarted?  I see this entry in the logs:

2012-08-23 06:35:02 INFO nova.compute.manager 
[req-f1598257-3f35-40e6-b5aa-d47a0e93bfba None None] [instance: 
ce00ff1d-cf46-44de-9557-c5a0f91c8d67] Rebooting instance after nova-compute 
restart.

Gabe

From: openstack-bounces+gabe.westmaas=rackspace@lists.launchpad.net 
[mailto:openstack-bounces+gabe.westmaas=rackspace@lists.launchpad.net] On 
Behalf Of Sam Su
Sent: Monday, August 27, 2012 8:10 PM
To: openstack
Subject: [Openstack] HELP: All instances automatically rebooted.

Hi,

I have an Essex cluster with 6 compute nodes and one control node. All compute 
nodes are running without any interruption, yet for some reason all instances 
in my cluster rebooted automatically. I have been trying to figure out why this 
happened for the past couple of days, without success.

It's much appreciated if someone can give me some hints about how to deal with 
this situation.

Logs in my /var/log/upstart/nova-compute.log:
http://pastebin.com/WYJtS5a5

Let me know if you need more info.

TIA,
Sam

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Future of Launchpad OpenStack mailing list (this list)

2012-08-27 Thread Andrew Clay Shafer
There are at least end users, operators and developers in the OpenStack
technical ecosystem.

There are also 'OpenStack' related discussions that aren't technical in
nature.

It doesn't seem right to make the operators list a catchall.

For exporting from Launchpad, surely someone at Canonical would be able and
willing to get that list of emails.

If people think migrating the archive is important, then it shouldn't be
that hard to sort that out either, once we decide what is acceptable.

regards,
Andrew


On Mon, Aug 27, 2012 at 7:38 PM, Brian Schott 
brian.sch...@nimbisservices.com wrote:

 Stef,

 It's pretty obvious to me that there should be a general list at
 openst...@lists.openstack.org.  The operators list is intended for
 operations people that host OpenStack deployments, not a general OpenStack
 user audience.  I'd create the general openstack list, and setup a daily
 post to the LP list stating that the LP list will shut down on the last day
 of the summit along with descriptions and links to the foundation mailing
 lists and their purpose. Make the exact same info available on the wiki and
 in the docs.  The sooner we end the right list ambiguity the better.

 In terms of the old archives, can you export the old LP-hosted mailing
 list archives?  If so, the mailman archive file format is brain dead simple
 and a grad student somewhere could perl script it (or python it or ruby it
 or whatever they use these days) in an hour or so.  If not, it is OK to
 just link the old archives in the description of the new lists.

 Brian


 On Aug 27, 2012, at 6:54 PM, Stefano Maffulli stef...@openstack.org
 wrote:

  Hello folks
 
  picking up this comment on the Development mailing list:
 
  On Mon 27 Aug 2012 02:08:48 PM PDT, Jason Kölker wrote:
  I've noticed that both this list and the old launchpad lists are being
  used. Which is the correct list?
 
  I sent the following message, with questions at the end that are better
  answered on this list.
 
  The mailing list situation *at the moment* is summarized on
  http://wiki.openstack.org/MailingLists
 
  To try to answer your question, the mailing list  for the developers of
  OpenStack to discuss development issues and roadmap is
  openstack-...@lists.openstack.org. It is focused on the
  next release of OpenStack: you should post on this list if you are a
  contributor to OpenStack or are very familiar with OpenStack
  development and want to discuss very precise topics, contribution ideas
  and similar. Do not ask support requests on this list.
 
 
  The old Launchpad list (this list) should be closed so we don't rely on
  Launchpad for mailing list anymore. Last time we talked about this I
  don't think we reached consensus on how to move things around and where
  to land this General mailing list. A few people suggested to use the
  existing openstack-operators mailing list as General list, therefore
  not creating anything new.
 
  Moving a group of over 4000 people from one list on Launchpad to
  another on our mailman is scary. Unfortunately we can't export the list
  of email addresses subscribed to Launchpad and invite them to another
  list (LP doesn't allow that).  The first question is:
 
  * where would people go for general openstack usage questions (is
  'operators' the  best fit?)
 
  * Then, what do we do with Launchpad mailing list archives?
 
  If we find an agreement we can aim at closing the old LP-hosted mailing
  list around the summit, where we will be able to announce the new list
  destination to many people.
 
  Thoughts?
 
  /stef
 
  ___
  Mailing list: https://launchpad.net/~openstack
  Post to : openstack@lists.launchpad.net
  Unsubscribe : https://launchpad.net/~openstack
  More help   : https://help.launchpad.net/ListHelp


 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [openstack-dev] Discussion about where to put database for bare-metal provisioning (review 10726)

2012-08-27 Thread Michael J Fork

VTJ NOTSU Arata no...@virtualtech.jp wrote on 08/27/2012 07:30:40 PM:

 From: VTJ NOTSU Arata no...@virtualtech.jp
 To: Michael J Fork/Rochester/IBM@IBMUS,
 Cc: David Kang dk...@isi.edu, openstack@lists.launchpad.net
 (openstack@lists.launchpad.net) openstack@lists.launchpad.net,
 openstack-bounces+mjfork=us.ibm@lists.launchpad.net, OpenStack
 Development Mailing List openstack-...@lists.openstack.org
 Date: 08/27/2012 07:30 PM
 Subject: Re: [Openstack] [openstack-dev] Discussion about where to
 put database for bare-metal provisioning (review 10726)

 Hi Michael,

  Looking at line 203 in nova/scheduler/filter_scheduler.py, the
 target host in the cast call is weighted_host.host_state.host
 and not a service host. (My guess is this will likely require a fair
 number of changes in the scheduler area to change cast calls to
 target a service host instead of a compute node.)

 weighted_host.host_state.host still seems to be service['host']...
 Please look at it again with me.

 # First, HostManager.get_all_host_states:
 # host_manager.py:264
     compute_nodes = db.compute_node_get_all(context)
     for compute in compute_nodes:
         # service is from the services table (joined-loaded with compute_nodes)
         service = compute['service']
         if not service:
             LOG.warn(_("No service for compute ID %s") % compute['id'])
             continue
         host = service['host']
         capabilities = self.service_states.get(host, None)
         # go to the HostState constructor:
         # the 1st parameter 'host' is service['host']
         host_state = self.host_state_cls(host, topic,
                                          capabilities=capabilities,
                                          service=dict(service.iteritems()))

 # host_manager.py:101
     def __init__(self, host, topic, capabilities=None, service=None):
         self.host = host
         self.topic = topic
         # here, HostState.host is service['host']

 Then, update_from_compute_node(compute) is called, but it leaves
 self.host unchanged.
 WeightedHost.host_state is this HostState. So, the host at
 filter_scheduler.py:203 is service['host']. We can use the existing
 RPC-target code. Am I missing something?

Agreed, you can use the existing RPC target.  Sorry for the confusion.
This actually answers the question in David's last e-mail asking if the
host field can be used from the services table - it already is.

 Thanks,
 Arata



BIG SNIP


Michael

-
Michael Fork
Cloud Architect, Emerging Solutions
IBM Systems  Technology Group___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] HELP: All instances automatically rebooted.

2012-08-27 Thread Alejandro Comisario
One of the things I don't like in Essex:
the autostart flag in nova.conf for KVM doesn't work with the
autostart feature of libvirt/KVM, so if for some reason you need to
restart nova-compute to apply some kind of modification, the instances get
soft/hard rebooted, because nova-compute now handles the autostart flag
from nova.conf.

Can someone explain why that is?
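
What I mean is the kind of decision nova-compute makes in init_host() when the
service (re)starts; a simplified sketch (not the exact Essex code, and the real
flag names may differ slightly):

    RUNNING = 'running'

    def should_reboot_on_compute_restart(db_state, driver_state,
                                         resume_guests_state_on_host_boot,
                                         start_guests_on_host_boot):
        # The DB says the instance should be running but the hypervisor
        # disagrees (e.g. the host, or libvirt, was restarted underneath us).
        expect_running = (db_state == RUNNING and driver_state != db_state)
        # This is the branch that logs
        # 'Rebooting instance after nova-compute restart.'
        return ((expect_running and resume_guests_state_on_host_boot)
                or start_guests_on_host_boot)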

On Mon, Aug 27, 2012 at 9:21 PM, Gabe Westmaas
gabe.westm...@rackspace.comwrote:

  Hey Sam,

 Is it possible your hypervisors restarted?  I see this entry in the logs:

 2012-08-23 06:35:02 INFO nova.compute.manager
 [req-f1598257-3f35-40e6-b5aa-d47a0e93bfba None None] [instance:
 ce00ff1d-cf46-44de-9557-c5a0f91c8d67] Rebooting instance after nova-compute
 restart.

 Gabe

 From: openstack-bounces+gabe.westmaas=rackspace@lists.launchpad.net
 [mailto:openstack-bounces+gabe.westmaas=rackspace@lists.launchpad.net] On
 Behalf Of Sam Su
 Sent: Monday, August 27, 2012 8:10 PM
 To: openstack
 Subject: [Openstack] HELP: All instances automatically rebooted.

 Hi,

 I have an Essex cluster with 6 compute nodes and one control node. All
 compute nodes are running without any interruption, yet for some reason all
 instances in my cluster rebooted automatically. I have been trying to figure
 out why this happened for the past couple of days, without success.

 It's much appreciated if someone can give me some hints about how to deal
 with this situation.

 Logs in my /var/log/upstart/nova-compute.log:

 http://pastebin.com/WYJtS5a5

 Let me know if you need more info.


 TIA,

 Sam

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp




-- 
Alejandro
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] HELP: All instances automatically rebooted.

2012-08-27 Thread Sam Su
Thanks for your help, guys!
I guess this problem may have been caused by an automatic upgrade of the nova
packages.

I just found these two lines in the file /var/log/kern.log:
Aug 23 06:34:33 cnode-01 kernel: [4955691.256036] init: nova-network main
process (9191) terminated with status 143
Aug 23 06:34:35 cnode-01 kernel: [4955693.402082] init: nova-compute main
process (9275) terminated with status 143

Here is the link for detail of /var/log/kern.log
http://pastebin.com/GqH1ju1R

Also found these info in /var/log/dpkg.log:
2012-08-23 06:34:33 upgrade nova-network
2012.1+stable~20120612-3ee026e-0ubuntu1.2
2012.1+stable~20120612-3ee026e-0ubuntu1.3
2012-08-23 06:34:33 status half-configured nova-network
2012.1+stable~20120612-3ee026e-0ubuntu1.2
2012-08-23 06:34:33 status unpacked nova-network
2012.1+stable~20120612-3ee026e-0ubuntu1.2
2012-08-23 06:34:33 status half-installed nova-network
2012.1+stable~20120612-3ee026e-0ubuntu1.2
2012-08-23 06:34:34 status triggers-pending ureadahead 0.100.0-12
2012-08-23 06:34:34 status half-installed nova-network
2012.1+stable~20120612-3ee026e-0ubuntu1.2
2012-08-23 06:34:34 status triggers-pending ureadahead 0.100.0-12
2012-08-23 06:34:34 status triggers-pending man-db 2.6.1-2
2012-08-23 06:34:34 status half-installed nova-network
2012.1+stable~20120612-3ee026e-0ubuntu1.2
2012-08-23 06:34:34 status half-installed nova-network
2012.1+stable~20120612-3ee026e-0ubuntu1.2
2012-08-23 06:34:34 status unpacked nova-network
2012.1+stable~20120612-3ee026e-0ubuntu1.3
2012-08-23 06:34:34 status unpacked nova-network
2012.1+stable~20120612-3ee026e-0ubuntu1.3
2012-08-23 06:34:34 upgrade nova-compute-kvm
2012.1+stable~20120612-3ee026e-0ubuntu1.2
2012.1+stable~20120612-3ee026e-0ubuntu1.3
2012-08-23 06:34:34 status half-configured nova-compute-kvm
2012.1+stable~20120612-3ee026e-0ubuntu1.2
2012-08-23 06:34:35 status unpacked nova-compute-kvm
2012.1+stable~20120612-3ee026e-0ubuntu1.2
2012-08-23 06:34:35 status half-installed nova-compute-kvm
2012.1+stable~20120612-3ee026e-0ubuntu1.2
2012-08-23 06:34:35 status half-installed nova-compute-kvm
2012.1+stable~20120612-3ee026e-0ubuntu1.2
2012-08-23 06:34:35 status unpacked nova-compute-kvm
2012.1+stable~20120612-3ee026e-0ubuntu1.3
2012-08-23 06:34:35 status unpacked nova-compute-kvm
2012.1+stable~20120612-3ee026e-0ubuntu1.3
2012-08-23 06:34:35 upgrade nova-compute
2012.1+stable~20120612-3ee026e-0ubuntu1.2
2012.1+stable~20120612-3ee026e-0ubuntu1.3

Here is detail:
http://pastebin.com/juiSxCue

But I am not 100% sure of this. My understanding is that an exit status above
128 encodes 128 + the signal number, so 143 would mean the process was stopped
with SIGTERM (15), presumably when the package upgrade restarted the services.
Can anyone confirm what a status of 143 means on Ubuntu 12.04?
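
A quick way to double-check that reading (just an illustrative snippet, nothing
nova-specific):

    import signal

    status = 143
    if status > 128:
        signum = status - 128   # shells and upstart report 128 + signal number
        names = dict((getattr(signal, n), n) for n in dir(signal)
                     if n.startswith('SIG') and not n.startswith('SIG_'))
        # prints: terminated by signal 15 (SIGTERM)
        print('terminated by signal %d (%s)' % (signum, names.get(signum, '?')))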

Thanks ahead,
Sam


On Mon, Aug 27, 2012 at 5:56 PM, Alejandro Comisario 
alejandro.comisa...@mercadolibre.com wrote:

 One of the things I don't like in Essex:
 the autostart flag in nova.conf for KVM doesn't work with the
 autostart feature of libvirt/KVM, so if for some reason you need to
 restart nova-compute to apply some kind of modification, the instances get
 soft/hard rebooted, because nova-compute now handles the autostart flag
 from nova.conf.

 Can someone explain why that is?

 On Mon, Aug 27, 2012 at 9:21 PM, Gabe Westmaas 
 gabe.westm...@rackspace.com wrote:

  Hey Sam,

 Is it possible your hypervisors restarted?  I see this entry in the logs:

 2012-08-23 06:35:02 INFO nova.compute.manager
 [req-f1598257-3f35-40e6-b5aa-d47a0e93bfba None None] [instance:
 ce00ff1d-cf46-44de-9557-c5a0f91c8d67] Rebooting instance after nova-compute
 restart.

 Gabe

 From: openstack-bounces+gabe.westmaas=rackspace@lists.launchpad.net
 [mailto:openstack-bounces+gabe.westmaas=rackspace@lists.launchpad.net] On
 Behalf Of Sam Su
 Sent: Monday, August 27, 2012 8:10 PM
 To: openstack
 Subject: [Openstack] HELP: All instances automatically rebooted.

 Hi,

 I have an Essex cluster with 6 compute nodes and one control node. All
 compute nodes are running without any interruption, yet for some reason all
 instances in my cluster rebooted automatically. I have been trying to figure
 out why this happened for the past couple of days, without success.

 It's much appreciated if someone can give me some hints about how to deal
 with this situation.

 Logs in my /var/log/upstart/nova-compute.log:

 http://pastebin.com/WYJtS5a5

 Let me know if you need more info.


 TIA,

 Sam

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp




 --
 Alejandro

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp