Re: [Openstack] unexpected distribution of compute instances in queens

2018-12-03 Thread Jay Pipes

On 11/30/2018 05:52 PM, Mike Carden wrote:


Have you set the placement_randomize_allocation_candidates CONF option
and are still seeing the packing behaviour?


No, I haven't. Where would be the place to do that? In a nova.conf
somewhere that the nova-scheduler containers on the controller hosts 
could pick it up?


Just about to deploy for realz with about forty x86 compute nodes, so it 
would be really nice to sort this first. :)


Presuming you are deploying Rocky or Queens, it goes in the nova.conf file under the [placement] section:

randomize_allocation_candidates = true

The nova.conf file should be the one used by nova-scheduler.
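
For example, a minimal sketch of the relevant nova.conf fragment on the scheduler host:

[placement]
randomize_allocation_candidates = true

Restart the nova-scheduler service after making the change so the option is picked up.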

Best,
-jay


Re: [Openstack] unexpected distribution of compute instances in queens

2018-11-30 Thread Jay Pipes

On 11/30/2018 02:53 AM, Mike Carden wrote:

I'm seeing a similar issue in Queens deployed via tripleo.

Two x86 compute nodes and one ppc64le node and host aggregates for 
virtual instances and baremetal (x86) instances. Baremetal on x86 is 
working fine.


All VMs get deployed to compute-0. I can live migrate VMs to compute-1 
and all is well, but I tire of being the 'meatspace scheduler'.


LOL, I love that term and will have to remember to use it in the future.

I've looked at the nova.conf in the various nova-xxx containers on the 
controllers, but I have failed to discern the root of this issue.


Have you set the placement_randomize_allocation_candidates CONF option 
and are still seeing the packing behaviour?


Best,
-jay


Re: [Openstack] unexpected distribution of compute instances in queens

2018-11-28 Thread Jay Pipes

On 11/28/2018 02:50 AM, Zufar Dhiyaulhaq wrote:

Hi,

Thank you. I was able to fix this issue by adding this configuration to the nova configuration file on the controller node.


driver=filter_scheduler


That's the default:

https://docs.openstack.org/ocata/config-reference/compute/config-options.html

So that was definitely not the solution to your problem.

My guess is that Sean's suggestion to randomize the allocation 
candidates fixed your issue.


Best,
-jay


Re: [Openstack] VMs cannot fetch metadata

2018-11-06 Thread Jay Pipes

https://bugs.launchpad.net/neutron/+bug/1777640

Best,
-jay

On 11/06/2018 08:21 AM, Terry Lundin wrote:

Hi all,

I've been struggling with instances suddenly not being able to fetch 
metadata from OpenStack Queens (this has worked fine earlier).


Newly created VMs fail to connect to the magic IP, e.g.
http://169.254.169.254/, and won't initialize properly. Subsequently ssh 
login will fail since no key is uploaded.


The symptom is failed requests in the log

*Cirros:*
Starting network...
udhcpc (v1.20.1) started
Sending discover...
Sending select for 10.0.0.18...
Lease of 10.0.0.18 obtained, lease time 86400
route: SIOCADDRT: File exists
WARN: failed: route add -net "0.0.0.0/0" gw "10.0.0.1"
cirros-ds 'net' up at 0.94
checking http://169.254.169.254/2009-04-04/instance-id
failed 1/20: up 0.94. request failed
failed 2/20: up 3.01. request failed
failed 3/20: up 5.03. request failed
failed 4/20: up 7.04. request failed

*..and on Centos6:*
ci-info: +-------+-----------------+----------+-----------------+-----------+-------+
ci-info: | Route |   Destination   | Gateway  |     Genmask     | Interface | Flags |
ci-info: +-------+-----------------+----------+-----------------+-----------+-------+
ci-info: |   0   | 169.254.169.254 | 10.0.0.1 | 255.255.255.255 |    eth0   |  UGH  |
ci-info: |   1   |     10.0.0.0    | 0.0.0.0  |  255.255.255.0  |    eth0   |   U   |
ci-info: |   2   |     0.0.0.0     | 10.0.0.1 |     0.0.0.0     |    eth0   |   UG  |
ci-info: +-------+-----------------+----------+-----------------+-----------+-------+
2018-11-06 08:10:07,892 - url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [0/120s]: unexpected error ['NoneType' object has no attribute 'status_code']
2018-11-06 08:10:08,906 - url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [1/120s]: unexpected error ['NoneType' object has no attribute 'status_code']
2018-11-06 08:10:09,925 - url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [2/120s]: unexpected error ['NoneType' object has no attribute 'status_code']
...

Using curl manually, e.g. 'curl http://169.254.169.254/openstack/', one gets:

curl: (52) Empty reply from server

*At the same time this error is showing up in the syslog on the controller:*

Nov  6 12:51:01 controller neutron-metadata-agent[3094]:   File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 460, in fire_timers
Nov  6 12:51:01 controller neutron-metadata-agent[3094]:     timer()
Nov  6 12:51:01 controller neutron-metadata-agent[3094]:   File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/timer.py", line 59, in __call__
Nov  6 12:51:01 controller neutron-metadata-agent[3094]:     cb(*args, **kw)
Nov  6 12:51:01 controller neutron-metadata-agent[3094]:   File "/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py", line 219, in main
Nov  6 12:51:01 controller neutron-metadata-agent[3094]:     result = function(*args, **kwargs)
Nov  6 12:51:01 controller neutron-metadata-agent[3094]:   File "/usr/local/lib/python2.7/dist-packages/eventlet/wsgi.py", line 793, in process_request
Nov  6 12:51:01 controller neutron-metadata-agent[3094]:     proto.__init__(conn_state, self)
Nov  6 12:51:01 controller neutron-metadata-agent[3094]: TypeError: __init__() takes exactly 4 arguments (3 given)
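
The TypeError above points at the eventlet incompatibility tracked in the bug linked at the top of this message. A quick way to confirm which eventlet version the agent is actually importing (a sketch; assumes the pip-installed python2.7 environment shown in the traceback):

pip show eventlet | grep '^Version'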


*Neither rebooting the controller, reinstalling neutron, nor restarting 
the services does anything to fix this.*


Has anyone else seen this? We are using Queens with a single controller.

Kind Regards

Terje Lundin








Re: [Openstack] [diskimage-builder] Element pip-and-virtualenv failed to install pip

2018-10-08 Thread Jay Pipes

On 09/07/2018 03:46 PM, Hang Yang wrote:

Hi there,

I'm new to the DIB tool and ran into an issue when using DIB 2.16.0 to 
build a CentOS-based image with the pip-and-virtualenv element. It failed at 
https://github.com/openstack/diskimage-builder/blob/master/diskimage_builder/elements/pip-and-virtualenv/install.d/pip-and-virtualenv-source-install/04-install-pip#L78 
because it cannot find the pip command.


I found /tmp/get_pip.py was there but completely empty. I had to 
manually add a wget step to retrieve get_pip.py right before the 
failing step, and then it worked. But shouldn't get_pip.py be downloaded 
automatically by this 
https://github.com/openstack/diskimage-builder/blob/master/diskimage_builder/elements/pip-and-virtualenv/source-repository-pip-and-virtualenv 
? Does anyone know how this issue could happen? Thanks in advance for 
any help.


Hi Hang,

Are you using a package or a source-based installation for your dib? The 
reason I ask is because from the docs it seems that the installation 
procedure for pip is quite different depending on whether you're using a 
package or source-based install.
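
If the install type turns out to be the problem, it can be pinned explicitly per element. A sketch, assuming the standard DIB per-element install-type switch (verify the variable name against your DIB version):

export DIB_INSTALLTYPE_pip_and_virtualenv=package
disk-image-create -o centos-image centos7 vm pip-and-virtualenv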


Best,
-jay



Re: [Openstack] [nova][cinder] Migrate instances between regions or between clusters?

2018-09-17 Thread Jay Pipes

On 09/17/2018 09:39 AM, Peter Penchev wrote:

Hi,

So here's a possibly stupid question - or rather, a series of such :)
Let's say a company has two (or five, or a hundred) datacenters in
geographically different locations and wants to deploy OpenStack in both.
What would be a deployment scenario that would allow relatively easy
migration (cold, not live) of instances from one datacenter to another?

My understanding is that for servers located far away from one another
regions would be a better metaphor than availability zones, if only
because it would be faster for the various storage, compute, etc.
services to communicate with each other for the common case of doing
actions within the same datacenter.  Is this understanding wrong - is it
considered all right for groups of servers located in far away places to
be treated as different availability zones in the same cluster?

If the groups of servers are put in different regions, though, this
brings me to the real question: how can an instance be migrated across
regions?  Note that the instance will almost certainly have some
shared-storage volume attached, and assume (not quite the common case,
but still) that the underlying shared storage technology can be taught
about another storage cluster in another location and can transfer
volumes and snapshots to remote clusters.  From what I've found, there
are three basic ways:

- do it pretty much by hand: create snapshots of the volumes used in
   the underlying storage system, transfer them to the other storage
   cluster, then tell the Cinder volume driver to manage them, and spawn
   an instance with the newly-managed newly-transferred volumes


Yes, this is a perfectly reasonable solution. In fact, when I was at 
AT&T, this was basically how we allowed tenants to spin up instances in 
multiple regions: snapshot the instance, it gets stored in the Swift 
storage for the region, tenant starts the instance in a different 
region, and Nova pulls the image from the Swift storage in the other 
region. It's slow the first time it's launched in the new region, of 
course, since the bits need to be pulled from the other region's Swift 
storage, but after that, local image caching speeds things up quite a bit.
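
For illustration, that flow maps onto the openstack CLI roughly as follows (a sketch; the region names, flavor, and qcow2 image format are assumptions):

# source region: snapshot the instance and download the image
openstack --os-region-name RegionOne server image create --name my-snap my-server
openstack --os-region-name RegionOne image save --file my-snap.qcow2 my-snap

# destination region: upload the image and boot from it
openstack --os-region-name RegionTwo image create --disk-format qcow2 \
  --container-format bare --file my-snap.qcow2 my-snap
openstack --os-region-name RegionTwo server create --image my-snap \
  --flavor m1.medium my-new-server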


This isn't migration, though. Namely, the tenant doesn't keep their 
instance ID, their instance's IP addresses, or anything like that.


I've heard some users care about that stuff, unfortunately, which is why 
we have shelve [offload]. There's absolutely no way to perform a 
cross-region migration that keeps the instance ID and instance IP addresses.



- use Cinder to backup the volumes from one region, then restore them to
   the other; if this is combined with a storage-specific Cinder backup
   driver that knows that "backing up" is "creating a snapshot" and
   "restoring to the other region" is "transferring that snapshot to the
   remote storage cluster", it seems to be the easiest way forward (once
   the Cinder backup driver has been written)


Still won't have the same instance ID and IP address, which is what 
certain users tend to complain about needing with move operations.



- use Nova's "server image create" command, transfer the resulting
   Glance image somehow (possibly by downloading it from the Glance
storage in one region and simultaneously uploading it to the Glance
   instance in the other), then spawn an instance off that image


Still won't have the same instance ID and IP address :)

Best,
-jay


The "server image create" approach seems to be the simplest one,
although it is a bit hard to imagine how it would work without
transferring data unnecessarily (the online articles I've seen
advocating it seem to imply that a Nova instance in a region cannot be
spawned off a Glance image in another region, so there will need to be
at least one set of "download the image and upload it to the other
side", even if the volume-to-image and image-to-volume transfers are
instantaneous, e.g. using glance-cinderclient).  However, when I tried
it with a Nova instance backed by a StorPool volume (no ephemeral image
at all), the Glance image was zero bytes in length and only its metadata
contained some information about a volume snapshot created at that
point, so this seems once again to go back to options 1 and 2 for the
different ways to transfer a Cinder volume or snapshot to the other
region.  Or have I missed something, is there a way to get the "server
image create / image download / image create" route to handle volumes
attached to the instance?

So... have I missed something else, too, or are these the options for
transferring a Nova instance between two distant locations?

Thanks for reading this far, and thanks in advance for your help!

Best regards,
Peter




Re: [Openstack] [nova] Nova-scheduler: when are filters applied?

2018-09-03 Thread Jay Pipes

On 09/03/2018 07:27 AM, Eugen Block wrote:

Hi,

To echo what cfriesen said, if you set your allocation ratio to 1.0, 
the system will not overcommit memory. Shut down instances consume 
memory from an inventory management perspective. If you don't want any 
danger of an instance causing an OOM, you must set your 
ram_allocation_ratio to 1.0.


let's forget about the scheduler, I'll try to make my question a bit 
clearer.


Let's say I have a ratio of 1.0 on my hypervisor, and let it have 24 GB 
of RAM available, ignoring the OS for a moment. Now I launch 6 
instances, each with a flavor requesting 4 GB of RAM, that would leave 
no space for further instances, right?
Then I shutdown two instances (freeing 8 GB RAM) and create a new one 
with 8 GB of RAM, the compute node is full again (assuming all instances 
actually consume all of their RAM).
Now I boot one of the shutdown instances again, the compute node would 
require additional 4 GB of RAM for that instance, and this would lead to 
OOM, isn't that correct? So a ratio of 1.0 would not prevent that from 
happening, would it?


I'm not entirely sure what you mean by "shut down an instance". Perhaps 
this is what is leading to confusion. I consider "shutting down an 
instance" to be stopping or suspending an instance.


As I mentioned below, shutdown instances consume memory from an 
inventory management perspective. If you stop or suspend an instance on 
your host, that instance is still consuming the same amount of memory in 
the placement service. You will *not* be able to launch a new instance 
on that same compute host *unless* your allocation ratio is >1.0.
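
To make the arithmetic concrete with the 24 GB example above: 6 x 4 GB = 24 GB of allocations against 24 GB of inventory. Stopping two instances releases nothing in placement, so the new 8 GB instance would need 24 + 8 = 32 GB against a limit of 24 * ram_allocation_ratio. With a ratio of 1.0 the request fails placement and never lands on the host; with a ratio of, say, 1.5 (a 36 GB limit) it lands, and since the host only has 24 GB of physical RAM, booting the stopped instances again is what can trigger the OOM killer.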


Now, if by "shut down an instance", you actually mean "terminate an 
instance" or possibly "shelve and then offload an instance", then that 
is a different thing, and in both of *those* cases, resources are 
released on the compute host.


Best,
-jay


Quoting Jay Pipes:


On 08/30/2018 10:54 AM, Eugen Block wrote:

Hi Jay,

You need to set your ram_allocation_ratio nova.CONF option to 1.0 if 
you're running into OOM issues. This will prevent overcommit of 
memory on your compute nodes.


I understand that; the overcommitment works quite well most of the time.

It just has been an issue twice when I booted an instance that had 
been shutdown a while ago. In the meantime there were new instances 
created on that hypervisor, and this old instance caused the OOM.


I would expect that with a ratio of 1.0 I would experience the same 
issue, wouldn't I? As far as I understand the scheduler only checks 
at instance creation, not when booting existing instances. Is that a 
correct assumption?


To echo what cfriesen said, if you set your allocation ratio to 1.0, 
the system will not overcommit memory. Shut down instances consume 
memory from an inventory management perspective. If you don't want any 
danger of an instance causing an OOM, you must set your 
ram_allocation_ratio to 1.0.


The scheduler doesn't really have anything to do with this.

Best,
-jay







Re: [Openstack] The problem of how to update resource allocation ratio dynamically.

2018-08-31 Thread Jay Pipes

On 08/23/2018 11:01 PM, 余婷婷 wrote:

Hi:
    Sorry for bothering everyone. I have updated my OpenStack to Queens and 
use the nova-placement-api to provide resources.
    When I use "/resource_providers/{uuid}/inventories/MEMORY_MB" to 
update the memory_mb allocation_ratio, it succeeds. But after a few 
minutes it reverts to the old value automatically. I found that 
nova-compute automatically reports the value from the compute_node 
record, and the allocation_ratio of the compute_node comes from nova.conf. 
Does that mean we can't update the allocation_ratio until we update 
nova.conf? I wish to update the allocation_ratio dynamically rather than 
by editing nova.conf, but I don't know how to update the resource 
allocation ratio dynamically.
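
(For reference, the call described above looks roughly like the following sketch; the numbers are illustrative, and resource_provider_generation must match the provider's current generation since the PUT replaces the whole inventory record:)

PUT /resource_providers/{uuid}/inventories/MEMORY_MB
{
    "resource_provider_generation": 5,
    "total": 131072,
    "reserved": 512,
    "min_unit": 1,
    "max_unit": 131072,
    "step_size": 1,
    "allocation_ratio": 1.5
}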


We are attempting to determine what is going on with the allocation 
ratios being improperly set on the following bug:


https://bugs.launchpad.net/nova/+bug/1789654

Please bear with us as we try to fix it.

Best,
-jay



Re: [Openstack] [nova] Nova-scheduler: when are filters applied?

2018-08-30 Thread Jay Pipes

On 08/30/2018 10:54 AM, Eugen Block wrote:

Hi Jay,

You need to set your ram_allocation_ratio nova.CONF option to 1.0 if 
you're running into OOM issues. This will prevent overcommit of memory 
on your compute nodes.


I understand that; the overcommitment works quite well most of the time.

It just has been an issue twice when I booted an instance that had been 
shutdown a while ago. In the meantime there were new instances created 
on that hypervisor, and this old instance caused the OOM.


I would expect that with a ratio of 1.0 I would experience the same 
issue, wouldn't I? As far as I understand the scheduler only checks at 
instance creation, not when booting existing instances. Is that a 
correct assumption?


To echo what cfriesen said, if you set your allocation ratio to 1.0, the 
system will not overcommit memory. Shut down instances consume memory 
from an inventory management perspective. If you don't want any danger 
of an instance causing an OOM, you must set your ram_allocation_ratio to 1.0.


The scheduler doesn't really have anything to do with this.

Best,
-jay



Re: [Openstack] [nova]

2018-08-30 Thread Jay Pipes

On 08/30/2018 10:19 AM, Eugen Block wrote:

When does Nova apply its filters (Ram, CPU, etc.)?
Of course at instance creation and (live-)migration of existing 
instances. But what about existing instances that have been shutdown and 
in the meantime more instances on the same hypervisor have been launched?


When you start one of the pre-existing instances, even with RAM 
overcommitment, you can end up with the OOM killer forcing 
shutdowns if you reach the limits. Is there something I've been missing 
or maybe a bad configuration of my scheduler filters? Or is it the 
admin's task to keep an eye on the load?


I'd appreciate any insights or pointers to something I've missed.


You need to set your ram_allocation_ratio nova.CONF option to 1.0 if 
you're running into OOM issues. This will prevent overcommit of memory 
on your compute nodes.
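
A minimal sketch of that setting in nova.conf on the compute nodes (it lives in the [DEFAULT] section in this era's releases):

[DEFAULT]
ram_allocation_ratio = 1.0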


Best,
-jay



Re: [Openstack] [nova]Capacity discrepancy between command line and MySQL query

2018-08-28 Thread Jay Pipes

On 08/27/2018 09:40 AM, Risto Vaaraniemi wrote:

Hi,

I tried to migrate a guest to another host but it failed with a
message saying there's not enough capacity on the target host even
though the server should be nearly empty. The guest I'm trying to
move needs 4 cores, 4 GB of memory and 50 GB of disk. Each compute
node should have 20 cores, 128 GB RAM & 260 GB HD space.

When I check it with "openstack host show compute1" I see that there's
plenty of free resources. However, when I check it directly in MariaDB
nova_api or using Placement API calls I see different results i.e. not
enough cores & disk.

Is there a safe way to make the different registries / databases to
match? Can I just overwrite it using the Placement API?

I'm using Pike.

BR,
Risto

PS
I did make a few attempts to resize the guest that now runs on
compute1 but for some reason they failed and by default the resize
tries to restart the resized guest on a different host (compute1).
In the end I was able to do the resize on the same host (compute2).
I was wondering if the resize attempts messed up the compute1 resource
management.


Very likely, yes.

It's tough to say what exact sequence of resize and migrate commands 
has caused your inventory and allocation records in placement to become 
corrupted.


Have you tried restarting the nova-compute services on both compute 
nodes and seeing whether the placement service tries to adjust 
allocations upon restart?


Also, please check the logs on the nova-compute workers looking for any 
warnings or errors related to communication with placement.
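
You can also ask placement directly what it thinks is allocated on each provider. A sketch, assuming a valid token and the placement endpoint URL in $PLACEMENT:

curl -H "X-Auth-Token: $TOKEN" \
     $PLACEMENT/resource_providers/$RP_UUID/usages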


Best,
-jay




Re: [Openstack] [nova] Log files on exceeding cpu allocation limit

2018-08-08 Thread Jay Pipes
  File "/usr/lib/python2.7/site-packages/nova/scheduler/manager.py", line
139, in select_destinations
2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager raise
exception.NoValidHost(reason="")
2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager
2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager NoValidHost:
No valid host was found.
2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager
2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager
2018-08-08 09:28:36.328 1648 WARNING nova.scheduler.utils
[req-ef0d8ea1-e801-483e-b913-9148a6ac5d90
2499343cbc7a4ca5a7f14c43f9d9c229 3850596606b7459d8802a72516991a19 -
default default] Failed to compute_task_build_instances: No valid host
was found.
Traceback (most recent call last):

   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py",
line 226, in inner
 return func(*args, **kwargs)

   File "/usr/lib/python2.7/site-packages/nova/scheduler/manager.py",
line 139, in select_destinations
 raise exception.NoValidHost(reason="")

NoValidHost: No valid host was found.
: NoValidHost_Remote: No valid host was found.
2018-08-08 09:28:36.331 1648 WARNING nova.scheduler.utils
[req-ef0d8ea1-e801-483e-b913-9148a6ac5d90
2499343cbc7a4ca5a7f14c43f9d9c229 3850596606b7459d8802a72516991a19 -
default default] [instance: b466a974-06ba-459b-bc04-2ccb2b3ee720]
Setting instance to ERROR state.: NoValidHost_Remote: No valid host
was found.
### END ###
On Wed, Aug 8, 2018 at 9:45 AM Jay Pipes  wrote:


On 08/08/2018 09:37 AM, Cody wrote:

On 08/08/2018 07:19 AM, Bernd Bausch wrote:

I would think you don't even reach the scheduling stage. Why bother
looking for a suitable compute node if you exceeded your quota anyway?

The message is in the conductor log because it's the conductor that does
most of the work. The others are just slackers (like nova-api) or wait
for instructions from the conductor.

The above is my guess, of course, but IMHO a very educated one.

Bernd.


Thank you, Bernd. I didn't know the inner workflow in this case.
Initially, I thought it was for the scheduler to discover that no more
resource was left available, hence I expected to see something from
the scheduler log. My understanding now is that the quota gets checked
in the database prior to the deployment. That would explain why the
clue was in the nova-conductor.log, not the nova-scheduler.log.


Quota is checked in the nova-api node, not the nova-conductor.

As I said in my previous message, unless you paste what the logs are
that you are referring to, it's not possible to know what you are
referring to.

Best,
-jay




Re: [Openstack] [nova] Log files on exceeding cpu allocation limit

2018-08-08 Thread Jay Pipes

On 08/08/2018 09:37 AM, Cody wrote:

On 08/08/2018 07:19 AM, Bernd Bausch wrote:

I would think you don't even reach the scheduling stage. Why bother
looking for a suitable compute node if you exceeded your quota anyway?

The message is in the conductor log because it's the conductor that does
most of the work. The others are just slackers (like nova-api) or wait
for instructions from the conductor.

The above is my guess, of course, but IMHO a very educated one.

Bernd.


Thank you, Bernd. I didn't know the inner workflow in this case.
Initially, I thought it was for the scheduler to discover that no more
resource was left available, hence I expected to see something from
the scheduler log. My understanding now is that the quota gets checked
in the database prior to the deployment. That would explain why the
clue was in the nova-conductor.log, not the nova-scheduler.log.


Quota is checked in the nova-api node, not the nova-conductor.

As I said in my previous message, unless you paste what the logs are 
that you are referring to, it's not possible to know what you are 
referring to.
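
For what it's worth, current absolute usage against quota can be inspected from the CLI with something like:

openstack limits show --absolute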


Best,
-jay



Re: [Openstack] [nova] Log files on exceeding cpu allocation limit

2018-08-08 Thread Jay Pipes

On 08/08/2018 07:19 AM, Bernd Bausch wrote:

I would think you don't even reach the scheduling stage. Why bother
looking for a suitable compute node if you exceeded your quota anyway?

The message is in the conductor log because it's the conductor that does
most of the work. The others are just slackers (like nova-api) or wait
for instructions from the conductor.

The above is my guess, of course, but IMHO a very educated one.

Bernd.

On 8/8/2018 1:35 AM, Cody wrote:

Hi Jay,

Thank you for getting back to my question.

I agree that it is not an error; only a preset limit is reached. I
just wonder why this incident only got recorded in the
nova-conductor.log, but not in other files such as nova-scheduler.log,
which would make more sense to me. :-)


I gave up trying to answer this because the original poster did not 
include any information about an "error" in either the original post [1] 
or his reply.


So I have no idea what got recorded in the nova-conductor log at all.

Until I get some details I have no idea how to further answer the 
question (or even if there *is* a question still?).


[1] http://lists.openstack.org/pipermail/openstack/2018-August/046804.html


By the way, I am using the Queens release.

Regards,







Re: [Openstack] [nova] Log files on exceeding cpu allocation limit

2018-08-07 Thread Jay Pipes

On 08/07/2018 10:57 AM, Cody wrote:

Hi everyone,

I intentionally triggered an error by launching more instances than is 
allowed by the 'cpu_allocation_ratio' set on a compute node. When it 
comes to logs, the only place that contained a clue to explain the launch 
failure was the nova-conductor.log on a controller node. Why is there 
no trace in the nova-scheduler.log (or any other log) for this type of 
error?


Because it's not an error.

You exceeded the capacity of your resources, that's all.

Are you asking why there isn't a way to *check* to see whether a 
particular request to launch a VM (or multiple VMs) will exceed the 
capacity of your deployment?


Best,
-jay



Re: [Openstack] NUMA some of the time?

2018-07-16 Thread Jay Pipes

On 07/16/2018 10:30 AM, Toni Mueller wrote:


Hi Jay,

On Fri, Jul 06, 2018 at 12:46:04PM -0400, Jay Pipes wrote:

There is no current way to say "On this dual-Xeon compute node, put all
workloads that don't care about dedicated CPUs on this socket and all
workloads that DO care about dedicated CPUs on the other socket.".


it turned out that this is not what I should want to say. What I should
say instead is:

"Run all VMs on all cores, but if certain VMs suddenly spike, give them
all they ask for at the expense of everyone else, and also avoid moving
them around between cores, if possible."

The idea is that these high priority VMs are (probably) idle most of the
time, but at other times need high performance. It was thus deemed to be
a huge waste to reserve cores for them.


You're looking for something like VMWare DRS, then:

https://www.vmware.com/products/vsphere/drs-dpm.html

This isn't something Nova is looking to implement.

Best,
-jay




Re: [Openstack] NUMA some of the time?

2018-07-06 Thread Jay Pipes

Hi Tony,

The short answer is that you cannot do that today. Today, each Nova 
compute node is either "all in" for NUMA and CPU pinning or it's not.


This means that for resource-constrained environments like "The Edge!", 
there are not very good ways to finely divide up a compute node and make 
the most efficient use of its resources.


There is no current way to say "On this dual-Xeon compute node, put all 
workloads that don't care about dedicated CPUs on this socket and all 
workloads that DO care about dedicated CPUs on the other socket.".
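
For context, the "dedicated CPU" side of such a request is expressed per flavor today, e.g. (the flavor name here is illustrative):

openstack flavor set --property hw:cpu_policy=dedicated pinned.medium

What is missing is the host-side partitioning described above.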


That said, we have had lengthy discussions about tracking dedicated 
guest CPU resources and dividing up the available logical host 
processors into buckets for "shared CPU" and "dedicated CPU" workloads 
on the following spec:


https://review.openstack.org/#/c/555081/

It is not going to land in Rocky. However, we should be able to make 
good progress towards the goals in that spec in early Stein.


Best,
-jay

On 07/04/2018 11:08 AM, Toni Mueller wrote:


Hi,

I am still trying to figure how to best utilise the small set of
hardware, and discovered the NUMA configuration mechanism. It allows me
to configure reserved cores for certain VMs, but it does not seem to
allow me to say "you can share these cores, but VMs of, say, appropriate
flavour take precedence and will throw you off these cores in case they
need more power".

How can I achieve that, dynamically?

TIA!


Thanks,
Toni




Re: [Openstack] Compute node on bare metal?

2018-07-02 Thread Jay Pipes

On 07/02/2018 09:45 AM, Houssam ElBouanani wrote:

Hi,

I have recently finished installing a minimal OpenStack Queens 
environment for a school project, and was asked whether it is possible 
to deploy an additional compute node on bare metal, aka without an 
underlying operating system, in order to eliminate the operating system 
overhead and thus to maximize performance.


Whoever asked you about this must be confusing a *hypervisor* with an 
operating system. Using baremetal means you eliminate the overhead of 
the *hypervisor* (virtualization). It doesn't mean you eliminate the 
operating system. You can't do much of anything with a baremetal machine 
that has no operating system on it.


Best,
-jay



Re: [Openstack] [masakari] HA Compute & Instance Evacuation

2018-05-02 Thread Jay Pipes

On 05/02/2018 04:39 PM, Torin Woltjer wrote:

 > There is no HA behaviour for compute nodes.
 >
 > You are referring to HA of workloads running on compute nodes, not HA of
 > compute nodes themselves.
It was a mistake for me to say HA when referring to compute and 
instances. Really I want to avoid a situation where one of my compute 
hosts gives up the ghost, and all of the instances are offline until 
someone reboots them on a different host. I would like them to 
automatically reboot on a healthy compute node.


 > Check out Masakari:
 >
 > https://wiki.openstack.org/wiki/Masakari
This looks like the kind of thing I'm searching for.

I'm seeing 3 components here; I'm assuming one goes on the compute hosts and 
one or both of the others go on the control nodes?


I don't believe anything goes on the compute nodes, no. I'm pretty sure 
the Masakari API service and engine workers live on controller nodes.



Is there any documentation outlining the procedure for deploying
this? Will there be any problem running the Masakari API service on 2
machines simultaneously, sitting behind HAProxy?

Not sure. I'll leave it up to the Masakari developers to help out here. 
I've added [masakari] topic to the subject line.


Best,
-jay



Re: [Openstack] HA Compute & Instance Evacuation

2018-05-02 Thread Jay Pipes

On 05/02/2018 02:43 PM, Torin Woltjer wrote:

I am working on setting up Openstack for HA and one of the last orders of
business is getting HA behavior out of the compute nodes.


There is no HA behaviour for compute nodes.


Is there a project that will automatically evacuate instances from a
downed or failed compute host, and automatically reboot them on their
new host?

Check out Masakari:

https://wiki.openstack.org/wiki/Masakari


I'm curious what suggestions people have about this, or whatever
advice you might have. Is there a best way of getting this
functionality, or anything else I should be aware of?


You are referring to HA of workloads running on compute nodes, not HA of 
compute nodes themselves.


My advice would be to install Kubernetes on one or more VMs (with the 
VMs acting as Kubernetes nodes) and use that project's excellent 
orchestrator for daemonsets/statefulsets, which is essentially the use 
case you are describing.


The OpenStack Compute API (implemented in Nova) is not an orchestration 
API. It's a low-level infrastructure API for executing basic actions on 
compute resources.


Best,
-jay



Re: [Openstack] Meaning of each field of 'hypervisor stats show' command.

2018-01-17 Thread Jay Pipes

On 01/17/2018 12:46 PM, Jorge Luiz Correa wrote:
Hi, I would like some help understanding what each field means in the 
output of the command 'openstack hypervisor stats show':


it's an amalgamation of legacy information that IMHO should be 
deprecated from the Compute API.


FWIW, the "implementation" for this API response is basically just a 
single SQL statement issued against each Nova cell DB:


https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/api.py#L755
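
The shape of that statement is roughly the following (a sketch, not the exact query behind the link):

SELECT COUNT(*) AS count,
       SUM(vcpus) AS vcpus,
       SUM(vcpus_used) AS vcpus_used,
       SUM(memory_mb) AS memory_mb,
       SUM(memory_mb_used) AS memory_mb_used,
       SUM(local_gb) AS local_gb,
       SUM(local_gb_used) AS local_gb_used
  FROM compute_nodes
 WHERE deleted = 0;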


$ openstack hypervisor stats show
+--+-+
| Field| Value   |
+--+-+
| count| 5   |


number of hypervisor hosts in the system that are not disabled.


| current_workload | 0   |


The SUM of active boot/reboot/migrate/resize operations going on for all 
the hypervisor hosts.


What actions represent "workload"? See here:

https://github.com/openstack/nova/blob/master/nova/compute/stats.py#L45


| disk_available_least | 1848|


who knows? it's dependent on the virt driver and the disk image backing 
file and about as reliable as a one-armed guitar player.



| free_disk_gb | 1705|


theoretically should be sum(local_gb - local_gb_used) for all hypervisor 
hosts.



| free_ram_mb  | 2415293 |


theoretically should be sum(memory_mb - memory_mb_used) for all 
hypervisor hosts.



| local_gb | 2055|


amount of space, in GB, available for ephemeral disk images on the 
hypervisor hosts. if shared storage is used, this value is as useful as 
having two left feet.



| local_gb_used| 350 |


the amount of storage used for ephemeral disk images of instances on the 
hypervisor hosts. if the instances are boot-from-volume, this number is 
about as valuable as a three-dollar bill.



| memory_mb| 2579645 |


the total amount of RAM the hypervisor hosts have. this does not take 
into account the amount of reserved memory the host might have configured.



| memory_mb_used   | 164352  |


the total amount of memory allocated to guest VMs on the hypervisor hosts.


| running_vms  | 13  |


the total number of VMs on all the hypervisor hosts that are NOT in the 
DELETED or SHELVED_OFFLOADED states.


https://github.com/openstack/nova/blob/master/nova/compute/vm_states.py#L78


| vcpus| 320 |


total amount of physical CPU core-threads across all hypervisor hosts.


| vcpus_used   | 75  |
+--+-+


total number of vCPUs allocated to guests (regardless of VM state) 
across the hypervisor hosts.


Best,
-jay



Could anyone indicate the documentation that explains each one? Some of 
them are clear but others are not.


Thanks!

- JLC




Re: [Openstack] Production deployment mirantis vs tripleO

2018-01-15 Thread Jay Pipes

On 01/15/2018 12:58 PM, Satish Patel wrote:

But Fuel is an active project, isn't it?

https://docs.openstack.org/fuel-docs/latest/


No, it is no longer developed or supported.

-jay



Re: [Openstack] [openstack] [ironic] Does Ironic support that different nova-compute map to different ironic endpoint?

2018-01-02 Thread Jay Pipes

On 01/02/2018 09:10 AM, Guo James wrote:

I mean that there are two nova-compute services in an OpenStack environment.
Each nova-compute is configured to map to baremetal.
They communicate with different ironic endpoints.


I see. So, two different ironic-api service endpoints.


That means there are two ironics, one nova, and one neutron in an OpenStack environment.

Does everything go well?


Sure, that should work just fine.
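
Concretely, each nova-compute would point at its own endpoint in its nova.conf. A sketch using the Queens-era option name (verify it against your release; the hostnames are illustrative):

# nova.conf on compute host 1
[ironic]
api_endpoint = http://ironic-a.example.com:6385/v1

# nova.conf on compute host 2
[ironic]
api_endpoint = http://ironic-b.example.com:6385/v1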

Best,
-jay


Thanks


-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com]
Sent: Tuesday, January 02, 2018 8:59 PM
To: openstack@lists.openstack.org
Subject: Re: [Openstack] [openstack] [ironic] Does Ironic support that different
nova-compute map to different ironic endpoint?

On 01/02/2018 06:09 AM, Guo James wrote:

Hi guys
I know that Ironic has support multi-nova-compute.
But I am not sure whether OpenStack support the situation than every
nova-compute has a unshare ironic And these ironic share a nova and a
neutron


I'm not quite following you... what do you mean by "has a unshare ironic"?

Best,
-jay



Re: [Openstack] [openstack] [ironic] Does Ironic support that different nova-compute map to different ironic endpoint?

2018-01-02 Thread Jay Pipes

On 01/02/2018 06:09 AM, Guo James wrote:

Hi guys
I know that Ironic has support multi-nova-compute.
But I am not sure whether OpenStack support the situation than every 
nova-compute has a unshare ironic
And these ironic share a nova and a neutron


I'm not quite following you... what do you mean by "has a unshare ironic"?

Best,
-jay



Re: [Openstack] Flavor metadata quota doesn't work

2017-12-01 Thread Jay Pipes

On 12/01/2017 08:57 AM, si...@turka.nl wrote:

Hi,

I have created a flavor with the following metadata:
quota:disk_write_bytes_sec='10240'

This should limit disk writes to 10240 bytes per second (10 KB/s). I also tried it
with a higher number (100MB/s).
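
(Such a property is typically attached with something like the following; the flavor name is illustrative:)

openstack flavor set --property quota:disk_write_bytes_sec=10240 m1.small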

Using the flavor I have launched an instance and ran a write speed test.

For an unknown reason, the metadata seems to be ingored, since I can write
with 500+ MB/s to the disk:

[centos@vmthresholdtest ~]$ dd if=/dev/zero of=file.bin bs=100M count=15
conv=fdatasync
15+0 records in
15+0 records out
1572864000 bytes (1,6 GB) copied, 2,78904 s, 564 MB/s
[centos@vmthresholdtest ~]$

Running Newton.


Yeah, that functionality doesn't work. Really, not sure if it ever did.

Best,
-jay



Re: [Openstack] Traits for filter

2017-11-17 Thread Jay Pipes

On 11/17/2017 01:09 AM, Ramu, MohanX wrote:

Thank you Jay.

What I am trying to understand about the usage of custom traits is this:

I have a custom trait called "CUSTOM_ABC" which is associated with resource 
provider "Resource provider-1", so I can launch an instance whose flavor/image 
is associated with the same custom trait (CUSTOM_ABC) only on the resource 
provider "Resource provider-1".


As mentioned in my response below, we're currently working on adding 
this functionality to Nova for the Queens release. The work is in this 
patch series:


https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:validate_provider_summaries

You will need to wait for the Queens release for the complete 
traits-based scheduling functionality to be operational.


Best,
-jay


-----Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com]
Sent: Thursday, November 16, 2017 9:10 PM
To: Ramu, MohanX 
Cc: openstack@lists.openstack.org
Subject: Re: Traits for filter

On 11/16/2017 12:06 AM, Ramu, MohanX wrote:

Hi All,

I have a use case where I need to apply some filter (custom traits)
while the Placement API fetches the resource providers for launching an instance.

That way I can have a list of resource providers which meet my
condition/filter/validation. The validation is essentially about trust in
the host (compute node) where I am going to launch the instances.

The below link says that it is possible, but I don't have an idea of how to
implement/test this scenario.

https://specs.openstack.org/openstack/nova-specs/specs/ocata/implemented/resource-providers-scheduler-db-filters.html

we would rather make a HTTP call to the placement API on a specific
REST resource with a request that would return the list of resource
providers' UUIDs that would match requested resources and traits
criteria based on the original RequestSpec object.


Unfortunately, you're going to need to wait for this to be possible with the 
placement API. We're making progress here, but it's not complete yet.

You won't be using a custom filter (or any filter at all in the 
nova-scheduler). Rather, you'll simply have the required trait in the image or 
flavor and nova-scheduler will ask placement API for all providers that have 
the required traits and requested resource amounts.

We're probably 3-4 weeks away from having this code merged.

Best,
-jay





Re: [Openstack] Traits for filter

2017-11-16 Thread Jay Pipes

On 11/16/2017 12:06 AM, Ramu, MohanX wrote:

Hi All,

I have a use case where I need to apply some filter (custom traits) 
while the Placement API fetches the resource providers for launching an instance.

That way I can have a list of resource providers which meet my 
condition/filter/validation. The validation is essentially about trust in 
the host (compute node) where I am going to launch the instances.

The below link says that it is possible, but I don't have an idea of how to 
implement/test this scenario.

https://specs.openstack.org/openstack/nova-specs/specs/ocata/implemented/resource-providers-scheduler-db-filters.html

we would rather make a HTTP call to the placement API on a specific REST 
resource with a request that would return the list of resource 
providers' UUIDs that would match requested resources and traits 
criteria based on the original RequestSpec object.


Unfortunately, you're going to need to wait for this to be possible with 
the placement API. We're making progress here, but it's not complete yet.


You won't be using a custom filter (or any filter at all in the 
nova-scheduler). Rather, you'll simply have the required trait in the 
image or flavor and nova-scheduler will ask placement API for all 
providers that have the required traits and requested resource amounts.


We're probably 3-4 weeks away from having this code merged.

Best,
-jay



Re: [Openstack] Traits is not working

2017-10-06 Thread Jay Pipes

On 10/06/2017 10:18 AM, Ramu, MohanX wrote:

Hi Jay,

I am able to create custom traits without any issue. I want to associate 
some value with that trait.


Like I mentioned in the previous email, that's not how traits work :)

A trait *is* the value that is associated with a resource provider.

Best,
-jay



Re: [Openstack] Traits is not working

2017-10-05 Thread Jay Pipes

On 10/05/2017 05:26 AM, Ramu, MohanX wrote:

Hi Jay,

I want to create a custom trait with a property value attached to it.

For example:

I want to have a CUSTOM_xyz trait with "status": "true".


That's not a trait :) That's a status indicator.

Traits are simple string tags that represent a single-valued thing or 
capability. A status is a multi-valued field.



When the CUSTOM_xyz trait is associated with a resource provider, I should 
be able to see whether the status value is true or not.


A trait is either associated with a resource provider or it isn't. When 
you do a call to `GET /resource_providers/{rp_uuid}/traits` what is 
returned is a list of the traits the resource provider with UUID 
{rp_uuid} has associated with it.



I referred to the below link to create custom traits, but was not able to create one.

https://specs.openstack.org/openstack/nova-specs/specs/pike/implemented/resource-provider-traits.html

PUT /resource_providers/{uuid}/traits

This API is to associate traits with specified resource provider. All the 
associated traits will be replaced by the traits specified in the request body. 
Nova-compute will report the compute node traits through this API.

The body of the request must match the following JSONSchema document:

{
 "type": "object",
 "properties": {
 "traits": {
 "type": "array",
 "items": CUSTOM_TRAIT
 },
 "resource_provider_generation": {
 "type": "integer"
 }
 },
 'required': ['traits', 'resource_provider_generation'],
 'additionalProperties': False
}


I suspect the issue you're having is that you need to create the custom 
trait first and *then* associate that trait with one or more resource 
providers.


To create the trait, do:

PUT /traits/CUSTOM_XYZ

and then associate it to a resource provider by doing:

PUT /resource_providers/{rp_uuid}/traits
{
  "resource_provider_generation": 1,
  "traits": [
 "CUSTOM_XYZ"
  ]
}
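
The same two calls as curl, for reference (a sketch; $PLACEMENT and $TOKEN are assumed to be set, and the microversion header matters, as noted elsewhere in this thread):

curl -X PUT -H "X-Auth-Token: $TOKEN" \
     -H "OpenStack-API-Version: placement 1.10" \
     $PLACEMENT/traits/CUSTOM_XYZ

curl -X PUT -H "X-Auth-Token: $TOKEN" \
     -H "OpenStack-API-Version: placement 1.10" \
     -H "Content-Type: application/json" \
     -d '{"resource_provider_generation": 1, "traits": ["CUSTOM_XYZ"]}' \
     $PLACEMENT/resource_providers/$RP_UUID/traits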

BTW, a great place to see examples of both good and bad API usage is to 
check out the Gabbit functional API tests for the placement API. Here is 
the set of tests for the traits functionality:


https://github.com/openstack/nova/blob/master/nova/tests/functional/api/openstack/placement/gabbits/traits.yaml

Best,
-jay



Thanks & Regards,

Mohan Ramu
-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com]
Sent: Wednesday, October 4, 2017 7:06 PM
To: Ramu, MohanX ; openstack@lists.openstack.org
Subject: Re: [Openstack] Traits is not working

Rock on :)

On 10/04/2017 09:33 AM, Ramu, MohanX wrote:

Thank you so much Jay. After adding this header, working fine.

-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com]
Sent: Tuesday, October 3, 2017 11:36 PM
To: openstack@lists.openstack.org
Subject: Re: [Openstack] Traits is not working

Against the Pike placement API endpoint, make sure you send the following HTTP 
header:

OpenStack-API-Version: placement 1.10

Best,
-jay

On 10/03/2017 02:01 PM, Ramu, MohanX wrote:

Please refer attached original one.


-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com]
Sent: Tuesday, October 3, 2017 10:03 PM
To: Ramu, MohanX ;
openstack@lists.openstack.org
Subject: Re: [Openstack] Traits is not working

On 10/03/2017 12:12 PM, Ramu, MohanX wrote:

Thanks for reply Jay.

No Jay,

I have installed Pike. There also I face the same problem.


No, you haven't installed Pike (or at least not properly). Otherwise, the 
max_version returned from the Pike placement API would be 1.10, not 1.4.

Best,
-jay


Thanks & Regards,

Mohan Ramu
-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com]
Sent: Tuesday, October 3, 2017 9:26 PM
To: openstack@lists.openstack.org
Subject: Re: [Openstack] Traits is not working

On 10/03/2017 11:34 AM, Ramu, MohanX wrote:

Hi,

We have implemented the OpenStack Ocata and Pike releases; we are able to
consume the Placement resource providers API, but not able to consume the
resource class APIs.

I tried to run the Traits API in the Pike setup too. I am not able to run
any Traits API.

As per the OpenStack doc, the Placement API URL is the base URL for
Traits also. I am able to run the Placement API as per the given doc, but
not able to run/access the Traits APIs. Getting 404 (Not Found error).


The /traits REST endpoint is part of the Placement API, yes.


As mentioned in the below link, the placement-manage os-traits sync
command is not working; it says command not found.


This means you have not installed (or updated) packages for Pike.


https://specs.openstack.org/openstack/nova-specs/specs/pike/approved/resource-provider-traits.html

Pike – Placement API version is 1.0 to 1.10

Ocata – Placement API version is 1.0 to 1.4 which support

We got 404 only; it seems there is a disconnect between Placement and
Traits. Need to understand whether we are missing any configuration.

Re: [Openstack] Traits is not working

2017-10-04 Thread Jay Pipes

Rock on :)

On 10/04/2017 09:33 AM, Ramu, MohanX wrote:

Thank you so much Jay. After adding this header, working fine.

-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com]
Sent: Tuesday, October 3, 2017 11:36 PM
To: openstack@lists.openstack.org
Subject: Re: [Openstack] Traits is not working

Against the Pike placement API endpoint, make sure you send the following HTTP 
header:

OpenStack-API-Version: placement 1.10

Best,
-jay

On 10/03/2017 02:01 PM, Ramu, MohanX wrote:

Please refer attached original one.


-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com]
Sent: Tuesday, October 3, 2017 10:03 PM
To: Ramu, MohanX ;
openstack@lists.openstack.org
Subject: Re: [Openstack] Traits is not working

On 10/03/2017 12:12 PM, Ramu, MohanX wrote:

Thanks for reply Jay.

No Jay,

I have installed Pike. There also I face the same problem.


No, you haven't installed Pike (or at least not properly). Otherwise, the 
max_version returned from the Pike placement API would be 1.10, not 1.4.

Best,
-jay


Thanks & Regards,

Mohan Ramu
-Original Message-----
From: Jay Pipes [mailto:jaypi...@gmail.com]
Sent: Tuesday, October 3, 2017 9:26 PM
To: openstack@lists.openstack.org
Subject: Re: [Openstack] Traits is not working

On 10/03/2017 11:34 AM, Ramu, MohanX wrote:

Hi,

We have implemented the OpenStack Ocata and Pike releases; we are able to
consume the Placement resource providers API, but not able to consume the
resource class APIs.

I tried to run the Traits API in the Pike setup too. I am not able to run
any Traits API.

As per the OpenStack doc, the Placement API URL is the base URL for
Traits also. I am able to run the Placement API as per the given doc, but
not able to run/access the Traits APIs. Getting 404 (Not Found error).


The /traits REST endpoint is part of the Placement API, yes.


As mentioned in the below link, the placement-manage os-traits sync
command is not working; it says command not found.


This means you have not installed (or updated) packages for Pike.


https://specs.openstack.org/openstack/nova-specs/specs/pike/approved/resource-provider-traits.html

Pike – Placement API version is 1.0 to 1.10

Ocata – Placement API version is 1.0 to 1.4 which support

We got 404 only; it seems there is a disconnect between Placement and
Traits. Need to understand whether we are missing any configuration.


You do not have Pike installed. You have Ocata installed. You need to upgrade 
to Pike.

Best,
-jay



Re: [Openstack] Traits is not working

2017-10-03 Thread Jay Pipes

On 10/03/2017 12:12 PM, Ramu, MohanX wrote:

Thanks for the reply, Jay.

No Jay,

I have installed Pike. There I also face the same problem.


No, you haven't installed Pike (or at least not properly). Otherwise, 
the max_version returned from the Pike placement API would be 1.10, not 1.4.


Best,
-jay


Thanks & Regards,

Mohan Ramu
-Original Message-----
From: Jay Pipes [mailto:jaypi...@gmail.com]
Sent: Tuesday, October 3, 2017 9:26 PM
To: openstack@lists.openstack.org
Subject: Re: [Openstack] Traits is not working

On 10/03/2017 11:34 AM, Ramu, MohanX wrote:

Hi,

We have implemented the OpenStack Ocata and Pike releases, and are able to consume
the Placement resource providers API, but not able to consume the resource class APIs.

I tried to run the Traits API in the Pike setup too. I am not able to run any
Traits API.

As per the OpenStack doc, the Placement API URL is a base URL for
Traits also. I am able to run the Placement API as per the given doc, but not
able to run/access the Traits APIs. Getting a 404 (Not Found) error.


The /traits REST endpoint is part of the Placement API, yes.


As mentioned in the link below, the placement-manage os-traits sync
command is not working; it says that the command is not found.


This means you have not installed (or updated) packages for Pike.


https://specs.openstack.org/openstack/nova-specs/specs/pike/approved/resource-provider-traits.html
Pike – Placement API version is 1.0 to 1.10

Ocata – Placement API version is 1.0 to 1.4 which support

We got 404 only. It seems there is a disconnect between Placement and
Traits. We need to understand whether we are missing any configuration.


You do not have Pike installed. You have Ocata installed. You need to upgrade 
to Pike.

Best,
-jay

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack



___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Traits is not working

2017-10-03 Thread Jay Pipes

On 10/03/2017 11:34 AM, Ramu, MohanX wrote:

Hi,

We have implemented the OpenStack Ocata and Pike releases, and are able to consume
the Placement resource providers API, but not able to consume the resource class APIs.


I tried to run the Traits API in the Pike setup too. I am not able to run any
Traits API.


As per the OpenStack doc, the Placement API URL is a base URL for
Traits also. I am able to run the Placement API as per the given doc, but not
able to run/access the Traits APIs. Getting a 404 (Not Found) error.


The /traits REST endpoint is part of the Placement API, yes.

As mentioned in the link below, the placement-manage os-traits sync command
is not working; it says that the command is not found.


This means you have not installed (or updated) packages for Pike.


https://specs.openstack.org/openstack/nova-specs/specs/pike/approved/resource-provider-traits.html

Pike – Placement API version is 1.0 to 1.10

Ocata – Placement API version is 1.0 to 1.4, which is what our endpoint supports

We got 404 only. It seems there is a disconnect between Placement and
Traits. We need to understand whether we are missing any configuration.


You do not have Pike installed. You have Ocata installed. You need to 
upgrade to Pike.


Best,
-jay

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] extend attached volumes

2017-09-26 Thread Jay Pipes

On 09/26/2017 10:20 AM, Volodymyr Litovka wrote:

Hi Jay,

I know about this way :-) but Pike introduced the ability to resize attached
volumes:


"It is now possible to signal and perform an online volume size change 
as of the 2.51 microversion using the volume-extended external event.
Nova will perform the volume extension so the host can detect its new 
size. It will also resize the device in QEMU so instance can detect the 
new disk size without rebooting." -- 
https://docs.openstack.org/releasenotes/nova/pike.html


Apologies, Volodymyr, I wasn't aware of that ability!
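
For the archives, a minimal sketch of the new online path (assuming a
Pike-era python-cinderclient; volume API microversion 3.42 is, as far as
I know, the one that allows extending in-use volumes):

$ cinder --os-volume-api-version 3.42 extend <volume-id> <new-size-gb>

Nova then receives the volume-extended external event (compute API
microversion 2.51) and resizes the device in QEMU, so the guest sees the
new size without a reboot.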

Best,
-jay

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] extend attached volumes

2017-09-26 Thread Jay Pipes

Detach the volume, then resize it, then re-attach.
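
A minimal sketch of that sequence with the openstack CLI (<server> and
<volume> are placeholders):

$ openstack server remove volume <server> <volume>
$ openstack volume set --size <new-size-gb> <volume>
$ openstack server add volume <server> <volume>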

Best,
-jay

On 09/26/2017 09:22 AM, Volodymyr Litovka wrote:

Colleagues,

I can't find a way to resize an attached volume. I'm on Pike.

As far as I understand, it needs to be supported in Nova, because
Cinder needs to check with Nova whether it's possible to extend this volume.


Well,
- Nova's API microversion is 2.51, which seems to be enough to support 
"volume-extended" API call
- Properties of image are *hw_disk_bus='scsi'* and 
*hw_scsi_model='virtio-scsi'*, type bare/raw, located in Cinder

- hypervisor is KVM
- volume is bootable, mounted as root, created as snapshot from Cinder 
volume

- Cinder's backend is CEPH/Bluestore

and both "cinder extend" and "openstack volume set --size" returns 
"Volume status must be '{'status': 'available'}' to extend, currently 
in-use".


I did not find any configuration options in either the nova or cinder
config files which could help with this functionality.


What I'm doing wrong?

Thank you.

--
Volodymyr Litovka
   "Vision without Execution is Hallucination." -- Thomas Edison



___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack



___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] [OpenStack] Can Mitaka RamFilter support free hugepages?

2017-09-06 Thread Jay Pipes

Sahid, Stephen, what are your thoughts on this?

On 09/06/2017 10:17 PM, Yaguang Tang wrote:
I think the fact that RamFilter can't deal with huge pages is a bug;
due to this limit, we have to strike a balance between normal memory and
huge pages to use RamFilter and NUMATopologyFilter. What do you think, Jay?



On Wed, Sep 6, 2017 at 9:22 PM, Jay Pipes <jaypi...@gmail.com> wrote:


On 09/06/2017 01:21 AM, Weichih Lu wrote:

Thanks for your response.

Is this mean if I want to create an instance with flavor: 16G
memory (hw:mem_page_size=large), I need to preserve memory more
than 16GB ?
This instance consume hugepages resource.


You need to reserve fewer 1GB huge pages than 50 if you want to
launch a 16GB instance on a host with 64GB of RAM. Try reserving 32
1GB huge pages.

Best,
-jay

2017-09-06 1:47 GMT+08:00 Jay Pipes <jaypi...@gmail.com>:


 Please remember to add a topic [nova] marker to your
subject line.
 Answer below.

 On 09/05/2017 04:45 AM, Weichih Lu wrote:

 Dear all,

 I have a compute node with 64GB ram. And I set 50
hugepages with
 1GB hugepage size. I used command "free", it shows free
memory
 is about 12GB. And free hugepages is 50.


 Correct. By assigning hugepages, you use the memory
allocated to the
 hugepages.

 Then I launch an instance with 16GB memory, set flavor
tag :
 hw:mem_page_size=large. It show Error: No valid host
was found.
 There are not enough hosts available.


 Right, because you have only 12G of RAM available after
 creating/allocating 50G out of your 64G.

 Huge pages are entirely separate from the normal memory that a
 flavor consumes. The 16GB memory in your flavor is RAM
consumed on
 the host. The huge pages are individual things that are
consumed by
 the NUMA topology that your instance will take. RAM != huge
pages.
 Totally different things.

   And I check nova-scheduler log. My

 compute is removed by RamFilter. I can launch an
instance with
 8GB memory successfully, or I can launch an instance
with 16GB
memory successfully by removing RamFilter.


 That's because RamFilter doesn't deal with huge pages.
Because huge
 pages are a different resource than memory. The page itself
is the
 resource.

 The NUMATopologyFilter is the scheduler filter that
evaluates the
 huge page resources on a compute host and determines if the
there
 are enough *pages* available for the instance. Note that I say
 *pages* because the unit of resource consumption for huge
pages is
 not MB of RAM. It's a single memory page.

 Please read this excellent article by Steve Gordon for
information
 on what NUMA and huge pages are and how to use them in Nova:


http://redhatstackblog.redhat.com/2015/09/15/driving-in-the-fast-lane-huge-page-support-in-openstack-compute/

 Best,
 -jay

 Does RamFilter only check free memory but not free
hugepages?
 How can I solve this problem?

 I use openstack mitaka version.

 thanks

 WeiChih, Lu.

 Best Regards.


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Re: [Openstack] [OpenStack] Can Mitaka RamFilter support free hugepages?

2017-09-06 Thread Jay Pipes

On 09/06/2017 01:21 AM, Weichih Lu wrote:

Thanks for your response.

Is this mean if I want to create an instance with flavor: 16G memory 
(hw:mem_page_size=large), I need to preserve memory more than 16GB ?

This instance consume hugepages resource.


You need to reserve fewer 1GB huge pages than 50 if you want to launch a 
16GB instance on a host with 64GB of RAM. Try reserving 32 1GB huge pages.
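
To be concrete, one way to change the reservation (a sketch assuming a
GRUB-based host; adjust to your distro's boot configuration workflow):

# in /etc/default/grub, then regenerate the grub config and reboot:
GRUB_CMDLINE_LINUX="... default_hugepagesz=1G hugepagesz=1G hugepages=32"

# verify after reboot:
$ grep -i hugepages /proc/meminfo

That leaves 64 - 32 = 32GB of normal RAM for host processes and for the
instance RAM that RamFilter accounts for.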


Best,
-jay

2017-09-06 1:47 GMT+08:00 Jay Pipes <jaypi...@gmail.com>:


Please remember to add a topic [nova] marker to your subject line.
Answer below.

On 09/05/2017 04:45 AM, Weichih Lu wrote:

Dear all,

I have a compute node with 64GB RAM. And I set 50 hugepages with
1GB hugepage size. I used command "free", it shows free memory
is about 12GB. And free hugepages is 50.


Correct. By assigning hugepages, you use the memory allocated to the
hugepages.

Then I launch an instance with 16GB memory, set flavor tag :
hw:mem_page_size=large. It show Error: No valid host was found.
There are not enough hosts available.


Right, because you have only 12G of RAM available after
creating/allocating 50G out of your 64G.

Huge pages are entirely separate from the normal memory that a
flavor consumes. The 16GB memory in your flavor is RAM consumed on
the host. The huge pages are individual things that are consumed by
the NUMA topology that your instance will take. RAM != huge pages.
Totally different things.

  And I check nova-scheduler log. My

compute is removed by RamFilter. I can launch an instance with
8GB memory successfully, or I can launch an instance with 16GB
memory successfully by removing RamFilter.


That's because RamFilter doesn't deal with huge pages. Because huge
pages are a different resource than memory. The page itself is the
resource.

The NUMATopologyFilter is the scheduler filter that evaluates the
huge page resources on a compute host and determines if there
are enough *pages* available for the instance. Note that I say
*pages* because the unit of resource consumption for huge pages is
not MB of RAM. It's a single memory page.

Please read this excellent article by Steve Gordon for information
on what NUMA and huge pages are and how to use them in Nova:


http://redhatstackblog.redhat.com/2015/09/15/driving-in-the-fast-lane-huge-page-support-in-openstack-compute/

Best,
-jay

Does RamFilter only check free memory but not free hugepages?
How can I solve this problem?

I use openstack mitaka version.

thanks

WeiChih, Lu.

Best Regards.






___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack



___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] [OpenStack] Can Mitaka RamFilter support free hugepages?

2017-09-05 Thread Jay Pipes
Please remember to add a topic [nova] marker to your subject line. 
Answer below.


On 09/05/2017 04:45 AM, Weichih Lu wrote:

Dear all,

I have a compute node with 64GB RAM. And I set 50 hugepages with 1GB
hugepage size. I used the command "free"; it shows free memory is about
12GB. And free hugepages is 50.


Correct. By assigning hugepages, you use the memory allocated to the 
hugepages.


Then I launch an instance with 16GB memory, with the flavor tag
hw:mem_page_size=large set. It shows Error: No valid host was found. There
are not enough hosts available.


Right, because you have only 12G of RAM available after 
creating/allocating 50G out of your 64G.


Huge pages are entirely separate from the normal memory that a flavor 
consumes. The 16GB memory in your flavor is RAM consumed on the host. 
The huge pages are individual things that are consumed by the NUMA 
topology that your instance will take. RAM != huge pages. Totally 
different things.
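
For example, a flavor like the following (names are made up) consumes
16GB of RAM *and* requires large pages for the instance's NUMA topology:

$ openstack flavor create --ram 16384 --vcpus 4 --disk 20 m1.hugepages
$ openstack flavor set m1.hugepages --property hw:mem_page_size=large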


And I check the nova-scheduler log. My
compute node is removed by RamFilter. I can launch an instance with 8GB
memory successfully, or I can launch an instance with 16GB memory
successfully by removing RamFilter.


That's because RamFilter doesn't deal with huge pages. Because huge 
pages are a different resource than memory. The page itself is the resource.


The NUMATopologyFilter is the scheduler filter that evaluates the huge 
page resources on a compute host and determines if there are enough
*pages* available for the instance. Note that I say *pages* because the 
unit of resource consumption for huge pages is not MB of RAM. It's a 
single memory page.
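
As a sketch, the Mitaka-era option in nova.conf on the scheduler looks
like this (the exact default filter list varies by release):

[DEFAULT]
scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,NUMATopologyFilter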


Please read this excellent article by Steve Gordon for information on 
what NUMA and huge pages are and how to use them in Nova:


http://redhatstackblog.redhat.com/2015/09/15/driving-in-the-fast-lane-huge-page-support-in-openstack-compute/

Best,
-jay


Does RamFilter only check free memory but not free hugepages?
How can I solve this problem?

I use openstack mitaka version.

thanks

WeiChih, Lu.

Best Regards.


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack



___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Ocata The placement API endpoint not found on Ubuntu

2017-08-21 Thread Jay Pipes

On 08/18/2017 08:50 AM, Divneet Singh wrote:
Hello, I have been trying to install Ocata on Ubuntu 16.04; for the time
being I have 2 nodes. I just can't figure this out.


I have set up the Placement API, but get an error after restarting the
nova service or rebooting:


" 017-08-18 08:27:41.496 1422 WARNING nova.scheduler.client.report 
[req-17911703-827e-402d-85e8-a0bb25003fe3 - - - - -] The placement API 
endpoint not found. Placement is optional in Newton, but required in 
Ocata. Please enable the placement service before upgrading.  "


And on the controller node, when I run the command:
openstack@controller:~$ sudo nova-status upgrade check
+---------------------------------------------------------------------+
| Upgrade Check Results                                               |
+---------------------------------------------------------------------+
| Check: Cells v2                                                     |
| Result: Success                                                     |
| Details: None                                                       |
+---------------------------------------------------------------------+
| Check: Placement API                                                |
| Result: Failure                                                     |
| Details: Placement API endpoint not found.                          |
+---------------------------------------------------------------------+
| Check: Resource Providers                                           |
| Result: Warning                                                     |
| Details: There are no compute resource providers in the Placement   |
|   service but there are 1 compute nodes in the deployment.          |
|   This means no compute nodes are reporting into the                |
|   Placement service and need to be upgraded and/or fixed.           |
|   See http://docs.openstack.org/developer/nova/placement.html       |
|   for more details.                                                 |
+---------------------------------------------------------------------+

I followed the Ocata guide given in the documentation to the letter.

After feedback I got, and just to make sure the placement service is
configured in the service catalog:

$ openstack catalog show placement
+-----------+-------------------------------------+
| Field     | Value                               |
+-----------+-------------------------------------+
| endpoints | RegionOne                           |
|           |   admin: http://controller:8778     |
|           | RegionOne                           |
|           |   public: http://controller:8778    |
|           | RegionOne                           |
|           |   internal: http://controller:8778  |
|           |                                     |
| id        | 825f1a56d9a4438d9f54d893a7b227c0    |
| name      | placement                           |
| type      | placement                           |
+-----------+-------------------------------------+

$ export TOKEN=$(openstack token issue -f value -c id)
$ curl -H "x-auth-token: $TOKEN" $PLACEMENT
{"versions": [{"min_version": "1.0", "max_version": "1.4", "id": "v1.0"}]}

I think this means that the Placement service is configured correctly.

Do I need to configure a web server on the compute node?


No, you definitely do not need to configure a web server on the compute 
node.


My guess is that the [keystone_authtoken] section of your nova.conf file 
on either or both of the controller and compute nodes is not correct or 
doesn't match what you have in your rc file for the openstack client.


The nova-status command and the service daemons in Nova do not get their 
connection information from the rc file that the openstack client uses. 
Instead, they look in the [keystone_authtoken] section of the nova.conf 
files.


So, make sure that your [keystone_authtoken] section of nova.conf files 
contain proper information according to this documentation:


https://docs.openstack.org/ocata/config-reference/compute/nova-conf-samples.html
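
As a rough sketch (every value below is a placeholder to be replaced with
your deployment's own):

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = NOVA_PASS
memcached_servers = controller:11211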

Best,
-jay

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] How do you manage your openstack utilisation?

2017-07-24 Thread Jay Pipes

On 07/23/2017 07:51 PM, Manuel Sopena Ballesteros wrote:

Dear Openstack community,

We are a medical research institute and we have been running HPC for
many years. We started playing with OpenStack a few months ago and we
like its flexibility for deploying multiple environments. However, we are
quite concerned about resource utilization. What I mean is that in
HPC the resources are released for the rest of the community once a job
has finished, whereas a VM keeps the resources for the owner of the VM
until the instance is killed.


I would like to ask: how do you organize the resources used by
OpenStack to maximize utilization across the organization?


Thank you very much


Hi Manuel,

You may wish to check out the Blazar project or get in touch with their 
contributor team:


https://wiki.openstack.org/wiki/Blazar

Blazar adds a reservable dimension to Nova's compute service and there 
is a notion of something that "cleans up" VMs after their reservation is 
expired. It may fit your needs here.


Best,
-jay

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] [nova] Database not delete PCI info after device is removed from host and nova.conf

2017-07-10 Thread Jay Pipes
Unfortunately, Eddie, I'm not entirely sure what is going on with your 
situation. According to the code, the non-existing PCI device should be 
removed from the pci_devices table when the PCI manager notices the PCI 
device is no longer on the local host...


On 07/09/2017 08:36 PM, Eddie Yen wrote:

Hi there,

Does the information already enough or need additional items?

Thanks,
Eddie.

2017-07-07 10:49 GMT+08:00 Eddie Yen <mailto:missile0...@gmail.com>>:


Sorry,

Re-new the nova-compute log after remove "1002:68c8" and restart
nova-compute.
http://paste.openstack.org/show/qUCOX09jyeMydoYHc8Oz/

2017-07-07 10:37 GMT+08:00 Eddie Yen mailto:missile0...@gmail.com>>:

Hi Jay,

Below are few logs and information you may want to check.



I wrote GPU inforamtion into nova.conf like this.

pci_passthrough_whitelist = [{ "product_id":"0ff3",
"vendor_id":"10de"}, { "product_id":"68c8", "vendor_id":"1002"}]

pci_alias = [{ "product_id":"0ff3", "vendor_id":"10de",
"device_type":"type-PCI", "name":"k420"}, { "product_id":"68c8",
"vendor_id":"1002", "device_type":"type-PCI", "name":"v4800"}]


Then restart the services.

nova-compute log when insert new GPU device info into nova.conf
and restart service:
http://paste.openstack.org/show/z015rYGXaxYhVoafKdbx/

Strangely, the log shows that the resource tracker only collects
information for the newly added GPU, not the old one.


But if I do some actions on the instance containing the old GPU, the
tracker will get both GPUs.
http://paste.openstack.org/show/614658/

The Nova database shows correct information for both GPUs
http://paste.openstack.org/show/8JS0i6BMitjeBVRJTkRo/



Now remove ID "1002:68c8" from nova.conf and compute node, and
restart services.

The pci_passthrough_whitelist and pci_alias only keep
"10de:0ff3" GPU info.

pci_passthrough_whitelist = { "product_id":"0ff3",
"vendor_id":"10de" }

pci_alias = { "product_id":"0ff3", "vendor_id":"10de",
"device_type":"type-PCI", "name":"k420" }


The nova-compute log shows the resource tracker reporting that the node
only has the "10de:0ff3" PCI resource
http://paste.openstack.org/show/VjLinsipne5nM8o0TYcJ/

But in the Nova database, "1002:68c8" still exists and stays in
"Available" status, even though the "deleted" value shows not zero.
http://paste.openstack.org/show/SnJ8AzJYD6wCo7jslIc2/


Many thanks,
Eddie.

2017-07-07 9:05 GMT+08:00 Eddie Yen mailto:missile0...@gmail.com>>:

Uh wait,

Is it possible it still shows available if a PCI device
still exists at the same address?

Because when I removed the GPU card, I replaced it with an SFP+
network card in the same slot.
So when I type lspci the SFP+ card stay in the same address.

But it still doesn't make any sense because these two cards are
definitely not the same VID:PID.
And I set the information as VID:PID in nova.conf


I'll try reproduce this issue and put a log on this list.

Thanks,

2017-07-07 9:01 GMT+08:00 Jay Pipes mailto:jaypi...@gmail.com>>:

Hmm, very odd indeed. Any way you can save the
nova-compute logs from when you removed the GPU and
        restarted the nova-compute service and paste those logs
to paste.openstack.org?
Would be useful in tracking down this buggy behaviour...

Best,
-jay

On 07/06/2017 08:54 PM, Eddie Yen wrote:

Hi Jay,

The status of the "removed" GPU still shows as
"Available" in pci_devices table.

2017-07-07 8:34 GMT+08:00 Jay Pipes <jaypi...@gmail.com>:

Re: [Openstack] get_diagnostics runs on shutdown instances, and raises exception.

2017-07-07 Thread Jay Pipes

On 07/07/2017 01:37 PM, Peter Doherty wrote:
Thanks.  I wrongfully assumed it was being run automatically, so with 
that out of my mind, it didn't take too long to figure out what was 
triggering that.  I'm running the Datadog agent, which is the source.  
It generated enough noise in a week I ended up with a million rows in 
the nova.instance_fault table, and the memory footprint of nova-api got 
very large, all of which resulted in multi-minute responses to instance 
list queries.


Heh, yes, that performance issue when doing a list instances with a 
large instance_faults table has come up before. We fixed that in Ocata, 
though:


https://bugs.launchpad.net/nova/+bug/1632247
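
If anyone wants to check whether they are affected before upgrading, a
quick count (assuming MySQL and the default nova database name):

$ mysql nova -e "SELECT COUNT(*) FROM instance_faults WHERE deleted = 0;"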

I can open a bug report about the log messages.  I think it may be in 
the nova/compute/manager.py code, which doesn't seem to gracefully know 
what to do if get_diagnostics is called on an instance that isn't
running, and results in a lot of useless rows in the instance_fault table.


    @wrap_instance_fault
    def get_diagnostics(self, context, instance):
        """Retrieve diagnostics for an instance on this host."""
        current_power_state = self._get_power_state(context, instance)
        if current_power_state == power_state.RUNNING:
            LOG.info(_LI("Retrieving diagnostics"), context=context,
                     instance=instance)
            return self.driver.get_diagnostics(instance)
        else:
            raise exception.InstanceInvalidState(
                attr='power_state',
                instance_uuid=instance.uuid,
                state=instance.power_state,
                method='get_diagnostics')


Yep, that's the code that emits the exception. We should be just 
returning an error to the user instead of raising an exception. And, we 
should not be adding a record to the instance_faults table (which is 
what that @wrap_instance_fault decorator does when it sees an exception 
raised like that).


If you could create a bug on LP for that, I'd very much appreciate it.

All the best,
-jay


Thanks Jay!

-Peter

On Fri, Jul 7, 2017 at 12:50 PM, Jay Pipes <jaypi...@gmail.com> wrote:


On 07/07/2017 12:30 PM, Peter Doherty wrote:

Hi,

If I'm interpreting this correctly, nova compute is calling
get_diagnostics on all instances, including ones currently in a
shutdown state.  And then it throws an exception, and adds an
entry into the instance_faults table in the database.

nova-compute logs this message:

2017-07-07 16:29:46.184 23077 ERROR
oslo_messaging.rpc.dispatcher Traceback (most recent call last):
2017-07-07 16:29:46.184 23077 ERROR
oslo_messaging.rpc.dispatcher   File
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py",
line 142, in _dispatch_and_reply
2017-07-07 16:29:46.184 23077 ERROR
oslo_messaging.rpc.dispatcher executor_callback))
2017-07-07 16:29:46.184 23077 ERROR
oslo_messaging.rpc.dispatcher   File
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py",
line 186, in _dispatch
2017-07-07 16:29:46.184 23077 ERROR
oslo_messaging.rpc.dispatcher executor_callback)
2017-07-07 16:29:46.184 23077 ERROR
oslo_messaging.rpc.dispatcher   File
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py",
line 129, in _do_dispatch
2017-07-07 16:29:46.184 23077 ERROR
oslo_messaging.rpc.dispatcher result = func(ctxt, **new_args)
2017-07-07 16:29:46.184 23077 ERROR
oslo_messaging.rpc.dispatcher   File
"/usr/lib/python2.7/site-packages/nova/exception.py", line 89,
in wrapped
2017-07-07 16:29:46.184 23077 ERROR
oslo_messaging.rpc.dispatcher payload)
2017-07-07 16:29:46.184 23077 ERROR
oslo_messaging.rpc.dispatcher   File
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line
195, in __exit__
2017-07-07 16:29:46.184 23077 ERROR
oslo_messaging.rpc.dispatcher six.reraise(self.type_,
self.value, self.tb)
2017-07-07 16:29:46.184 23077 ERROR
oslo_messaging.rpc.dispatcher   File
"/usr/lib/python2.7/site-packages/nova/exception.py", line 72,
in wrapped
2017-07-07 16:29:46.184 23077 ERROR
oslo_messaging.rpc.dispatcher return f(self, context, *args,
**kw)
2017-07-07 16:29:46.184 23077 ERROR
oslo_messaging.rpc.dispatcher   File
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line
378, in decorated_function
2017-07-07 16:29:46.184 23077 ERROR
oslo_messaging.rpc.dispatcher kwargs['instance'], e,
sys.exc_info())

Re: [Openstack] get_diagnostics runs on shutdown instances, and raises exception.

2017-07-07 Thread Jay Pipes

On 07/07/2017 12:30 PM, Peter Doherty wrote:

Hi,

If I'm interpreting this correctly, nova compute is calling 
get_diagnostics on all instances, including ones currently in a shutdown 
state.  And then it throws an exception, and adds an entry into the 
instance_faults table in the database.


nova-compute logs this message:

2017-07-07 16:29:46.184 23077 ERROR oslo_messaging.rpc.dispatcher 
Traceback (most recent call last):
2017-07-07 16:29:46.184 23077 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", 
line 142, in _dispatch_and_reply
2017-07-07 16:29:46.184 23077 ERROR oslo_messaging.rpc.dispatcher 
executor_callback))
2017-07-07 16:29:46.184 23077 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", 
line 186, in _dispatch
2017-07-07 16:29:46.184 23077 ERROR oslo_messaging.rpc.dispatcher 
executor_callback)
2017-07-07 16:29:46.184 23077 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", 
line 129, in _do_dispatch
2017-07-07 16:29:46.184 23077 ERROR oslo_messaging.rpc.dispatcher 
result = func(ctxt, **new_args)
2017-07-07 16:29:46.184 23077 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/exception.py", line 89, in wrapped
2017-07-07 16:29:46.184 23077 ERROR oslo_messaging.rpc.dispatcher 
payload)
2017-07-07 16:29:46.184 23077 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 195, in 
__exit__
2017-07-07 16:29:46.184 23077 ERROR oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2017-07-07 16:29:46.184 23077 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/exception.py", line 72, in wrapped
2017-07-07 16:29:46.184 23077 ERROR oslo_messaging.rpc.dispatcher 
return f(self, context, *args, **kw)
2017-07-07 16:29:46.184 23077 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 378, in 
decorated_function
2017-07-07 16:29:46.184 23077 ERROR oslo_messaging.rpc.dispatcher 
kwargs['instance'], e, sys.exc_info())
2017-07-07 16:29:46.184 23077 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 195, in 
__exit__
2017-07-07 16:29:46.184 23077 ERROR oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2017-07-07 16:29:46.184 23077 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 366, in 
decorated_function
2017-07-07 16:29:46.184 23077 ERROR oslo_messaging.rpc.dispatcher 
return function(self, context, *args, **kwargs)
2017-07-07 16:29:46.184 23077 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 4089, 
in get_diagnostics
2017-07-07 16:29:46.184 23077 ERROR oslo_messaging.rpc.dispatcher 
method='get_diagnostics')


2017-07-07 16:30:10.017 23077 ERROR oslo_messaging.rpc.dispatcher 
InstanceInvalidState: Instance 6ab60005-ccbf-4bc2-95ac-7daf31716754 in 
power_state 4. Cannot get_diagnostics while the instance is in this state.


I don't think it should be trying to gather diags on shutdown instances, 
and if it did, it shouldn't just create a never-ending stream of errors.
If anyone has any info on if this might be a bug that is fixed in the 
latest release, or if I can turn off this behavior, it would be appreciated.


get_diagnostics() doesn't run automatically. Something is triggering a 
call to get_diagnostics() for each instance on the box (the internal 
compute manager only has a get_diagnostics(instance) call that takes one 
instance at a time). Not sure what is triggering that...


I agree with you that ERRORs shouldn't be spewed into the nova-compute 
logs like the above, though. That should be fixed. Would you mind 
submitting a bug for that on Launchpad, Peter?


Thank you!
-jay

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] [nova] Database not delete PCI info after device is removed from host and nova.conf

2017-07-06 Thread Jay Pipes
Hmm, very odd indeed. Any way you can save the nova-compute logs from 
when you removed the GPU and restarted the nova-compute service and 
paste those logs to paste.openstack.org? Would be useful in tracking 
down this buggy behaviour...


Best,
-jay

On 07/06/2017 08:54 PM, Eddie Yen wrote:

Hi Jay,

The status of the "removed" GPU still shows as "Available" in 
pci_devices table.


2017-07-07 8:34 GMT+08:00 Jay Pipes <jaypi...@gmail.com>:


Hi again, Eddie :) Answer inline...

On 07/06/2017 08:14 PM, Eddie Yen wrote:

Hi everyone,

I'm using OpenStack Mitaka version (deployed from Fuel 9.2)

In present, I installed two different model of GPU card.

And wrote these information into pci_alias and
pci_passthrough_whitelist in nova.conf on Controller and Compute
(the node which installed GPU).
Then restart nova-api, nova-scheduler,and nova-compute.

When I check database, both of GPU info registered in
pci_devices table.

Now I removed one of the GPUs from the compute node, removed the
information from nova.conf, then restarted services.

But when I check the database again, the information for the removed card
still exists in the pci_devices table.

How can I fix this problem?


So, when you removed the GPU from the compute node and restarted the
nova-compute service, it *should* have noticed you had removed the
GPU and marked that PCI device as deleted. At least, according to
this code in the PCI manager:

https://github.com/openstack/nova/blob/master/nova/pci/manager.py#L168-L183

Question for you: what is the value of the status field in the
pci_devices table for the GPU that you removed?

Best,
-jay

p.s. If you really want to get rid of that device, simply remove
that record from the pci_devices table. But, again, it *should* be
removed automatically...

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack




___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] [nova] Database not delete PCI info after device is removed from host and nova.conf

2017-07-06 Thread Jay Pipes

Hi again, Eddie :) Answer inline...

On 07/06/2017 08:14 PM, Eddie Yen wrote:

Hi everyone,

I'm using OpenStack Mitaka version (deployed from Fuel 9.2)

In present, I installed two different model of GPU card.

And I wrote this information into pci_alias and pci_passthrough_whitelist
in nova.conf on the Controller and Compute (the node with the GPUs installed).

Then restart nova-api, nova-scheduler,and nova-compute.

When I check database, both of GPU info registered in pci_devices table.

Now I removed one of the GPUs from the compute node, removed the
information from nova.conf, then restarted services.


But when I check the database again, the information for the removed card
still exists in the pci_devices table.


How can I fix this problem?


So, when you removed the GPU from the compute node and restarted the 
nova-compute service, it *should* have noticed you had removed the GPU 
and marked that PCI device as deleted. At least, according to this code 
in the PCI manager:


https://github.com/openstack/nova/blob/master/nova/pci/manager.py#L168-L183

Question for you: what is the value of the status field in the 
pci_devices table for the GPU that you removed?
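
Something along these lines against the nova database would show it
(assuming MySQL and the default database name; '68c8' is the product ID
from earlier in this thread):

$ mysql nova -e "SELECT address, vendor_id, product_id, status, deleted \
    FROM pci_devices WHERE product_id = '68c8';"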


Best,
-jay

p.s. If you really want to get rid of that device, simply remove that 
record from the pci_devices table. But, again, it *should* be removed 
automatically...


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] [OpenStack] [nova] How can I release PCI device resources without delete instance?

2017-07-06 Thread Jay Pipes

On 07/06/2017 02:17 AM, Eddie Yen wrote:

Hi, now I got another problem.

I have two models of GPU devices and I set both in pci_alias and 
pci_passthrough_whitelist on controller and compute node (with these two
GPUs).
Now I removed one of GPU and delete its data in nova.conf, then restart 
nova-api, nova-scheduler (Controller) and nova-compute(Compute)


But when I check MySQL, the GPU info which I already removed still in 
pci_devices table.

I remember there's a bug about this case, but it was already fixed.

How can I fix this issue?


Eddie, please start a new mailing list thread (with a new targeted 
topic) for the above and we'll answer it there to keep the mailing list 
threads properly curated.


Best,
-jay

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] [OpenStack] [nova] How can I release PCI device resources without delete instance?

2017-07-05 Thread Jay Pipes

On 07/05/2017 04:18 AM, Eddie Yen wrote:

Hi everyone,

I'm using OpenStack Mitaka (deployed from Fuel 9.2) and doing GPU
things.


The problem I have is that I need to delete the current instance containing
a GPU to release the device if I want to assign the GPU to another new
instance temporarily.


I get "No valid host was found" if I create a new instance with the GPU
flavor without deleting the present instance, even if that instance is shut
down.


Is there any way to just shut down the instance rather than delete it to
release the GPU device?


As Dinesh mentioned you *can* use shelve for this, but frankly, I think 
the shelve API leads to more problems than it solves (see his response 
about needing to delete the new instance before unshelving).
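
For reference, the shelve workflow is roughly this (placeholder server
name):

$ openstack server shelve <server>    # offloads the instance, freeing the GPU
$ openstack server unshelve <server>  # later, schedules the instance again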


I'd recommend redesigning your application to be more cloud-native. In 
other words, separate operating system state from application state, use 
volumes for all persistent application state, and do not rely on a 
persistent IP address. [1]


Once you've done that, you can just treat your VMs like cattle and 
terminate them.


Best,
-jay

[1] Please note I did not use the word "container" in this description 
of cloud-native application.


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] [openstack][nova] Changes-Since parameter in Nova API not working as expected

2017-06-27 Thread Jay Pipes

Awesome, thanks Jose!

On 06/26/2017 11:12 PM, Jose Renato Santos wrote:

Jay

I created a bug report as you suggested:
https://bugs.launchpad.net/nova/+bug/1700684

Thanks for your help
Best
Renato

-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com]
Sent: Monday, June 26, 2017 2:32 PM
To: Jose Renato Santos ; 
openstack@lists.openstack.org
Subject: Re: [Openstack] [openstack][nova] Changes-Since parameter in Nova API 
not working as expected

On 06/26/2017 02:27 PM, Jose Renato Santos wrote:

Jay,
Thanks for your response

Let me clarify my point.
I am not expecting to see a change in the updated_at column of a server when 
the rules of its security group changes.
I agree that would be a change to be handled by the Neutron Api, and
would be too much to ask for Nova to keep track of that But I would expect to 
see a change in updated_at column of a server instance when I 
associated(attach) a new security group to that server.
For me that is a change in the server and not on the security group.
The security group was not changed, but the server was, as it is now associated 
with a different set of security groups I hope that clarifies my question.


I think that's a pretty reasonable request actually. Care to create a bug on 
Launchpad for it?

Best,
-jay



___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] [openstack][nova] Changes-Since parameter in Nova API not working as expected

2017-06-26 Thread Jay Pipes

On 06/26/2017 02:27 PM, Jose Renato Santos wrote:

Jay,
Thanks for your response

Let me clarify my point.
I am not expecting to see a change in the updated_at column of a server when 
the rules of its security group changes.
I agree that would be a change to be handled by the Neutron Api, and would be 
too much to ask for Nova to keep track of that
But I would expect to see a change in updated_at column of a server instance 
when I associated(attach) a new security group to that server.
For me that is a change in the server and not on the security group. The 
security group was not changed, but the server was, as it is now associated 
with a different set of security groups
I hope that clarifies my question.


I think that's a pretty reasonable request actually. Care to create a 
bug on Launchpad for it?


Best,
-jay

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] [openstack][nova] Changes-Since parameter in Nova API not working as expected

2017-06-26 Thread Jay Pipes

On 06/26/2017 12:58 PM, Jose Renato Santos wrote:

Hi

I am accessing the nova api using the gophercloud SDK 
https://github.com/rackspace/gophercloud


I am running Openstack Newton installed with Openstack Ansible

I am accessing the “List Servers” call of the nova Api with the 
Changes-Since parameters for efficient polling


https://developer.openstack.org/api-guide/compute/polling_changes-since_parameter.html
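
A minimal example of that polling call ($TOKEN and $NOVA_URL are
placeholders for a token and the compute endpoint):

$ curl -s -H "X-Auth-Token: $TOKEN" \
       "$NOVA_URL/servers/detail?changes-since=2017-06-26T00:00:00Z"

Only servers updated after the given timestamp should come back.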

However, the API is not working as I expected.

When I stop or start a server instance, the API successfully detects the 
change in the server state and returns the server in the next call to 
ListServers with the Changes-Since parameter, as expected.


But when I attach a new security group to the server, the API does not 
detect any change in the state of the server and does not return the 
server in the next call  to ListServers with the Changes-Since parameter.


I would expect that changing the list of security groups attached to a 
server would be considered a change in the server state and reported 
when using the Changes-Since parameter, but that is not the behavior 
that I am seeing.


Can someone please let me know if this is a known bug?


Changes to an instance's security group rules are not considered when 
listing servers by updated_at field value. This is mostly because the 
security group [rules] are Neutron objects and are not one-to-one 
associated with a Nova instance.


I'm not sure it's a bug per-se, but I suppose we could entertain a 
feature request to set the updated_at timestamp column for all instances 
associated with a security group when that security group's rules are 
changed.


But that would probably open up a can of worms that Nova developers may 
not be willing to deal with. For instance, should we update the 
instances.update_at table every time a volume is changed? a network port 
that an instance is associated with? A heat stack that launched the 
volume? etc etc.


Best,
-jay

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] [openstack-dev] nova - nova-manage db sync - Issue

2017-06-14 Thread Jay Pipes
You have installed a really old version of Nova on that server. What are 
you using to install OpenStack?
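
A quick way to see what is actually installed (assuming Ubuntu packaging,
per the report below):

$ nova-manage --version
$ dpkg -l | grep -E 'nova-common|python-nova'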


Best,
-jay

On 06/14/2017 12:13 PM, SGopinath s.gopinath wrote:

Hi,

I'm trying to install OpenStack Ocata on
Ubuntu 16.04.2 LTS.

During installation of nova  at this step
su -s /bin/sh -c "nova-manage db sync" nova

I get the following error

An error has occurred:
Traceback (most recent call last):
   File "/usr/lib/python2.7/dist-packages/nova/cmd/manage.py", line 
1594, in main

 ret = fn(*fn_args, **fn_kwargs)
   File "/usr/lib/python2.7/dist-packages/nova/cmd/manage.py", line 644, 
in sync

 return migration.db_sync(version)
   File "/usr/lib/python2.7/dist-packages/nova/db/migration.py", line 
26, in db_sync
 return IMPL.db_sync(version=version, database=database, 
context=context)
   File 
"/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/migration.py", line 
53, in db_sync

 current_version = db_version(database, context=context)
   File 
"/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/migration.py", line 
84, in db_version

 _("Upgrade DB using Essex release first."))
NovaException: Upgrade DB using Essex release first.


Request the help for solving this issue.

Thanks,
S.Gopinath



___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack



___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Hello

2017-04-21 Thread Jay Pipes

On 04/21/2017 07:29 AM, TanXin wrote:

I want to know if I subscribed successfully.


yes.

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] EC2API Very Slow

2017-04-11 Thread Jay Pipes

On 04/10/2017 04:25 PM, Georgios Dimitrakakis wrote:

Hello again,

after some time...

The "nova list" command is very fast indeed and the only problem I
observe is through the EC2 API.

I have found the following bug and I believe it is related; what
do you people think?

https://bugs.launchpad.net/ec2-api/+bug/1619259


Looks like it could be related, yes. Though the bug above describes the 
EC2 metadata API being slow, not the list instances or get instance info 
call.


Can someone from the EC2 API team take a look here?

Best,
-jay


On 03/26/2017 04:06 PM, Georgios Dimitrakakis wrote:

Hello,

can someone let me know if it's expected behavior for the EC2 API to be
very slow in Ocata?

I have an old installation of OpenStack (Icehouse) with NOVA-EC2 and
when requesting an instance's info I am getting them back in 9sec.

In a newer Ocata installation with EC2API it takes around 53sec.

From a hardware perspective, the newer version is on a far better (in
terms of specifications) server.

Any ideas what might be the problem and how to resolve it? Or any ideas
how to debug it??


Both 9s and 53s are appalling performance for getting an instance's
info. Something is definitely amiss.

To determine the cause of the slowdown, first try to eliminate
various potential components. You say it's the EC2 API that is slow.
If you run a normal `nova list` (which will go through the OpenStack
API, not the EC2 API), what is the difference in performance? If you
see the same 9s vs. 53s performance, it's definitely not the EC2 API
that is at fault.

Best,
-jay

___
Mailing list:
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe :
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack



___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Neutron Network DNS Suffix via DHCP

2017-03-30 Thread Jay Pipes

Thanks for the update, Oisin, this is very useful! :)

Best,
-jay

On 03/28/2017 08:39 PM, Oisin O'Malley wrote:


There were 2 separate issues to resolve:

Firstly Nova was appending the default domain name .novalocal to the hostname 
it presents via the meta-data service. This can be resolved by setting 
dhcp_domain to an empty string in nova.conf on the Control node. For instance 
'dhcp_domain='. An instances name can now be set to a FQDN which will then be 
passed cloud-init via the metadata server.

Secondly, the Neutron DHCP service sets the default DNS suffix for a NIC to
openstacklocal. This causes delays in DNS lookups on external DNS servers, as
the wrong domain is used by default. Similarly to the above, this can be
resolved by setting 'dhcp_domain=' in the Neutron DHCP config file
dhcp_agent.ini. Once this is set and the DHCP service restarted, the
"--domain=" parameter no longer gets set on the DHCP agent's dnsmasq and no
default search suffix gets passed via DHCP.
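
In config-file form, the two changes look like this (a sketch; both
options lived in [DEFAULT] in that era):

# nova.conf on the controller:
[DEFAULT]
dhcp_domain =

# dhcp_agent.ini on the node running neutron-dhcp-agent:
[DEFAULT]
dhcp_domain =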

Setting the dns_domain Neutron network attribute appears to do nothing at the moment.


Regards,
Oisin


On 03/26/2017 11:49 PM, Matthew Taylor wrote:
Responded off-list.

For the benefit of the community, would one of you care to repeat the answer 
on-list please?

Thanks!
-jay

On 27/3/17 14:22, Oisin O'Malley wrote:

Hi All,

What is the correct way to set an instances DNS search suffix via
DHCP, currently all instances receive the default openstacklocal  DNS
search space.  We are using OpenStack Newton with Neutron Networking.

Setting dhcp_domain in dhcp_agent.ini will set the value globally for
all networks, which is of little use as we host many Windows VMs with
their own domains and DNS servers. Whatever is set as dhcp_domain is
passed to Neutron DHCP Agent dnsmasq subprocess via a
--domain= parameter.

With the Neutron DNS extension enabled, you can set the a networks
dns_domain attribute with "neutron net-update --dns-name", though
this attribute appears to be ignored by the DHCP server. Can this be
used to specify the DNS search space, if so how can it be configured?
I need to be able to configure this on a per network/subnet level.

Regards,
Oisin

Oisin O'Malley
Systems Engineer
Iocane Pty Ltd
763 South Road
Black Forest SA 5035

Office:+61 (8) 8413 1010
Fax:+61 (8) 8231 2050
Email:oisin.omal...@iocane.com.au
Web:www.iocane.com.au


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack



___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Troubles with networking part of openstack

2017-03-28 Thread Jay Pipes

+kevin benton

On 03/28/2017 07:20 AM, Bartłomiej Solarz-Niesłuchowski wrote:

Dear List,

I am a beginner OpenStack user.


Welcome to the OpenStack community! :)


I set up OpenStack with RDO on CentOS 7.

I have 6 machines:

they have two interfaces, enp2s0f0 (10.51.0.x) and enp2s0f1 (213.135.46.x)

on machine x=1 I set up the dashboard/neutron-server/nova/cinder/etc.; on
machines 2-6 I set up:

openstack-cinder-api.service,
openstack-cinder-scheduler.service,
openstack-cinder-volume.service,
openstack-nova-api.service,
openstack-nova-compute.service,
openstack-nova-conductor.service,
openstack-nova-consoleauth.service,
openstack-nova-novncproxy.service,
openstack-nova-scheduler.service


I am presuming you want machines 2-6 as "compute nodes" to put VMs on? 
If so, you definitely do not want to put anything *other* than the 
following on those machines:


openstack-cinder-volume.service
openstack-nova-compute.service

All the other services belong on a "controller node", where you've put 
the neutron server, dashboard, database, MQ, etc.



I run a virtual machine instance which has IP 10.0.3.4 (on machine 5)

I set up a router on machine 1

I can ping the IP of the router from the virtual instance.

I see pings from the virtual machine on machine 1 (where the router sits)


err, it looks to me that your machine 1 is a controller, not a compute 
node? VMs should go on machines 2-6, unless I'm reading something 
incorrectly.



But I have totally no idea how to set up network connectivity with the
outside world.





[root@song-of-the-seas-01 ~(keystone_admin)]# ip ro
default via 213.135.46.254 dev br-ex


So here is your default gateway, on br-ex...


10.51.0.0/24 dev enp2s0f0  proto kernel  scope link  src 10.51.0.1
213.135.46.0/24 dev br-ex  proto kernel  scope link  src 213.135.46.180

[root@song-of-the-seas-01 ~(keystone_admin)]# ip a | grep state
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN qlen 1
2: enp2s0f0:  mtu 1500 qdisc mq state
UP qlen 1000
3: enp2s0f1:  mtu 1500 qdisc mq master
ovs-system state UP qlen 1000
4: ovs-system:  mtu 1500 qdisc noop state DOWN qlen
1000
5: br-int:  mtu 1500 qdisc noop state DOWN qlen 1000
6: br-ex:  mtu 1500 qdisc noqueue state
UNKNOWN qlen 1000


And here, it's indicating br-ex is in an unknown state. Also, br-int is 
in DOWN state, not sure if that is related. My guess would be to bring 
up br-ex and see what is failing about the bring-up.
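
A few commands worth trying on machine 1 (assuming the RDO/OVS layout
shown above):

$ ip link set br-ex up
$ ovs-vsctl show        # check that enp2s0f1 is a port on br-ex
$ ovs-ofctl show br-ex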


Of course, I'm no networking expert so hopefully one of the Neutron devs 
can pop in to help. :)


Best,
-jay


7: vxlan_sys_4789:  mtu 65470 qdisc
noqueue master ovs-system state UNKNOWN qlen 1000
8: br-tun:  mtu 1500 qdisc noop state DOWN qlen 1000

[root@song-of-the-seas-01 ~(keystone_admin)]# tcpdump -i vxlan_sys_4789
tcpdump: WARNING: vxlan_sys_4789: no IPv4 address assigned
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on vxlan_sys_4789, link-type EN10MB (Ethernet), capture size
65535 bytes
13:18:26.060031 IP 10.0.3.4 > see-you-later.wsisiz.edu.pl: ICMP echo
request, id 3713, seq 36985, length 64
13:18:27.060032 IP 10.0.3.4 > see-you-later.wsisiz.edu.pl: ICMP echo
request, id 3713, seq 36986, length 64
13:18:28.060057 IP 10.0.3.4 > see-you-later.wsisiz.edu.pl: ICMP echo
request, id 3713, seq 36987, length 64
13:18:29.060006 IP 10.0.3.4 > see-you-later.wsisiz.edu.pl: ICMP echo
request, id 3713, seq 36988, length 64




___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack





Re: [Openstack] Neutron Network DNS Suffix via DHCP

2017-03-27 Thread Jay Pipes

On 03/26/2017 11:49 PM, Matthew Taylor wrote:

Responded off-list.


For the benefit of the community, would one of you care to repeat the 
answer on-list please?


Thanks!
-jay


On 27/3/17 14:22, Oisin O'Malley wrote:

Hi All,

What is the correct way to set an instances DNS search suffix via
DHCP, currently all instances receive the default openstacklocal  DNS
search space.  We are using OpenStack Newton with Neutron Networking.

Setting dhcp_domain in dhcp_agent.ini will set the value globally for
all networks, which is of little use as we host many Windows VMs with
their own domains and DNS servers. Whatever is set as dhcp_domain is
passed to Neutron DHCP Agent dnsmasq subprocess via a
--domain= parameter.

With the Neutron DNS extension enabled, you can set the a networks
dns_domain attribute with "neutron net-update --dns-name", though this
attribute appears to be ignored by the DHCP server. Can this be used
to specify the DNS search space, if so how can it be configured? I
need to be able to configure this on a per network/subnet level.

Regards,
Oisin

Oisin O'Malley
Systems Engineer
Iocane Pty Ltd
763 South Road
Black Forest SA 5035

Office: +61 (8) 8413 1010
Fax: +61 (8) 8231 2050
Email: oisin.omal...@iocane.com.au
Web: www.iocane.com.au

Better for business

___
Mailing list:
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe :
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack



___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack




Re: [Openstack] EC2API Very Slow

2017-03-27 Thread Jay Pipes

On 03/26/2017 04:06 PM, Georgios Dimitrakakis wrote:

Hello,

can someone let me know if it's an expected behavior the EC2API to be
very slow in Ocata?

I have an old installation of OpenStack (Icehouse) with NOVA-EC2 and
when requesting an instance's info I am getting them back in 9sec.

In a newer Ocata installation with EC2API it takes around 53sec.

On a hardware perspective view the newer version is on a far better (in
terms of specifications) server.

Any ideas what might be the problem and how to resolve it? Or any ideas
how to debug it??


Both 9s and 53s are appalling performance for getting an instance's 
info. Something is definitely amiss.


To determine the cause of the slowdown, first try to eliminate various 
potential components. You say it's the EC2 API that is slow. If you run 
a normal `nova list` (which will go through the OpenStack API, not the 
EC2 API), what is the difference in performance? If you see the same 9s 
vs. 53s performance, it's definitely not the EC2 API that is at fault.
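
A rough way to compare the two paths side by side, assuming the
euca2ools client is what you use against the EC2 API (adjust to your
actual client):

    time nova list                   # OpenStack Compute API path
    time euca-describe-instances     # EC2 API path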


Best,
-jay

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Placement service credentials do not work

2017-03-16 Thread Jay Pipes

The error is:

On 03/16/2017 07:01 AM, Vikash Kumar wrote:

Placement service credentials do not work


Check that the user "placement" in the project "service" having the 
password "testetst" can access the Keystone authentication endpoint at 
"http://10.1.110.98:5000";.

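
A quick manual check of those exact credentials, sketched with the
openstack CLI (the v3/domain flags here are assumptions; adjust to your
identity API version):

    openstack --os-auth-url http://10.1.110.98:5000/v3 \
      --os-project-name service --os-username placement \
      --os-password testetst \
      --os-user-domain-name Default --os-project-domain-name Default \
      token issue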

Best,
-jay

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] devstack and openbaton integration

2017-03-08 Thread Jay Pipes

On 03/08/2017 01:15 PM, Antonio Cappuccio wrote:

Hi all, we are configuring openbaton on top of openstack.

We have installed devstack and openbaton on the same VM, with ubuntu 14.04.

Both openbaton and openstack dashboards look fine, so we think we have
installed both products in the right way. Now we are trying to integrate
openstack and openbaton for working together.

When we try to create a new VIM instance, in order to try to link
openbaton with openstack, we get back the following error: /Not listed
Images successfully of VimInstance test. Caused by:
org.openbaton.exceptions.VimDriverException: Unauthorized Code: 422/

Below you can also find the configuration params for openbaton VIM
instance (getting us the error above)

  * PoP Name: test
  * URL: http://localhost:5000/v2.0 (please note that we have checked
the URL from devstack dashboard)
  * Tenant: demo (please note that we have checked the value from
devstack dashboard)
  * Username: demo Password: */*/***
  * Type: openstack Key Pair: ? which value should we set here? 


I don't know anything about openbaton, but the above is asking for the 
name of the keypair to use for the user "demo" when openbaton uses your 
OpenStack installation to launch instances.


You should go to the Keypairs tab when logged in as "demo" to the 
Horizon OpenStack dashboard and create a keypair and then supply the 
name of that keypair in your config above.
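
The same can be done from the CLI; a minimal sketch, with "demo-key" as
a hypothetical keypair name:

    nova keypair-add demo-key > demo-key.pem
    chmod 600 demo-key.pem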


Best,
-jay

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Query Reg. Mirantis Community Version

2017-03-06 Thread Jay Pipes

On 03/06/2017 06:05 AM, Raja T Nair wrote:

Hi,

Can I ask queries about Mirantis community version on this list?
If not, can somebody point to an appropriate link?


Hi Raja,

There's no such thing as Mirantis Community version. Are you referring 
to OpenStack Fuel? Perhaps the Mirantis OpenStack packages? Something 
else entirely?


Please elaborate :)

Best,
-jay

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] ephemeral disks location

2017-01-25 Thread Jay Pipes

On 01/25/2017 03:19 AM, Eugen Block wrote:

All these instances are in our ceph cluster.

The instance path is defined in nova.conf:

# Where instances are stored on disk (string value)
instances_path = $state_path/instances

If one compute node fails but it's able to initiate a migration, the
same instance directory is created on the new host and the disks are
copied to its new compute node.


Why would ephemeral instance disks be copied if the backing store is a 
shared system like Ceph. There would be no need to copy a disk image 
since the destination host's /var/lib/nova/instances directory is 
exactly the same as the source's, right?
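
For reference, a compute node that keeps ephemeral disks directly in
Ceph would carry something like this in nova.conf (the pool and user
names here are assumptions, not taken from the thread):

    [libvirt]
    images_type = rbd
    images_rbd_pool = vms
    images_rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_user = cinder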


Best,
-jay

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] [nova] Accessing instance.flavor.projects fails due to orphaned Flavor

2017-01-12 Thread Jay Pipes

On 01/12/2017 05:31 AM, Balazs Gibizer wrote:

Hi,

The flavor field of the Instance object is a lazy-loaded field and the
projects field of the Flavor object is also lazy-loaded. Now it seems to
me that when the Instance object lazy loads instance.flavor then the
created Flavor object is orphaned [1] therefore instance.flavor.projects
will never work and result in an exceptuion: OrphanedObjectError: Cannot
call _load_projects on orphaned Flavor object.

Is the Flavor left orphaned by intention or it is a bug?


Depends :) I would say it is intentional for the most part. Is there a 
reason why the Flavor *notification* payload needs to contain a list of 
projects associated with the flavor? My gut says that information isn't 
particularly germane to the relationship of the Instance to the Flavor?



The payload of instance. notifications contains the flavor
related data of the instance in question and to have the flavor.projects
in the payload as well the code would need to access the projects field
via instance.flavor.projects.


Sure, I understand it would ease the access to the projects field in the 
notification payload packing, but is there really a reason to bother 
retrieving and sending that data each time an Instance notification 
event is made (which is quite often)?


Best,
-jay

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] [OpenStack] [Swift] Is it possible to create chunked upload?

2016-08-23 Thread Jay Pipes

On 08/23/2016 10:13 AM, Alexandr Porunov wrote:

Hello,

My server accepts files in chunks (4 Kbytes each chunk. File's size can
be till 8 GB). Is it possible somehow store those chunks in Swift like a
single file? Does somebody know any solution to solve this problem?


Yes, you can do this.

The Glance Swift driver shows an example of how to do this:

https://github.com/openstack/glance_store/blob/master/glance_store/_drivers/swift/store.py#L829-L944
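
The underlying pattern is Swift's large-object support: upload each
chunk as a segment object, then write a zero-byte manifest that
presents the segments as one file. A rough curl sketch (container and
object names are made up; $TOKEN and $STORAGE_URL come from your auth):

    # upload chunks as segments, zero-padded so they sort in order
    curl -X PUT -H "X-Auth-Token: $TOKEN" --data-binary @chunk-0001 \
        $STORAGE_URL/bigfiles_segments/myfile/0001
    curl -X PUT -H "X-Auth-Token: $TOKEN" --data-binary @chunk-0002 \
        $STORAGE_URL/bigfiles_segments/myfile/0002
    # dynamic large object manifest stitching them together
    curl -X PUT -H "X-Auth-Token: $TOKEN" --data-binary '' \
        -H "X-Object-Manifest: bigfiles_segments/myfile/" \
        $STORAGE_URL/bigfiles/myfile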

Best,
-jay

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Nova scheduler - change server group name

2016-07-20 Thread Jay Pipes

On 07/20/2016 10:13 AM, Frank Ritchie wrote:

Hi all

Does anyone know if it is safe to change the name of a Nova Scheduler
server group directly in the database?


Yeah, should be safe to do this. instance_groups.name is a non-unique 
column that isn't used for indexes, lookups or really anything other 
than translating the initial request in nova boot to an instance group ID.
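
A sketch of the change itself, with a made-up group UUID (back up the
database first, as with any direct edit):

    mysql nova -e "UPDATE instance_groups SET name = 'new-group-name' \
                   WHERE uuid = '11111111-2222-3333-4444-555555555555';"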


Best,
-jay

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] vCPU -> pCPU MAPPING

2016-07-08 Thread Jay Pipes

On 07/08/2016 09:59 AM, Brent Troge wrote:

context - high performance private cloud with cpu pinning

Is it possible to map vCPUs to specific pCPUs ?
Currently I see you can only direct which vCPUs are mapped to a specific
NUMA node

hw:numa_cpus.0=1,2,3,4

However, to get even more granular, is it possible to create a flavor
which maps vCPU to specific pCPU within a numa node ?

Something like:
hw:numa_cpus.<numa-node>-<vCPU>=<pCPU>

hw:numa_cpus.0-1=1
hw:numa_cpus.0-2=2


I presume you have more than a single compute node in your deployment?

If you had a system that had a flavor pin vCPUs to individual pCPUs, you 
would essentially be stating that an instance launched with that flavor 
would always consume that particular pCPU set.


In that case, what you have isn't a cloud, which is defined by a 
virtualized, API-driven, hardware-abstracted, service-driven system; 
what you have is just a specialized hardware appliance over which you 
are layering a REST API that wasn't designed for that appliance.


The NUMA placement policy controls that exist in Nova already allow a 
(sometimes ludicrous) amount of hardware-specific control for what 
should be a virtualized abstraction of that hardware. I personally don't 
care to see our abstraction layer any further destroyed with 
hardware-specific interfaces.


Best,
-jay

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] exchange between a guest and its compute node

2016-06-20 Thread Jay Pipes

On 06/20/2016 02:26 AM, Jean-Pierre Ribeauville wrote:

Hi,

Is there any way to for an instance to send any data to the compute node
  ( for my purpose a status byte is enough) ?


Generally, no, we don't want guests to be able to communicate with the 
host via open channels. If you're looking for a way for a host to 
respond to certain guest health/status conditions, you might want to 
look at using a libvirt watchdog device. More information on this can be 
found here:


https://wiki.openstack.org/wiki/LibvirtWatchdog
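
For reference, the watchdog is enabled per flavor via an extra spec; a
minimal sketch ("m1.small" is just an example flavor):

    nova flavor-key m1.small set hw:watchdog_action=reset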

Best,
-jay

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] How to set a host aggregate in several AZ via the GUI

2016-06-15 Thread Jay Pipes

On 06/15/2016 03:06 AM, Jean-Pierre Ribeauville wrote:

Hi,

Is it possible to add a same aggregate in several AZ  via the Horizon GUI ?


No this is not possible. An aggregate may only belong to a single AZ.

Best,
-jay

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] nova service-disable

2016-02-22 Thread Jay Pipes

On 02/22/2016 05:38 AM, Yngvi Páll Þorfinnsson wrote:

Hi

I want to stop instances from beeing created on one of our compute
nodes, i.e. „*compute1*“

But I want all current instances on *compute1* to be active and available.

I thus disable the nova service for this node:

# nova service-disable *compute1* nova-compute

Nova status of *compute1* will then be disabled.

But the state of *compute1* will still be UP.

*/My question is:/*

What about the current instances on this compute node?

Will they still be active and available?


Yep. :)

That nova compute node will simply be removed from future scheduling 
until you re-enable the service. Otherwise, there is no other effect.
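
A quick way to confirm, sketched with the command from your mail:

    nova service-disable compute1 nova-compute
    nova service-list    # compute1 shows Status=disabled, State=up;
                         # its instances keep running untouched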


Best,
-jay

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Error Returned by `nova list-extensions` Command

2015-12-06 Thread Jay Pipes

On 12/03/2015 02:24 AM, Richard Raseley wrote:

I am tracking down an issue I am having in Horizon ( HTTPD output
http://paste.openstack.org/show/480704/ ) which lead me to looking at
the Nova extensions. When I try to execute a `nova list-extensions`
command with the debug flag, I get the following error output (
http://paste.openstack.org/show/480702/ ).

All Nova services appear to be running and healthy. I somewhat
suspect that I have a malformed endpoint for nova, as it doesn’t
include ‘extensions’ as part of the URI in the `nova —debug
list-extensions` output. Any assistance would be appreciated.

This is OpenStack Kilo using RDO packaging.


Hi Richard,

If you will note the following in your debug output:

"http://openstack-test.domain.local:8774/v2/v2.0/d0064a4d07594a4fb93bfe7b15fbdfef";

It looks like you are either improperly setting the base compute API URI 
manually when calling the novaclient or the service catalog improperly 
contains a doubled-up "v2/v2.0/" part of the URI. That should just be 
"v2/" not "v2/v2.0".


If you do a `keystone catalog`, what do you see in the returned results 
for the compute service type?
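
For comparison, a healthy Kilo-era compute entry would look roughly
like this (hostname reused from your paste; exact layout varies):

    keystone catalog --service compute
    # publicURL | http://openstack-test.domain.local:8774/v2/<tenant_id>
    # i.e. a single "v2/" path component, with no stray "v2.0"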


Best,
-jay

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] keypair description in documentation wrong?

2015-10-16 Thread Jay Pipes

On 10/16/2015 11:08 AM, Chris Friesen wrote:


Someone recently asked me a question about keypairs and multiple users
and I thought I'd clarify a few things:

1) Each keypair is associated with a specific user.

2) A user cannot see a keypair belonging to another user.

3) If a user is part of multiple projects then any keypair owned by that
user is available to them regardless of what project they're currently
using.

Are the above correct?


Yes.


However, the above information doesn't seem to be explicitly documented
anywhere.

For example,
"http://docs.openstack.org/user-guide/cli_nova_configure_access_security_for_instances.html";
says, "You can create at least one key pair for each project. You can
use the key pair for multiple instances that belong to that project."
Note the fact that it's talking about the project, not the user.

Similarly,
"http://docs.openstack.org/user-guide/configure_access_and_security_for_instances.html";
says, "Each project should have at least one key pair.The key pair
can be used for multiple instances that belong to a project."  Later on
it says "Create at least one key pair for each project.".  Again,
project rather than user.


Yes, I believe the documentation should be fixed to focus on the user 
owner not the project/tenant. Please do file a doc bug.


Thanks Chris!
-jay

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] [openstack][nova-scheduler]

2015-10-01 Thread Jay Pipes

On 10/01/2015 04:38 PM, Rahul Cheyanda wrote:

Hello,

I had a question regarding utilization-aware-scheduling,

- is network utilization considered for scheduling ? (in Stable/Kilo ?
or in Stable/Liberty?)


No, it is not.

Best,
-jay

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] New instances booting time

2015-08-14 Thread Jay Pipes

On 08/13/2015 11:37 PM, Ivan Derbenev wrote:

*From:*Ivan Derbenev [mailto:ivan.derbe...@tech-corps.com]
*Sent:* Wednesday, August 5, 2015 1:21 PM
*To:* openstack@lists.openstack.org
*Subject:* [Openstack] New instances booting time

Hello, guys, I have a question

We now have OS Kilo + KVM+ Ubuntu 14.04

Nova-compute.conf:

[libvirt]

virt_type=kvm

images_type = lvm

images_volume_group =openstack-controller01-ky01-vg

volume_clear = none

the problem is when nova boots a new instance it’s SUPER slow

exactly this step:

Running cmd (subprocess): sudo nova-rootwrap /etc/nova/rootwrap.conf
qemu-img convert -O raw
/var/lib/nova/instances/_base/999f7fff2521e4a7243c9e1d21599fd64a19b42e
/dev/openstack-controller01-ky01-vg/5f831046-435c-4636-8b71-a662327b608c_disk

Well, I understand what this step is doing – it’s copying raw image to lvm.

How can we speed it up?

I don’t wanna have instance from 100GB image booted for 4 hours


Don't use base images that are 100G in size. Quite simple, really.
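
A quick way to see what the base image actually claims, using the path
from your log:

    qemu-img info \
      /var/lib/nova/instances/_base/999f7fff2521e4a7243c9e1d21599fd64a19b42e
    # "virtual size" is what gets written out to the LVM volume on boot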

Best,
-jay

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Nova migrate-flavor-data woes

2015-07-27 Thread Jay Pipes

On 07/26/2015 01:15 PM, Lars Kellogg-Stedman wrote:

So, the Kilo release notes say:

 nova-manage migrate-flavor-data

But nova-manage says:

 nova-manage db migrate_flavor_data

But that says:

 Missing arguments: max_number

And the help says:

 usage: nova-manage db migrate_flavor_data [-h]
    [--max-number <number>]

Which indicates that --max-number is optional, but whatever, so you
try:

 nova-manage db migrate_flavor_data --max-number 100

And that says:

 Missing arguments: max_number

So just for kicks you try:

 nova-manage db migrate_flavor_data --max_number 100

And that says:

 nova-manage: error: unrecognized arguments: --max_number

So finally you try:

 nova-manage db migrate_flavor_data 100

 And holy poorly implemented client, Batman, it works.


LOL. Well, the important thing is that the thing eventually worked. ;P

In all seriousness, though, yeah, the nova-manage CLI tool is entirely 
different from the main python-novaclient CLI tool. It's not been a 
priority whatsoever to clean it up, but I think it would be some pretty 
low-hanging fruit to make the CLI consistent with the design of, say, 
python-openstackclient...


Perhaps something we should develop a backlog spec for.

Best,
-jay

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] MySQL IO performance

2015-06-29 Thread Jay Pipes

On 06/29/2015 08:25 AM, Narayanan, Krishnaprasad wrote:

Hi Jay,

The MySQL database version on both VMs is the same: "mysql Ver 14.14 Distrib 5.5.40, for debian-linux-gnu (x86_64) using readline 6.3". The my.cnf settings are the same on both VMs.


Please see my comment about too many variables changing between 
environments to make it possible to determine causal relationships 
between the environment and the performance degradation.


Best,
-jay


Regards,
Krishnaprasad

-Original Message-----
From: Jay Pipes [mailto:jaypi...@gmail.com]
Sent: Montag, 29. Juni 2015 14:05
To: openstack@lists.openstack.org
Subject: Re: [Openstack] MySQL IO performance

On 06/29/2015 07:19 AM, Narayanan, Krishnaprasad wrote:

Hello all,

I ran tests under the following settings to measure the IO performance
of MySQL database. I used Sysbench as the client workload generator. I
found that the performance of MySQL (both resource utilization and
application) has degraded by more than 50% after switching from
setting
a) to setting b).

Setting a): Controller & Compute node OS - Ubuntu 12.04, cloud software:
OpenStack Havana, networking: nova-network, Hypervisor: KVM, Libvirt
version: 1.1.1, QEMU version: 1.5.0, Host disk write cache: enabled,
Guest disk cache: none and host OS scheduler: CFQ.

Setting b): Controller & Compute node OS - Ubuntu 14.04, cloud software:
OpenStack Icehouse, networking: Neutron, Hypervisor: KVM, Libvirt
version: 1.2.2, QEMU version: 2.0.0, Host disk write cache: enabled,
Guest disk cache: none and host OS scheduler: CFQ.

May I know has anybody performed such tests and if yes, can you please
share details on the IO performance of VMs and the application?

I don't know whether this is the right question to ask in this forum. Can
somebody share information at a high level about the improvements made
in Libvirt (from version 1.1.1 to 1.2.2) for handling IO requests?


Since you changed so many variables between setting a) and b), there's no 
possible way to say which one of those changes resulted in the degraded 
performance of MySQL.

You also have not given the different versions of MySQL you were running, nor 
provided the diffs of the my.cnf settings you used...

I'd recommend trying to reduce the delta of your environment changes and then 
re-running sysbench.

Best,
-jay

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack





Re: [Openstack] MySQL IO performance

2015-06-29 Thread Jay Pipes

On 06/29/2015 07:19 AM, Narayanan, Krishnaprasad wrote:

Hello all,

I ran tests under the following settings to measure the IO performance
of MySQL database. I used Sysbench as the client workload generator. I
found that the performance of MySQL (both resource utilization and
application) has degraded by more than 50% after switching from setting
a) to setting b).

Setting a): Controller & Compute node OS - Ubuntu 12.04, cloud software:
OpenStack Havana, networking: nova-network, Hypervisor: KVM, Libvirt
version: 1.1.1, QEMU version: 1.5.0, Host disk write cache: enabled,
Guest disk cache: none and host OS scheduler: CFQ.

Setting b): Controller & Compute node OS - Ubuntu 14.04, cloud software:
OpenStack Icehouse, networking: Neutron, Hypervisor: KVM, Libvirt
version: 1.2.2, QEMU version: 2.0.0, Host disk write cache: enabled,
Guest disk cache: none and host OS scheduler: CFQ.

May I know has anybody performed such tests and if yes, can you please
share details on the IO performance of VMs and the application?

I don’t know whether this is the right question to ask in this forum. Can
somebody share information at a high level about the improvements made
in Libvirt (from version 1.1.1 to 1.2.2) for handling IO requests?


Since you changed so many variables between setting a) and b), there's 
no possible way to say which one of those changes resulted in the 
degraded performance of MySQL.


You also have not given the different versions of MySQL you were 
running, nor provided the diffs of the my.cnf settings you used...


I'd recommend trying to reduce the delta of your environment changes and 
then re-running sysbench.


Best,
-jay

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Scheduler pack VMs on a single host

2015-06-09 Thread Jay Pipes

On 06/09/2015 06:49 PM, Georgios Dimitrakakis wrote:

Hi all!

I would like to know if it's possible to pack as many VMs as possible
(based on the available resources) on one host
before populating another.

What I have seen so far is that by default it tries to balance the
available VMs on different hosts.


Yup, by default, Nova will "spread" the VMs across hosts. If you want to 
"pack" VMs, then you need to set the following configuration option:


ram_weight_multiplier = -1.0

The default value is 1.0, which spreads. Negative numbers will pack.
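
In nova.conf on the scheduler host, that looks like:

    [DEFAULT]
    # negative value packs instances onto the fewest hosts
    ram_weight_multiplier = -1.0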

Best,
-jay

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] VMs not terminated correctly and undeletable, with cinder and CEPH

2015-06-05 Thread Jay Pipes

Hi Romain,

I think you may be experiencing this bug:

https://bugs.launchpad.net/nova/+bug/1447884

with a fix currently up:

https://review.openstack.org/#/c/177084/

Best,
-jay

On 06/05/2015 04:02 AM, Aviolat Romain wrote:

Dear Openstack community

I have a strange intermittent problem with cinder when VMs with volumes 
attached are terminated. Sometimes the process doesn't work and the VMs are 
stuck in an undeletable state.

Some info about my setup:
*
* Openstack Juno from ubuntu-cloud repo
* Ubuntu 14.04 LTS on every machines
* CEPH (giant) as storage backend on the storage nodes
* 3x controllers in HA (HA-proxy, Galera, CEPH-monitors...)
* RabbitMQ as messaging system
*

I put my whole setup in debug mode and here's what happened for a problematic 
VM:

Info about the VM and its volume:
*
Vm: suricata-bbd3c3f5-85fb-4c90-8e17-e7f083a4a0bc
Volume attached: 821006f1-c655-4589-ba3e-7c6804c8e120 on /dev/vda
Volume size: 500GB
Running on hypervisor: cloudcompute2
Libvirt instance ID: instance-02e3
*

To begin here's the VM action log from Horizon, we can see that the user 
deleted the VM at 1.23 PM:
*
Request ID  Action  
Start TimeUser ID   
  Message
req-22f33ed5-a231-4ea2-95fa-0022bf731079deleteJune 4, 2015, 1:23 p.m.   
 27cf3aaa0d7942009b03eabf7f686849Error
*

Here's the corresponding action on the controller that received the request:
*
2015-06-04 13:23:16.693 32302 DEBUG nova.compute.api 
[req-22f33ed5-a231-4ea2-95fa-0022bf731079 None] [instance: 
bbd3c3f5-85fb-4c90-8e17-e7f083a4a0bc] Going to try to terminate instance delete 
/usr/lib/python2.7/dist-packages/nova/compute/api.py:1802
2015-06-04 13:23:16.774 32302 DEBUG nova.quota 
[req-22f33ed5-a231-4ea2-95fa-0022bf731079 None] Created reservations 
['2221d555-1db0-4c36-9bd6-6dc815c22fc9', 
'3b70b444-7633-4a6b-b8b2-16128ee1469c', '6b8b2421-cfc1-4568-8bd8-430a982d977f'] 
reserve /usr/lib/python2.7/dist-packages/nova/quota.py:1310
2015-06-04 13:23:16.799 32302 INFO nova.osapi_compute.wsgi.server 
[req-22f33ed5-a231-4ea2-95fa-0022bf731079 None] 172.24.1.17 "DELETE 
/v2/c9324c924a5049e7922882aff55c3813/servers/bbd3c3f5-85fb-4c90-8e17-e7f083a4a0bc 
HTTP/1.1" status: 204 len: 179 time: 0.1563590
2015-06-04 13:23:16.883 32296 DEBUG nova.api.openstack.wsgi 
[req-903c3ea2-151a-4f8c-a48a-6991721c8c18 None] Calling method '>' _process_stack 
/usr/lib/python2.7/dist-packages/nova/api/openstack/wsgi.py:937
2015-06-04 13:23:16.884 32296 DEBUG nova.api.openstack.compute.servers 
[req-903c3ea2-151a-4f8c-a48a-6991721c8c18 None] Removing options 'project_id, 
limit' from query remove_invalid_options 
/usr/lib/python2.7/dist-packages/nova/api/openstack/compute/servers.py:1533
*

Then on nova-compute on the compute node hosting the VM, we receive the 
DELETE-instance request:
*
2015-06-04 13:23:16.847 1971 AUDIT nova.compute.manager 
[req-22f33ed5-a231-4ea2-95fa-0022bf731079 None] [instance: 
bbd3c3f5-85fb-4c90-8e17-e7f083a4a0bc] Terminating instance
2015-06-04 13:23:16.854 1971 INFO nova.virt.libvirt.driver [-] [instance: 
bbd3c3f5-85fb-4c90-8e17-e7f083a4a0bc] Instance destroyed successfully.
*

Then the network device is removed:
*
2015-06-04 13:23:17.558 1971 DEBUG nova.network.linux_net 
[req-22f33ed5-a231-4ea2-95fa-0022bf731079 None] Net device removed: 
'qvo20393401-b2' delete_net_dev 
/usr/lib/python2.7/dist-packages/nova/network/linux_net.py:1381
*

The the libvirt config file is removed:
*
2015-06-04 13:23:18.048 1971 INFO nova.virt.libvirt.driver 
[req-22f33ed5-a231-4ea2-95fa-0022bf731079 None] [instance: 
bbd3c3f5-85fb-4c90-8e17-e7f083a4a0bc] Deleting instance files 
/var/lib/nova/instances/bbd3c3f5-85fb-4c90-8e17-e7f083a4a0bc_del
*

The next one is strange:
*
2015-06-04 13:23:19.303 1971 DEBUG nova.virt.libvirt.driver 
[req-22f33ed5-a231-4ea2-95fa-0022bf731079 None] [instance: 
bbd3c3f5-85fb-4c90-8e17-e7f083a4a0bc] Could not determine fibre channel world 
wide node names get_volume_connector 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py:1280
2015-06-04 13:23:19.358 1971 DEBUG nova.volume.cinder 
[req-22f33ed5-a231-4ea2-95fa-0022bf731079 None] Cinderclient connection created 
using URL: http://haproxy-api:8776/v2/c9324c924a5049e7922882aff55c3813 
get_cinder_client_version 
/usr/lib/python2.7/dist-packages/nova/volume/cinder.py:255
*

We can see that the instance was correctly removed from the computenode:
*
[CloudCompute2:~] 1 $ sudo virsh list --all
 Id    Name                State
------------------------------------
 31    instance-043b       running
 32    instance-02d4

Re: [Openstack] [Glance] Images

2015-06-04 Thread Jay Pipes

It is below.

http://docs.openstack.org/image-guide/content/ch_creating_images_manually.html

Best,
-jay

On 06/04/2015 08:14 PM, Michael Lindner wrote:

-Original Message-
From: Michael Lindner 
To: Muhammed Salehi 
Sent: Fri, 05 Jun 2015 9:41 AM
Subject: Re: [Openstack] Glance Images

Thanks for those suggestions; unfortunately I'm in a corporate
environment where I am not able to just throw out images and software
created by any random company and need to have admin teams build the SOE
according to very rigid rules.

Can I ask once more, in this context, perhaps directed more to the
people who have very kindly contributed images so far, for suggestions
on what to read.

Thanks again.



-Original Message-
From: Muhammed Salehi 
To: Jay Pipes 
Cc: "openstack@lists.openstack.org" 
Sent: Fri, 05 Jun 2015 6:44 AM
Subject: Re: [Openstack] Glance Images

I guess you want to create optimized images with a minimal size to
register in Glance, like the official images (e.g. the Fedora image:
~151 MB).

However, you can't get there until you know what a filesystem is and
how it works.
Read about these:

  * FileSystems
  o Ext4, Ext3, Ext2, XFS, JFS, ZFS, UnixFS
  o VFS, FUSE, inode, superblock



Cheers,

Muhammed


On Fri, Jun 5, 2015 at 12:00 AM, Jay Pipes <jaypi...@gmail.com> wrote:

Unrelated to the Glance Images building thing, but Jose, I wanted to
point you to the Bifrost project, which does standalone Ironic with
Ansible :)

https://github.com/juliakreger/bifrost/blob/master/README.rst

Might be something you and Julia Kreger could collaborate on?

Best,
-jay

On 06/04/2015 09:16 AM, José Riguera López wrote:

Hi,

Have a look here:

http://docs.openstack.org/image-guide/content/ch_creating_images_manually.html

Also, I am creating some documentation (still not finished) and
I have
just written a small manual about howto create images for Ironic
(but
the process is quite similar):

https://github.com/jriguera/ansible-ironic-standalone/wiki/Creating-Images-Manually

Regards,

2015-06-04 14:34 GMT+02:00 Michael Lindner <mich...@tropyx.com>:

 Does anyone have a link to best-practice image creation for
 openstack glance?

 Eg how to take a windows/linux install and turn it into a
usable
 generic image to create instances from.

 Thanks.





--
José Riguera <jrigu...@gmail.com>








___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack





Re: [Openstack] Glance Images

2015-06-04 Thread Jay Pipes
Unrelated to the Glance Images building thing, but Jose, I wanted to 
point you to the Bifrost project, which does standalone Ironic with 
Ansible :)


https://github.com/juliakreger/bifrost/blob/master/README.rst

Might be something you and Julia Kreger could collaborate on?

Best,
-jay

On 06/04/2015 09:16 AM, José Riguera López wrote:

Hi,

Have a look here:
http://docs.openstack.org/image-guide/content/ch_creating_images_manually.html

Also, I am creating some documentation (still not finished) and I have
just written a small manual about howto create images for Ironic (but
the process is quite similar):
https://github.com/jriguera/ansible-ironic-standalone/wiki/Creating-Images-Manually

Regards,

2015-06-04 14:34 GMT+02:00 Michael Lindner <mich...@tropyx.com>:

Does anyone have a link to best-practice image creation for
openstack glance?

Eg how to take a windows/linux install and turn it into a usable
generic image to create instances from.

Thanks.

___
Mailing list:
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org

Unsubscribe :
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack




--
José Riguera <jrigu...@gmail.com>


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack





Re: [Openstack] Production grade guidelines

2015-03-31 Thread Jay Pipes

On 03/31/2015 02:23 AM, somshekar kadam wrote:

Any pointers or link to make openstack production grade/guidelines.


This question is too broad to answer. What do you consider "production 
grade"? Are you referring to resiliency? Scale? What workloads run on 
the cloud? Storage size? Throughput? Public? Private? Lots of tenants?


-jay

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Nova network and neutron

2015-03-04 Thread Jay Pipes

On 03/04/2015 09:00 AM, jankihchhat...@gmail.com wrote:

‎Hi

My colleague and I got into a discussion today about nova network. From
my understanding, a setup is said to be using Neutron if it has Neutron
agents installed and running, not because it is a three node
architecture (meaning 3 physical machines). And a setup is using nova
network if it doesn't have Neutron agents running, not because it is a
2 node architecture.

The number of nodes needed depends on the configuration of the physical
hardware; we have a Neutron setup with 2 nodes as well.

Or is it that 2 node setup is nova network and 3 node is neutron?


The difference has nothing to do with how many nodes your deployment 
has. Neutron is a stand-alone L2 and L3 network management service. 
nova-network is the built-into-Nova network management service. Neutron 
is intended to eventually replace nova-network.


Best,
-jay

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Unable to create snapshots.

2015-02-09 Thread Jay Pipes

On 02/08/2015 11:39 PM, Vijaya Bhaskar wrote:

Please guys,  Any ideas. I have not been able to fix the issue till now.

On Fri, Feb 6, 2015 at 1:33 PM, Vijaya Bhaskar
<vijayabhas...@sparksupport.com>
wrote:

Hi all,

I have an openstack setup with ceph as the storage backend and
linuxbridge plugin for networking. When I try to take a snapshot of
an instance that is running, the image is getting created, but it
gets deleted as soon as the creation starts. When I checked the nova
logs I found these:


Does this happen if you stop the instance and then snapshot it?
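
For reference, that test from the CLI would be roughly:

    nova stop <instance>
    nova image-create <instance> test-snapshot --poll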

Best,
-jay

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] [OpenStackClient] Adding Image 'member' commands

2015-01-06 Thread Jay Pipes

On 01/02/2015 03:10 PM, Dean Troyer wrote:

Glance has the concept of 'image members' as the mechanism for sharing
images between projects.  We need to add this to OSC and I'd like to
work out the commands/options to do so  A while back, markwash and I sat
down and sorted a simple set of additions to support the member
operations for both Image v1 and v2 APIs.  I promptly went on and
dropped this particular ball.

I suppose first of all, does there remain a need/desire to add these for
Image API v1?


Yes. There's many shops that still use the v1 Glance API. And, Nova 
itself, has little current support for v2 Glance API.


> The v2 set we came up with is much cleaner and I think is

highly preferable and if we can just leave OSC's Image v1 as-is I would
prefer to do that.

Conceptually, I see a shared image as an image with an attribute that is
a list of projects that it is shared with in addition to its home
(owner) project.  To maintain that list, two new options can be added to
'image create' and 'image set':

--share <project> - add <project> to the shared-with list for this image
--no-share <project> - remove <project> from the shared-with list
('image set' only)


Is image set equal to image update in v1 parlance?


Both --share and --no-share options may be repeated, much as the
--properties option works today.


I would prefer --share and --unshare or --add-member and 
--remove-member. Typically, --no- prefix on a CLI option means to 
disable a boolean option.



In addition, the 'receiving' project must ACK the sharing, which would
be an added option to 'image set':

--share ack - the magic value 'ACK' (case insensitive) signifies the
acceptance of a shared image by the 'receiving' project


IMHO. Would have been a bit nicer to have something like this:

 glance image member [confirm|decline] <image> [<project>]


A couple of new options are added to 'image list' to select shared images:

--shared - filter on shared images only
--project <project> - filter on <project> (this may imply --shared?)


Or:

 --shared-with=<project>


Some of the questions I have:

* Is --no-share the correct antonym of --share?  --unshare maybe?  We
have a pattern of using regular English words were possible
(enable|disable) rather than the GNU style of prepending 'no-' to
options, but that is my current backup.


Prefer --unshare to --no-share, but prefer --add-member/--remove-member 
to either :)



* Do we need an 'un-ACK' option for a 'receiving' project to remove the
shared image from their list without requiring the owner project to do
so?  Is this even possible in the Image v2 API?


See suggestion above:

 glance image member decline <image> [<project>]

Best,
-jay

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Openstack nova free disk issue

2015-01-06 Thread Jay Pipes

On 01/06/2015 03:56 AM, ppnaik wrote:

Hi All,
I have a multi node setup of openstack juno on centos 7. After I
instantiate multiple VMs and check the nova-compute.log on compute nodes
it shows a negative value for free disks even though the physical
system has a lot of free memory


I think you meant free *disk* :)


on the physical system. df -h

Filesystem   Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   50G  2.0G   49G   4% /
devtmpfs  16G 0   16G   0% /dev
tmpfs 16G 0   16G   0% /dev/shm
tmpfs 16G  281M   16G   2% /run
tmpfs 16G 0   16G   0% /sys/fs/cgroup
/dev/mapper/centos-home  865G   33M  865G   1% /home
/dev/sda1494M  137M  358M  28% /boot

nova-compute.log on compute node:

2015-01-06 11:00:28.756 8123 AUDIT nova.compute.resource_tracker [-]
Total physical ram (MB): 31911, total allocated virtual ram (MB): 17920
2015-01-06 11:00:28.756 8123 AUDIT nova.compute.resource_tracker [-]
Free disk (GB): -113
2015-01-06 11:00:28.757 8123 AUDIT nova.compute.resource_tracker [-]
Total usable vcpus: 16, total allocated vcpus: 10

What is the issue and how can I resolve it? Please help.


If you log into your SQL database for Nova and run the following query, 
what does it say?


SELECT SUM(root_gb + ephemeral_gb) AS total_gb
FROM instances
WHERE node = $compute_node;

Replace $compute_node with the value of the compute node's 
"hypervisor_hostname" field in the compute_nodes table.


Also, what is the value of your nova.conf instances_path option? From 
looking at the above, it looks like you have /home partitioned to 
contain most of the disk space, and / only has 49G available. If the 
default instances_path value is used (/var/lib/nova/instances for 
Debian-based systems, not sure about CentOS), then you will be using 
that 49G / mount and not the 865G /home mount.
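
If so, one fix is to point instances_path at the large /home mount; a
sketch (the exact directory is an assumption, and it must be owned by
the nova user):

    [DEFAULT]
    instances_path = /home/nova/instances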


Best,
-jay

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Use MYISAM storage engine

2014-12-28 Thread Jay Pipes

On 12/28/2014 04:36 PM, Hui Kang wrote:

Hi,
I have a MySQL database using MyISAM engine, instead of innodb. When I
install the openstack service, I can successfully create the database
such as keystone, glance.
However, when I run

su -s /bin/sh -c "keystone-manage db_sync" keystone

it reports the error

  CRITICAL keystone [-] NotSupportedError: (NotSupportedError) (1286,
"Unknown storage engine 'InnoDB'") 'ALTER TABLE `credential`
Engine=InnoDB' ()

I think the error is caused by the myisam database engine. So I am
wondering whether openstack support MyISAM storage engine. If so, how
can I configure? Thanks in advance.


No, please enable InnoDB and do not use the MyISAM storage engine.
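
For reference, in my.cnf that means something like:

    [mysqld]
    default-storage-engine = InnoDB
    # and make sure InnoDB is not disabled
    # (no skip-innodb / ignore-builtin-innodb lines)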

Best,
-jay

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Openstack capabilities

2014-12-22 Thread Jay Pipes

On 12/22/2014 11:20 AM, Eriane Leobrera wrote:

Hi OpenStack,

I would really appreciate it if anyone can assist me with my dilemma.
Below are the capabilities I am looking for. We are in the process of
deciding between OpenStack and CloudStack. Capabilities are the most
important thing for us: having everything integrated and automated.

Here is the list:


I've tried to give you honest answers from the OpenStack roadmap and 
currently-supported perspective below.



1. Integration with CRM 2013 (service tickets, lead/opportunity,
contact/account and billing system).


Not supported now. Not likely to ever be supported by OpenStack, too 
much of a custom feature.



2. Integration with payment gateway (Moneris)


Not supported now. Not likely to ever be supported by OpenStack, too 
much of a custom feature.



3. PCI Compliant solution


This is a giant rabbithole.


4. Audit trail enabled on all interactions (i.e. server shutdown etc.)


OpenStack currently supports notification queues that can be used to 
provide audit capabilities.



5. Support buying a new VM, signing up, auto charging them by credit card
verification and auto provisioning VMs.


Not supported now. Not likely to ever be supported by OpenStack, too 
much of a custom feature for the user interface, which is something that 
OpenStack's upstream UI (Horizon) is unlikely to develop.



6. Upgrade existing VMs with ability to reboot VM to have resources added


Currently supported. This is the resize/migrate operation in nova.


7. Cancel VM which stops billing and scheduling for VM to be removed for
x days later


Not currently supported. Possibly supported some time in the future if 
we decide it's worthwhile to put time-based constraints and reservations 
into the scheduler.



8. Support for pay by the minute billing


OpenStack does not ship a billing solution. This is something that is 
the responsibility of the operator, since it's a *very* custom feature 
and almost always involves proprietary code linking.



9. Support for multiple data centre locations


This is currently supported.


10. Supports a DNS Manager


This is currently supported with the Designate component.


11. Ability to do a hard shutdown on VM


This is currently supported.


12. Ability to have console level access


This is currently supported.


13. Spin up new instance which wipes old instance and ability to reinstall


This doesn't really make any sense. This isn't cloud. This is bare-metal 
hosting you are describing.


OpenStack's VMs are hosted in the cloud -- i.e. virtualized. When you 
terminate a VM, you lose the data on the VM's ephemeral storage, which 
is why for data that you need to keep around, you use volumes (block 
storage).



14. Supports backup manager


The snapshot operation and daily/weekly/hourly backup operations are 
currently supported via OpenStack Nova's API. However, if you're looking 
for some Windows GUI that does backups, that isn't something that 
OpenStack is about to provide.



15. Ability to change password for panel login via email or security
questions etc.


Changing passwords is currently supported. Security questions are a 
UI-specific thing and not something that is built-into OpenStack's APIs.



16. VM management window that shows RAM, CPU usage, IP, server name etc.
(dashboard)


This is currently supported.


17. Scheduled maintenance window that shows upcoming or passed (dashboard)


This is not supported.


18. Dashboard showing all VMs and current utilizations


This is currently supported.


19. One click install of defined software packages


Are you looking for an infrastructure service or a platform service? 
OpenStack's infrastructure services manage virtualized resources. 
Platform services, like the Murano project in Stackforge, can be used to 
interface with things like CloudFoundry to let you define software 
packages that would get installed on your virtual resources.



20. Monitoring management to turn on/off or silence alerts


This is not supported by OpenStack. This is something you can install 
yourself and use as you want.



21. Mobile support for rebooting VMs


This is not supported.


22. Security Threat Center


? We have a security advisory mailing list. But OpenStack is not Macafee 
Windows software.



23. Token tracking for resellers of our services


This is not supported.


I would really appreciate if anyone can take a time to put a yes/no/NA
next to each of the item on the list, it will definitely help me big
time. I tried reading and watching few videos but I would really like to
make sure as some of the items on the list are must haves.


It really sounds to me like you are looking for some all-in-one hosting 
solution, not really running your own cloud infrastructure. I'd 
recommend looking at just being a customer or reseller of one of the 
cloud providers like Rackspace Cloud, HP Cloud, Amazon Web Services, or 
Softlayer.


Best,
-jay


Thank you in advance.

Regards,

*Eriane Leobrera*

MANA

Re: [Openstack] Technical advantages of Openstack over Cloudstack

2014-12-08 Thread Jay Pipes
Hi Jordi, thank you SO much for this email. It is excellent feedback for 
our community and our developers. I've provided some comments inline, 
but overall just wanted to thank you for bringing some of these product 
needs to our attention.


On 12/03/2014 01:42 PM, Jordi Moles Blanco wrote:

Hi everyone,

I've been looking through the old messages in this list and I haven't
found this kind of information (sorry if it is present somewhere and I
couldn't find), so I decided to ask you because you are the experts on this.

We want to build a new cloud platform and we have been playing with both
options for a while.

There are plenty of articles where people give their opinion about which
stack technology is better, but they are more business-oriented than
technically-oriented.


Agreed. And, to be fair, we try not to promote the idea that there is 
always a one-size-fits-all solution for everybody's needs.


Both OpenStack and Cloudstack are solutions that work well for certain 
customers -- anybody who says one is a good solution and the other isn't 
is being dishonest or shallow.



I don't want to do that, I don't think there are good or bad players in
this game, just different options that you have to know very well before
you make your decision.


++


And that's why I'm asking you as Openstack experts. You see, I managed
to deploy a Cloudstack 4.4.1 platform with 2 compute nodes (for
live-migration testing) in less than 2 hours, while it took me days to
deploy an Openstack infrastructure that was functional and sometimes it
just breaks and I have to reboot some nodes or redeploy with Fuel.


This is an extremely common complaint about OpenStack. That it is just 
too difficult to install and configure a simple OpenStack environment 
with common compute, block storage, and networking functionality.


I could sit here and say that this problem is due to the fact that 
OpenStack's community has embraced each and every configuration 
management system, deployment architecture, and package management 
platform and therefore the complexity you find is simply due to the 
dizzying array of options and flexibility offered by the ecosystem.


But, of course, that would be a complete cop-out and terrible excuse. 
The fact is, our installation and deployment story is currently overly 
complicated, inconsistently documented, and difficult for newcomers to 
get their heads around. That needs to be fixed.



I know, I'm just an inexperienced Openstack user, but that is one of my
points: For any company that wants to go all the way to Openstack, it
may inevitably face a big transformation and I don't think that everyone
is ready for that. Sure, you do that because you want to change, you
want to be able to provide infrastructure much faster, but there are
other options that don't mean such a big change.


Agreed.



What I do care about is having a platform that eases the process of vm
provisioning and at the same time is easy to install, configure and
maintain.

Both platforms do that, but I feel that in order to do that, you need to
have a group of highly trained people in Openstack whose only job is
keeping the infrastructure running, while due to Cloudstack
architecture, It doesn't seem like you need the same kind of expertise.


Yes, completely agreed. It's something we need to do much better at.


If you don't want to dedicate resources, you can always pay for a
managed Openstack solution, but then you are outsourcing your platform
and, again, not everyone is ready for that, both for culture and pricing.

I've also read several times that Openstack is a more mature project,
with more features than other projects.

Here are some thoughts:

-As for vm provisioning, they both do that.
-Cloudstack also has something similar to Ceilometer.
-Cloudstack network management is also able to provide Network As a
service: vpn, lb, etc.
-Support for several commercial hypervisors on both.
-Orchestration tools on top of the stack. It is true that Openstack
comes with things like Heat, Juju or Openshift, but you can also use
Juju with instances from Cloudtack and there are things like Cloudstack
integration in Vagrant.


To be clear, the only thing directly related to OpenStack is Heat.

Juju is a tool from Canonical that can be used to install/deploy 
applications in various VMs. OpenShift is a platform from Red Hat that 
provides an application container system for developers to deploy their 
applications into a cloud infrastructure.


There is OpenStack "integration" with Vagrant via various things like 
devstack-vagrant:


https://github.com/openstack-dev/devstack-vagrant


-Both can integrate well with Amazon.
-Things like deploying Hadoop with a click from Horizon is great, but it
is virtualized and not suitable for all needs. Also, you can deploy
Hadoop with Juju on Cloudstack vms.

Obviously, I know pretty well what we will do with the Cloud
infrastructure: vm provisioning that will allow us to sell services to
end user

Re: [Openstack] Scheduler Filters Ignored

2014-11-28 Thread Jay Pipes

On 11/28/2014 11:22 AM, Georgios Dimitrakakis wrote:

Jay,

you were right!

If I remove the "availability zone" parameter then filters are applied!!!

Do you know if this is an expected behavior?


Honestly, the way our filter scheduler works with regards to aggregates 
is so wonky that I wouldn't be surprised to learn that this is 
"expected". :(


Try adjusting the cpu_allocation_ratio and ram_allocation_ratio of the 
host aggregate that has the same availability zone in its metadata. That 
should allow you to get the same behaviour when you use the availability 
zone scheduler hint in the boot command.
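
Those per-aggregate ratios live in the aggregate metadata, which the
AggregateCoreFilter/AggregateRamFilter then honour (assuming those
filters are enabled in scheduler_default_filters); a sketch with
made-up values:

    nova aggregate-set-metadata <aggregate> \
        cpu_allocation_ratio=16.0 ram_allocation_ratio=1.5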


Best,
-jay

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Scheduler Filters Ignored

2014-11-28 Thread Jay Pipes

On 11/27/2014 02:29 PM, Georgios Dimitrakakis wrote:

Does it has anything to do with the fact that I am specifically
requesting that node through the availability zone parameter?


If you run the boot command without the availability zone hint, does it 
change the behaviour?
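
In other words (the image, flavor and host names here are just
placeholders), compare something like:

  nova boot --image cirros --flavor m1.small \
    --availability-zone nova:node-01 testvm

with the same command minus the --availability-zone argument.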


-jay



___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] OpenStack Regions

2014-11-28 Thread Jay Pipes

On 11/28/2014 06:40 AM, Chris wrote:

Hello Robert,

thx for your answer! Do we need to create new admin/service tenants
for the new services in the new region, or should we use the old ones?


It's much easier to use the same ones, in my experience.

Best,
-jay

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Nova DB sync Operational error

2014-11-26 Thread Jay Pipes

On 11/26/2014 03:36 PM, Amit Anand wrote:

I also took a look at a command you ran and tried it. Would this be the
correct output (notice I didn't specify a DB in the command)?



Yep, that all looks correct to me. I'm a little unsure what else to 
investigate, frankly, Amit :( There's got to be *something* different 
between the connection information that is used by SQLAlchemy to connect 
to the MySQL database and the connection information you are using to 
connect via the command line. I just don't know what it might be :(


-jay

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Nova DB sync Operational error

2014-11-26 Thread Jay Pipes

On 11/26/2014 02:36 PM, Amit Anand wrote:

Same error - also tried with 127.0.0.1. Even crazier, I removed all the
keystone nova pieces (user, service, etc.), dropped the nova DB and
recreated it, then recreated keystone nova with a new, different
password and updated nova.conf with the new password, and I still get
the same error (notice below that nova now has the different password):


Permissions for a user are not affected by the removal of a database. 
You can even add permissions for a user to operate on a database that 
doesn't exist:


mysql> SHOW DATABASES;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| test               |
+--------------------+
4 rows in set (0.03 sec)

mysql> GRANT ALL ON foo.* TO root@localhost;
Query OK, 0 rows affected (0.00 sec)

mysql> GRANT ALL ON test.* TO root@localhost;
Query OK, 0 rows affected (0.00 sec)

mysql> USE mysql
Database changed
mysql> SELECT * FROM db WHERE User = 'root' AND Db = 'foo'\G
*************************** 1. row ***************************
                 Host: localhost
                   Db: foo
                 User: root
          Select_priv: Y
          Insert_priv: Y
          Update_priv: Y
          Delete_priv: Y
          Create_priv: Y
            Drop_priv: Y
           Grant_priv: N
      References_priv: Y
           Index_priv: Y
           Alter_priv: Y
Create_tmp_table_priv: Y
     Lock_tables_priv: Y
     Create_view_priv: Y
       Show_view_priv: Y
  Create_routine_priv: Y
   Alter_routine_priv: Y
         Execute_priv: Y
           Event_priv: Y
         Trigger_priv: Y
1 row in set (0.00 sec)

Go figure :)

If you manually specify the host on the command line, do you still get 
into the MySQL server?


i.e., if you do this on the command line, does it work?

mysql -unova -hlocalhost -p -Dnova
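
You could also sanity-check what MySQL thinks that user is allowed to do
(standard MySQL statements, nothing specific to your setup):

  mysql> SHOW GRANTS FOR 'nova'@'localhost';
  mysql> SHOW GRANTS FOR 'nova'@'%';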

Best,
-jay


MariaDB [mysql]> SELECT user,password,host FROM user;
+----------+-------------------------------------------+-----------+
| user     | password                                  | host      |
+----------+-------------------------------------------+-----------+
| root     | *7088873CEA983CB57491834389F9BB9369B9D756 | localhost |
| root     | *7088873CEA983CB57491834389F9BB9369B9D756 | 127.0.0.1 |
| root     | *7088873CEA983CB57491834389F9BB9369B9D756 | ::1       |
| keystone | *7088873CEA983CB57491834389F9BB9369B9D756 | %         |
| keystone | *7088873CEA983CB57491834389F9BB9369B9D756 | localhost |
| glance   | *7088873CEA983CB57491834389F9BB9369B9D756 | localhost |
| glance   | *7088873CEA983CB57491834389F9BB9369B9D756 | %         |
| nova     | *3DA97D7423D54524806BFF6A19D94F78EEF97338 | localhost |
| nova     | *3DA97D7423D54524806BFF6A19D94F78EEF97338 | %         |
| root     | *7088873CEA983CB57491834389F9BB9369B9D756 | %         |
+----------+-------------------------------------------+-----------+
10 rows in set (0.00 sec)


On Wed, Nov 26, 2014 at 2:26 PM, Jay Pipes <jaypi...@gmail.com> wrote:

On 11/26/2014 02:21 PM, Amit Anand wrote:

Hi Jay - I believe so, below is the part that is in the nova.conf:

# The SQLAlchemy connection string used to connect to the
# bare-metal database (string value)
connection=mysql://nova:PASSWORD@controller/nova

The PASSWORD is exactly the same as what I have in the conf file
and what I have in the nova.conf.

I'm doing this manually via the Juno installation guide for CentOS 7.


try:

connection=mysql://nova:PASSWORD@localhost/nova

Best,
-jay




___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Nova DB sync Operational error

2014-11-26 Thread Jay Pipes

On 11/26/2014 02:21 PM, Amit Anand wrote:

Hi Jay - I believe so, below is the part that is in the nova.conf:

# The SQLAlchemy connection string used to connect to the
# bare-metal database (string value)
connection=mysql://nova:PASSWORD@controller/nova

The PASSWORD is exactly the same as what I have in the conf file and what
I have in the nova.conf.

I'm doing this manually via the Juno installation guide for CentOS 7.


try:

connection=mysql://nova:PASSWORD@localhost/nova

Best,
-jay

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Nova DB sync Operational error

2014-11-26 Thread Jay Pipes

On 11/25/2014 05:10 PM, Amit Anand wrote:

Hi all,

Setup: 3 nodes, one controller, one compute and one network, all on
separate machines. Juno install.

Recreating my Juno install, and this is the second time I've gotten this
error when running "su -s /bin/sh -c "nova-manage db sync" nova" (I got
it in another install so I started over):

2014-11-25 16:16:51.707 3238 CRITICAL nova [-] OperationalError:
(OperationalError) (1045, "Access denied for user 'nova'@'localhost'
(using password: YES)") None None

I am confident that all is OK with my nova.conf (I'll be happy to send
it) - I think this may be a bug within OpenStack, as this is the second
time I have gotten the exact same error on a totally new install. I've
checked MariaDB and all looks good there:

MariaDB [mysql]> SELECT user,password,host FROM user;
+----------+-------------------------------------------+-----------+
| user     | password                                  | host      |
+----------+-------------------------------------------+-----------+
| root     | *7088873CEA983CB57491834389F9BB9369B9D756 | localhost |
| root     | *7088873CEA983CB57491834389F9BB9369B9D756 | 127.0.0.1 |
| root     | *7088873CEA983CB57491834389F9BB9369B9D756 | ::1       |
| keystone | *7088873CEA983CB57491834389F9BB9369B9D756 | %         |
| keystone | *7088873CEA983CB57491834389F9BB9369B9D756 | localhost |
| glance   | *7088873CEA983CB57491834389F9BB9369B9D756 | localhost |
| glance   | *7088873CEA983CB57491834389F9BB9369B9D756 | %         |
| nova     | *7088873CEA983CB57491834389F9BB9369B9D756 | localhost |
| nova     | *7088873CEA983CB57491834389F9BB9369B9D756 | %         |
| root     | *7088873CEA983CB57491834389F9BB9369B9D756 | %         |
+----------+-------------------------------------------+-----------+

I'm using the exact same password for all my accounts in this DEV
environment to keep it simple.

I can also connect just fine from command line:

[root@controller nova]# mysql -u nova -p nova
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 11
Server version: 5.5.40-MariaDB MariaDB Server

Copyright (c) 2000, 2014, Oracle, Monty Program Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input
statement.

MariaDB [nova]>

So I have no idea what is wrong. I've dropped the database and added it
back, following the instructions in the install guide again, and got the
same error. I was hoping anyone had any pointers before I go ahead and
report this as a bug. I've tried everything that Ask OpenStack answers
have shown, and no go. I am surprised that I'm the only one getting this
error repeatedly... I see that root % is in there twice and have no idea
how to get rid of it, but since it's using the nova user anyway, I don't
see how that would affect anything. Please help!! Thanks!


Are you absolutely sure that the user and password in your nova.conf 
file (the [database] sql_connection string) are the same as what you 
are typing on the command line?


How are you deploying your OpenStack environment? Using a configuration 
management tool or manually?


Best,
-jay

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Second glance server

2014-11-03 Thread Jay Pipes

On 11/04/2014 04:34 AM, Chris wrote:

Hello,

We use OpenStack in one of our DC locations (location A). Now we want to
have compute nodes in other locations (location B).

In location B we want to have just compute nodes and an additional
glance server, to prevent image transfers from location A to B.

What is the best practice for having two glance servers in OpenStack,
and what configuration changes need to be made so that the compute nodes
access only the local glance server?


Just launch a new Glance API and registry server in location B, and 
point the nova.conf and cinder.conf's:


glance_api_servers=

configuration options to the local server.
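
For example (the hostname and port here are made-up placeholders):

  glance_api_servers=glance-b.example.com:9292

in the nova.conf and cinder.conf used by the location B nodes.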

Best,
-jay


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Randomly rebooting instances go in kernel panic

2014-10-30 Thread Jay Pipes

On 10/30/2014 08:41 AM, Fabrizio Soppelsa wrote:

Do you have suggestions on how to further troubleshoot such an issue?

[1]
Oct 27 13:08:46 lc-20 kernel: ctx4008000f: no IPv6 routers present
Oct 27 13:10:03 lc-20 kernel: tipc: Resetting link
<1.1.20:ethSw0-1.1.10:ethSw0>, peer not responding
Oct 27 13:10:03 lc-20 kernel: tipc: Lost link
<1.1.20:ethSw0-1.1.10:ethSw0> on network plane A
Oct 27 13:10:03 lc-20 kernel: tipc: Lost contact with <1.1.10>
Oct 27 13:10:14 lc-20 kernel: tipc: Established link


Hi! Seems to me that there are some communication issues on the ethSw0 
interface. Probably worth checking that networking is reliable over that 
link.


Additionally, you may want to check with a TIPC expert for whichever 
vendor is supplying your kernel (is this Windriver VxWorks?)


Best,
-jay

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] [openstack-dev] No one replying on tempest issue? Please share your experience

2014-09-29 Thread Jay Pipes

On 09/29/2014 06:51 AM, Nikesh Kumar Mahalka wrote:

How to get nova-compute logs in juno devstack?


If you set the following configuration option in your devstack localrc, 
all the log files from the different screen session'd services will end 
up in the $LOGDIR directory:


SCREEN_LOGDIR=$LOGDIR
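
A minimal localrc sketch (the path is just an example):

  LOGDIR=/opt/stack/logs
  SCREEN_LOGDIR=$LOGDIR

After re-stacking, the nova-compute output ends up in the per-service
screen log (e.g. screen-n-cpu.log) under that directory.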

Best,
-jay

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] [nova] [policy] hypervisor list

2014-09-16 Thread Jay Pipes

On 09/15/2014 05:02 AM, Abbass MAROUNI wrote:

Thanks Jay,

Does it require any admin rights to do a custom query on the Nova
database? And if so, do you know where to look for such a query? Which
part of the nova code should be included in the filter?


I'm not talking about doing something via the public HTTP API... I'm 
talking about making a direct database query for the hypervisor 
information directly from your Cinder scheduler filter class.
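
As a rough sketch of the kind of query I mean (table and column names
are from the nova schema of that era; verify them against your release):

  SELECT hypervisor_hostname, vcpus, vcpus_used, memory_mb
  FROM compute_nodes
  WHERE deleted = 0;

run from a SQLAlchemy session inside the filter class, rather than going
through the os-hypervisors extension.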


Best,
jay


Best Regards,


On 09/13/2014 02:00 PM, openstack-requ...@lists.openstack.org wrote:

OK, so not a normal user, but instead a service user (service being
cinder itself).

If this is a custom filter, I'd just go ahead and have the Cinder filter
do a custom query on the Nova database instead of using the
os-hypervisors API extension. I know that sounds like it's just adding
technical debt to your solution, but honestly, the os-hypervisors API
extension uses some questionable "queries" to construct its output.
Specifically, it queries all the Service objects, and then outputs the
first compute node matching the service_id in the compute_nodes table.
If there's >1 compute node on the host, the behaviour is entirely
undefined in the API extension.

Best,
-jay





___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Scheduler try and error with compute nodes in different subnets/physnet's

2014-09-15 Thread Jay Pipes

On 09/16/2014 02:07 AM, Chris wrote:

Hello Jay,
As far as I understand, each compute node sends its currently available
resources to the management node.
This could also include the physnet, so the scheduler could ignore the
non-matching compute nodes. Or the compute nodes could send the physnet at
first registration with the management node.
But to be honest, I don't have the deep insight to make a proper suggestion
here.


The physnet isn't really a resource though. I think the only thing I can 
think of is perhaps you should use host aggregates and group your 
like-physnetted compute nodes into different host aggregates. The 
problem I still see with that is that there is no way for the scheduler 
to understand which physnet an instance "belongs on"...
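
Something along these lines (the aggregate and host names are made up):

  nova aggregate-create physnet-a
  nova aggregate-add-host physnet-a compute-01
  nova aggregate-create physnet-b
  nova aggregate-add-host physnet-b compute-17

That at least gives the scheduler a grouping to work with, even though it
still can't map an instance to a physnet by itself.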


-jay


-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com]
Sent: Tuesday, September 16, 2014 11:54
To: openstack@lists.openstack.org
Subject: Re: [Openstack] Scheduler try and error with compute nodes in
different subnets/physnet's

On 09/15/2014 11:26 PM, Chris wrote:

Hello,

We have an OpenStack setup with a large number of compute nodes spread
across different subnets, which are represented as different physnets in
OpenStack.


When we start an instance, we see that the scheduler chooses a compute
node and tries to spawn the instance; then it sees it's not in the
right physnet and chooses a different compute node.


How would the scheduler know which compute node is "in the right physnet"?
In other words, when an instance is launched, how does the scheduler know
what is the "correct physnet" for that type of instance?


In our case this takes around 4 - 10 attempts. All these attempts
count against "scheduler_max_attempts", whose default value is 3. We
increased this value to prevent errors, but it's still very inefficient,
especially in a large environment.

Is there a way that the scheduler knows the physnet/subnet position of
the compute nodes before the instance tries to spawn?




___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack





___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Scheduler try and error with compute nodes in different subnets/physnet's

2014-09-15 Thread Jay Pipes

On 09/15/2014 11:26 PM, Chris wrote:

Hello,

We have an OpenStack setup with a large number of compute nodes spread across
different subnets, which are represented as different physnets in OpenStack.

When we start an instance, we see that the scheduler chooses a compute
node and tries to spawn the instance; then it sees it's not in the right
physnet and chooses a different compute node.


How would the scheduler know which compute node is "in the right 
physnet"? In other words, when an instance is launched, how does the 
scheduler know what is the "correct physnet" for that type of instance?



In our case this takes around 4 – 10 attempts. All these attempts count
against "scheduler_max_attempts", whose default value is 3. We increased
this value to prevent errors, but it's still very inefficient, especially
in a large environment.

Is there a way that the scheduler knows the physnet/subnet position of
the compute nodes before the instance tries to spawn?




___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


  1   2   >