Re: [Openstack] Juno Summit: VMware + OpenStack: Accelerating OpenStack in the Enterprise

2014-05-16 Thread Eric Brown
https://www.youtube.com/watch?v=3WqFTPgNRGg


On May 16, 2014, at 3:39 PM, Michael Gale  wrote:

> Hey,
> 
> I am looking for the video of this session. I attended it on Wednesday 
> and it gave a great look at how VMware can integrate with OpenStack:
> 
> http://openstacksummitmay2014atlanta.sched.org/event/2af26570126221f64af8e0ab891b9a33#.U3ZohXWx1dk
> 
> VMware + OpenStack: Accelerating OpenStack in the Enterprise
> 
> This type of transition can really help my company adopt OpenStack.
> 
> Thanks
> Michael
> ___
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[Openstack] Is there any way to get the user's activities?

2014-05-16 Thread LIU Yulong
Hi all,


Is there any way to get the user's activities?


Ceilometer is a metering service; can it get the user's activities?


Such as the table below,


Action           |   Status       |   Time
Create instance  |   Successful   |   2014-5-16 9:00am
Create instance  |   error        |   2014-5-16 9:10am
delete instance  |   Successful   |   2014-5-16 9:20am
upload image     |   Successful   |   2014-5-16 9:30am
delete image     |   Successful   |   2014-5-16 9:30am


...
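(In case it helps: nova itself keeps a per-instance action log that is close to
the table above. A hedged example, the server name is a placeholder:

 $ nova instance-action-list <server-name-or-uuid>

This lists actions such as create, reboot and delete with their request IDs and
start times.)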




Best Regards,
LIU Yulong
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[Openstack] rdo havana to icehouse: instances stuck in 'resized or migrated'

2014-05-16 Thread Dimitri Maziuk
Hi all.

Upgrading centos 6/rdo from havana to icehouse: I've updated all
services on the controller as per the fine manual, then updated one of
the 3 compute nodes.

Now I'm trying to migrate the (shut off) instances from the havana compute
node to the icehouse compute node and they get stuck in
"resize_migrated". On the first one I ran "reset-state --active", renamed
/var/lib/nova/instances/_resize and was then able to issue a 'hard
reboot' from the dashboard.

Now I have the 2nd one stuck in the same state. I restarted compute with
debug enabled on the target (icehouse) compute node, and there's nothing
interesting there, so the problem is somewhere else. In fact, it looks
like the only problem is that the "confirm resize" button isn't popping up
in the dashboard -- /var/lib/nova/instances is nfs-mounted, so there's no
actual data resizing/migration necessary.

I really don't want to go through the reset-state/rename dir dance for
every instance I have.
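(For reference, the per-instance dance looks roughly like this -- the UUID and
the exact rename are placeholders, and "nova resize-confirm" is the CLI
equivalent of the dashboard button when it does show up:

 $ nova reset-state --active <instance-uuid>
 $ mv /var/lib/nova/instances/<instance-uuid>_resize \
      /var/lib/nova/instances/<instance-uuid>_resize.bak   # or whatever rename applies
 $ nova reboot --hard <instance-uuid>

 $ nova resize-confirm <instance-uuid>   # when the confirm step is actually offered
)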

Any suggestions? Anyone done the havana-to-icehouse dance with 2 compute
nodes?

-- 
Dimitri Maziuk
Programmer/sysadmin
BioMagResBank, UW-Madison -- http://www.bmrb.wisc.edu



signature.asc
Description: OpenPGP digital signature
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Storage Multi Tenancy

2014-05-16 Thread Nirlay Kundu



This can be done the following way: since the Cinder scheduler allows you to set 
multiple filters, you could potentially use one of them, say the availability 
zone filter, for this. Essentially, create a separate availability zone for each 
storage pool -- one for the shared ceph cluster, one for the tenant's own pool, 
etc. -- and specify it during nova boot to ensure the appropriate 
pool/availability zone is selected. There are also storage-based options for 
multi-tenancy that are built natively into storage arrays such as HP's 3PAR; 
you could try those as well. 
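A rough sketch of the availability-zone approach -- zone names, pool names and
the config layout are illustrative assumptions, not tested config:

 # cinder.conf on the cinder-volume node serving the shared ceph pool
 storage_availability_zone = az-ceph-shared
 volume_driver = cinder.volume.drivers.rbd.RBDDriver
 rbd_pool = volumes-shared

 # cinder.conf on the cinder-volume node serving a tenant's dedicated pool
 storage_availability_zone = az-tenant-a
 volume_driver = cinder.volume.drivers.rbd.RBDDriver
 rbd_pool = volumes-tenant-a

 # pick the zone explicitly when creating volumes/instances
 $ cinder create --availability-zone az-tenant-a --display-name vol1 10
 $ nova boot --availability-zone az-tenant-a --image <image> --flavor <flavor> tenant-a-vm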
Hope this helps.
Nirlay

 Date: Fri, 16 May 2014 16:14:34 +0200
From: jer...@mediacaster.nl
To: openstack@lists.openstack.org
Subject: [Openstack] Storage Multi Tenancy

Hello,
Currently I am integrating my ceph cluster into OpenStack using Ceph's RBD. 
I'd like to store my KVM virtual machines on pools that I have made on the ceph 
cluster. I would like to have multiple storage solutions for multiple 
tenants. Currently, when I launch an instance it is placed on the Ceph pool 
that has been defined in the cinder.conf file of my OpenStack controller node. 
If you set up a multi-storage backend for cinder, the scheduler determines 
which storage backend is used without looking at the tenant. 
What I would like is for the instance/VM launched by a specific tenant to have 
two choices: either use a shared Ceph pool or have its own pool. Another option 
might even be a tenant having its own ceph cluster. When the instance is 
launched on a shared pool, a dedicated pool or even another cluster, I would 
also like the extra volumes that are created to have the same option. 
Data needs to be isolated from other tenants and users, so being able to choose 
other pools/clusters would be nice. Is this goal achievable, or is it 
impossible? If it's achievable, could I please have some assistance in doing so? 
Has anyone ever done this before?
I would like to thank you in advance for reading this lengthy e-mail. If there's 
anything that is unclear, please feel free to ask.
Best Regards,
Jeroen van Leur
-- Infitialis
Sent with Airmail
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack 
  ___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[Openstack] Juno Summit: VMware + OpenStack: Accelerating OpenStack in the Enterprise

2014-05-16 Thread Michael Gale
Hey,

I am looking for the video of this session. I attended it on Wednesday
and it gave a great look at how VMware can integrate with OpenStack:

http://openstacksummitmay2014atlanta.sched.org/event/2af26570126221f64af8e0ab891b9a33#.U3ZohXWx1dk

VMware + OpenStack: Accelerating OpenStack in the Enterprise

This type of transition can really help my company adopt OpenStack.

Thanks
Michael
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Using Nova client with SSH SOCKS proxy

2014-05-16 Thread Adrian Smith
Thanks guys. I got it working using proxychains (tsocks isn't readily
available under brew on OSX).
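(For the archives, a minimal proxychains setup might look like the following --
the port matches the earlier ssh -D example, and the config path can differ per
install:

 # ~/.proxychains/proxychains.conf (or /etc/proxychains.conf)
 strict_chain
 [ProxyList]
 socks5 127.0.0.1 13392

 $ ssh -D 13392 user@intermediary-box
 $ proxychains4 nova list
)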


On 16 May 2014 17:24, Andriy Yurchuk  wrote:
> Hi!
>
> SOCKS proxy is not an HTTP proxy so setting HTTP_PROXY environment variable 
> won't work. Rather try something like tsocks: 
> http://tsocks.sourceforge.net/index.php
>
> On May 16, 2014, at 7:10 PM, Adrian Smith  wrote:
>
>> Yes,
>>
>> $ grep localhost /etc/hosts
>> # localhost is used to configure the loopback interface
>> 127.0.0.1       localhost
>> ::1             localhost
>> fe80::1%lo0     localhost
>>
>>
>>
>> On 16 May 2014 17:05, Clark, Robert Graham  wrote:
>>> Is localhost listed in your /etc/hosts ?
>>>
>>> Maybe try with HTTP_PROXY=http://127.0.0.1:13392 - just in case.
>>>
>>> On 16/05/2014 11:41, "Adrian Smith"  wrote:
>>>
 To access my controller I need to go through a intermediary box.

 I've created a local SOCKS proxy by ssh'ing to this intermediary with
 the parameters -D 13392.

 I then set the environment variable,
 export HTTP_PROXY=http://localhost:13392

 But using "nova list" just gives an error,
 $ nova list
 ERROR: HTTPConnectionPool(host='localhost', port=13392): Max retries
 exceeded with url: http://x.x.x.x:5000/v2.0/tokens (Caused by <class 'httplib.BadStatusLine'>: '')

 Should this work? Am I doing something wrong?

 Adrian

 ___
 Mailing list:
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
 Post to : openstack@lists.openstack.org
 Unsubscribe :
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>>
>>
>> ___
>> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> Post to : openstack@lists.openstack.org
>> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Speaking engagements in US/EU?

2014-05-16 Thread Adrien Cunin
On 09/05/2014 15:52, Adam Lawson wrote:
> Hey folks,
> 
> Curious if there is a list being maintained somewhere where speakers are
> needed or can submit talks? I want to keep up on my presentation skills
> and wish there was a centralized list of the notable
> conferences/events looking for cloud-related speakers. It's hard to keep
> searching for and tracking deadlines
> for them all individually.
> 
> Anyone have such a list they're maintaining for 2014?

Hi,

I suggest you subscribe to the community mailing-list. People announce
local OpenStack events there, and calls for papers for related events are
also relayed.

Adrien



signature.asc
Description: OpenPGP digital signature
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Using Nova client with SSH SOCKS proxy

2014-05-16 Thread Adrian Smith
I tried with 127.0.0.1 but the same error,

$ nova list
ERROR: HTTPConnectionPool(host='127.0.0.1', port=13392): Max retries
exceeded with url: http://x.x.x.x:5000/v2.0/tokens (Caused by <class 'httplib.BadStatusLine'>: '')

On 16 May 2014 17:10, Adrian Smith  wrote:
> Yes,
>
> $ grep localhost /etc/hosts
> # localhost is used to configure the loopback interface
> 127.0.0.1       localhost
> ::1             localhost
> fe80::1%lo0     localhost
>
>
>
> On 16 May 2014 17:05, Clark, Robert Graham  wrote:
>> Is localhost listed in your /etc/hosts ?
>>
>> Maybe try with HTTP_PROXY=http://127.0.0.1:13392 - just in case.
>>
>> On 16/05/2014 11:41, "Adrian Smith"  wrote:
>>
>>>To access my controller I need to go through a intermediary box.
>>>
>>>I've created a local SOCKS proxy by ssh'ing to this intermediary with
>>>the parameters -D 13392.
>>>
>>>I then set the environment variable,
>>> export HTTP_PROXY=http://localhost:13392
>>>
>>>But using "nova list" just gives an error,
>>>$ nova list
>>>ERROR: HTTPConnectionPool(host='localhost', port=13392): Max retries
>>>exceeded with url: http://x.x.x.x:5000/v2.0/tokens (Caused by <class 'httplib.BadStatusLine'>: '')
>>>
>>>Should this work? Am I doing something wrong?
>>>
>>>Adrian
>>>
>>>___
>>>Mailing list:
>>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>>Post to : openstack@lists.openstack.org
>>>Unsubscribe :
>>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Using Nova client with SSH SOCKS proxy

2014-05-16 Thread Dean Troyer
On Fri, May 16, 2014 at 10:41 AM, Adrian Smith  wrote:

> To access my controller I need to go through a intermediary box.
>
> I've created a local SOCKS proxy by ssh'ing to this intermediary with
> the parameters -D 13392.
>
> I then set the environment variable,
>  export HTTP_PROXY=http://localhost:13392
>
> But using "nova list" just gives an error,
> $ nova list
> ERROR: HTTPConnectionPool(host='localhost', port=13392): Max retries
> exceeded with url: http://x.x.x.x:5000/v2.0/tokens (Caused by <class 'httplib.BadStatusLine'>: '')
>
> Should this work? Am I doing something wrong?
>

The nova client lib uses requests/urllib3 for HTTP, and those libraries do not
support SOCKS proxies out of the box.  There have been some forks/patches to
add that to requests or urllib3, but we have not tested those.

dt

-- 

Dean Troyer
dtro...@gmail.com
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Using Nova client with SSH SOCKS proxy

2014-05-16 Thread Adrian Smith
Yes,

$ grep localhost /etc/hosts
# localhost is used to configure the loopback interface
127.0.0.1       localhost
::1             localhost
fe80::1%lo0     localhost



On 16 May 2014 17:05, Clark, Robert Graham  wrote:
> Is localhost listed in your /etc/hosts ?
>
> Maybe try with HTTP_PROXY=http://127.0.0.1:13392 - just in case.
>
> On 16/05/2014 11:41, "Adrian Smith"  wrote:
>
>>To access my controller I need to go through a intermediary box.
>>
>>I've created a local SOCKS proxy by ssh'ing to this intermediary with
>>the parameters -D 13392.
>>
>>I then set the environment variable,
>> export HTTP_PROXY=http://localhost:13392
>>
>>But using "nova list" just gives an error,
>>$ nova list
>>ERROR: HTTPConnectionPool(host='localhost', port=13392): Max retries
>>exceeded with url: http://x.x.x.x:5000/v2.0/tokens (Caused by <class 'httplib.BadStatusLine'>: '')
>>
>>Should this work? Am I doing something wrong?
>>
>>Adrian
>>
>>___
>>Mailing list:
>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>Post to : openstack@lists.openstack.org
>>Unsubscribe :
>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Using Nova client with SSH SOCKS proxy

2014-05-16 Thread Andriy Yurchuk
Hi!

A SOCKS proxy is not an HTTP proxy, so setting the HTTP_PROXY environment 
variable won't work. Rather, try something like tsocks: 
http://tsocks.sourceforge.net/index.php
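(A minimal tsocks setup, assuming the same local port 13392 from the ssh -D
command; the config path may differ per distro:

 # /etc/tsocks.conf
 server = 127.0.0.1
 server_port = 13392
 server_type = 5

 $ ssh -D 13392 user@intermediary-box
 $ tsocks nova list
)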

On May 16, 2014, at 7:10 PM, Adrian Smith  wrote:

> Yes,
> 
> $ grep localhost /etc/hosts
> # localhost is used to configure the loopback interface
> 127.0.0.1       localhost
> ::1             localhost
> fe80::1%lo0     localhost
> 
> 
> 
> On 16 May 2014 17:05, Clark, Robert Graham  wrote:
>> Is localhost listed in your /etc/hosts ?
>> 
>> Maybe try with HTTP_PROXY=http://127.0.0.1:13392 - just in case.
>> 
>> On 16/05/2014 11:41, "Adrian Smith"  wrote:
>> 
>>> To access my controller I need to go through a intermediary box.
>>> 
>>> I've created a local SOCKS proxy by ssh'ing to this intermediary with
>>> the parameters -D 13392.
>>> 
>>> I then set the environment variable,
>>> export HTTP_PROXY=http://localhost:13392
>>> 
>>> But using "nova list" just gives an error,
>>> $ nova list
>>> ERROR: HTTPConnectionPool(host='localhost', port=13392): Max retries
>>> exceeded with url: http://x.x.x.x:5000/v2.0/tokens (Caused by <class 'httplib.BadStatusLine'>: '')
>>> 
>>> Should this work? Am I doing something wrong?
>>> 
>>> Adrian
>>> 
>>> ___
>>> Mailing list:
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>> Post to : openstack@lists.openstack.org
>>> Unsubscribe :
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> 
> 
> ___
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Using Nova client with SSH SOCKS proxy

2014-05-16 Thread Clark, Robert Graham
Is localhost listed in your /etc/hosts ?

Maybe try with HTTP_PROXY=http://127.0.0.1:13392 - just in case.

On 16/05/2014 11:41, "Adrian Smith"  wrote:

>To access my controller I need to go through a intermediary box.
>
>I've created a local SOCKS proxy by ssh'ing to this intermediary with
>the parameters -D 13392.
>
>I then set the environment variable,
> export HTTP_PROXY=http://localhost:13392
>
>But using "nova list" just gives an error,
>$ nova list
>ERROR: HTTPConnectionPool(host='localhost', port=13392): Max retries
>exceeded with url: http://x.x.x.x:5000/v2.0/tokens (Caused by <class 'httplib.BadStatusLine'>: '')
>
>Should this work? Am I doing something wrong?
>
>Adrian
>
>___
>Mailing list: 
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>Post to : openstack@lists.openstack.org
>Unsubscribe : 
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[Openstack] Using Nova client with SSH SOCKS proxy

2014-05-16 Thread Adrian Smith
To access my controller I need to go through an intermediary box.

I've created a local SOCKS proxy by ssh'ing to this intermediary with
the parameters -D 13392.

I then set the environment variable,
 export HTTP_PROXY=http://localhost:13392

But using "nova list" just gives an error,
$ nova list
ERROR: HTTPConnectionPool(host='localhost', port=13392): Max retries
exceeded with url: http://x.x.x.x:5000/v2.0/tokens (Caused by <class 'httplib.BadStatusLine'>: '')

Should this work? Am I doing something wrong?

Adrian

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] nova compute repeating logs

2014-05-16 Thread Dimitri Maziuk

On 5/16/2014 6:00 AM, sonia verma wrote:

Hi

I'm trying to boot a VM from my controller node (openstack dashboard) onto a
compute node but it is getting stuck in the spawning state.
I'm able to see the VM interface on the compute node but the status is still
spawning even after 10-15 minutes.


Check /var/lib/nova/instances/ on the compute; is it writing a 
multi-gigabyte "ephemeral" disk image at ~2GB/day?
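(A quick way to check -- on a devstack compute node the path may be
/opt/stack/data/nova/instances instead, and the UUID is a placeholder:

 $ du -sh /var/lib/nova/instances/<instance-uuid>/
 $ ls -lh /var/lib/nova/instances/<instance-uuid>/disk*

Run it a couple of minutes apart and see whether the ephemeral disk file keeps
growing.)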


Dima


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[Openstack] Storage Multi Tenancy

2014-05-16 Thread jeroen
Hello,

Currently I am integrating my ceph cluster into OpenStack using Ceph's RBD. 
I'd like to store my KVM virtual machines on pools that I have made on the ceph 
cluster.
I would like to have multiple storage solutions for multiple tenants. Currently, 
when I launch an instance it is placed on the Ceph pool that has been defined in 
the cinder.conf file of my OpenStack controller node. If you set up a 
multi-storage backend for cinder, the scheduler determines which storage backend 
is used without looking at the tenant. 
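(For reference, a rough sketch of what such a multi-backend cinder.conf could 
look like -- section names, pool names and volume types are illustrative 
assumptions, and tying a type to a tenant still needs quotas/policy on top:

 # cinder.conf
 enabled_backends = rbd-shared,rbd-tenant-a

 [rbd-shared]
 volume_driver = cinder.volume.drivers.rbd.RBDDriver
 rbd_pool = volumes-shared
 volume_backend_name = RBD_SHARED

 [rbd-tenant-a]
 volume_driver = cinder.volume.drivers.rbd.RBDDriver
 rbd_pool = volumes-tenant-a
 volume_backend_name = RBD_TENANT_A

 $ cinder type-create shared
 $ cinder type-key shared set volume_backend_name=RBD_SHARED
 $ cinder type-create tenant-a
 $ cinder type-key tenant-a set volume_backend_name=RBD_TENANT_A
 $ cinder create --volume-type tenant-a 10
)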

What I would like is for the instance/VM launched by a specific tenant to have 
two choices: either use a shared Ceph pool or have its own pool. Another option 
might even be a tenant having its own ceph cluster. When the instance is 
launched on a shared pool, a dedicated pool or even another cluster, I would 
also like the extra volumes that are created to have the same option. 

Data needs to be isolated from other tenants and users, so being able to choose 
other pools/clusters would be nice. 
Is this goal achievable, or is it impossible? If it's achievable, could I please 
have some assistance in doing so? Has anyone ever done this before?

I would like to thank you in advance for reading this lengthy e-mail. If there's 
anything that is unclear, please feel free to ask.

Best Regards,

Jeroen van Leur

-- 
Infitialis
Sent with Airmail
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] nova-compute and cinder-scheduler HA

2014-05-16 Thread Jay Pipes

On 05/14/2014 02:49 PM, Сергей Мотовиловец wrote:

Hello everyone!


Hi Motovilovets :) Comments and questions for you inline...


I'm facing some troubles with nova and cinder here.

I have 2 control nodes (active/active) in my testing environment with
Percona XtraDB cluster (Galera+xtrabackup) + garbd on a separate node
(to avoid split-brain) + OpenStack Icehouse, latest from Ubuntu 14.04
main repo.

The problem is horizontal scalability of nova-conductor and
cinder-scheduler services, seems like all active instances of these
services are trying to execute same MySQL queries they get from
Rabbit, which leads to numerous deadlocks in a set-up with Galera.


Are you using RabbitMQ in clustered mode? Also, how are you doing your 
load balancing? Do you use HAProxy or some appliance? Do you have sticky 
sessions enabled for your load balancing?



In case when multiple nova-conductor services are running (and using
MySQL instances on corresponding control nodes) it appears as "Deadlock
found when trying to get lock; try restarting transaction" in log.
With cinder-scheduler it leads to "InvalidBDM: Block Device Mapping is
Invalid."


So, it's not actually a deadlock that is occurring... unless I'm 
mistaken (I've asked a Percona engineer to take a look at this thread to 
double-check me), the error about "Deadlock found..." is actually *not* 
a deadlock. It's just that Galera uses the same InnoDB error code as a 
normal deadlock to indicate that the WSREP certification process has 
timed out between the cluster nodes. Would you mind pastebin'ing your 
wsrep.cnf and my.cnf files for us to take a look at? I presume that you 
do not have much latency between the cluster nodes (i.e. they are not 
over a WAN)... let me know if that is not the case.
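(For reference, the wsrep-related settings that usually matter here look
something like the following -- addresses, names and the provider path are
placeholders, not a recommendation:

 [mysqld]
 binlog_format = ROW
 innodb_autoinc_lock_mode = 2
 wsrep_provider = /usr/lib/libgalera_smm.so
 wsrep_cluster_name = openstack_db
 wsrep_cluster_address = gcomm://10.0.0.11,10.0.0.12,10.0.0.13
 wsrep_sst_method = xtrabackup-v2
 wsrep_retry_autocommit = 4   # retry autocommit statements that fail certification
)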


It would also be helpful to see your rabbit and load balancer configs if 
you can pastebin those, too.



Is there any possible way to make multiple instances of these services
running simultaneously and not duplicating queries?


Yes, it most certainly is. At AT&T, we ran much bigger Galera clusters with 
absolutely no problems from this cert-timeout issue that manifests itself as a 
deadlock, so I know it's definitely possible to have a clean, performant, 
multi-writer Galera solution for OpenStack. :)


Best,
-jay


(I don't really like the idea of handling this with Heartbeat+Pacemaker
or other similar stuff, mostly because I'm thinking about equal load
distribution across control nodes, but in this case it seems like it has
an opposite effect, multiplying load on MySQL)

Another thing that is extremely annoying: if instance stuck in ERROR
state because of deadlock during its termination - it is impossible to
terminate instance anymore in Horizon, only via nova-api with
reset-state. How can this be handled?

I'd really appreciate any help/advises/thoughts regarding these problems.


Best regards,
Motovilovets Sergey
Software Operation Engineer


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack




___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] [openstack-dev] booting VM with customized kernel and rootfs image

2014-05-16 Thread sonia verma
 Hi


I'm getting the following repeated nova-compute logs when trying to boot the VM:


05-16 05:34:19.503 26935 DEBUG nova.openstack.common.periodic_task [-]
Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks
/opt/stack/nova/nova/openstack/common/periodic_task.py:176^M 2014-05-16
05:34:19.504 26935 DEBUG nova.openstack.common.periodic_task [-] Running
periodic task ComputeManager._instance_usage_audit run_periodic_tasks
/opt/stack/nova/nova/openstack/common/periodic_task.py:176^M 2014-05-16
05:34:19.504 26935 DEBUG nova.openstack.common.periodic_task [-] Running
periodic task ComputeManager.update_available_resource run_periodic_tasks
/opt/stack/nova/nova/openstack/common/periodic_task.py:176^M 2014-05-16
05:34:19.505 26935 DEBUG nova.openstack.common.lockutils [-] Got semaphore
"compute_resources" lock
/opt/stack/nova/nova/openstack/common/lockutils.py:166^M 2014-05-16
05:34:19.505 26935 DEBUG nova.openstack.common.lockutils [-] Got semaphore
/ lock "update_available_resource" inner
/opt/stack/nova/nova/openstack/common/lockutils.py:245^M 2014-05-16
05:34:19.505 26935 AUDIT nova.compute.resource_tracker [-] Auditing locally
available compute resources^M 2014-05-16 05:34:19.506 26935 DEBUG
nova.virt.libvirt.driver [-] Updating host stats update_status
/opt/stack/nova/nova/virt/libvirt/driver.py:4865^M 2014-05-16 05:34:19.566
26935 DEBUG nova.openstack.common.processutils [-] Running cmd
(subprocess): env LC_ALL=C LANG=C qemu-img info
/opt/stack/data/nova/instances/961b0fcd-60e3-488f-93df-5b852d93ede2/disk
execute /opt/stack/nova/nova/openstack/common/processutils.py:147^M
2014-05-16 05:34:19.612 26935 DEBUG nova.openstack.common.processutils [-]
Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img info
/opt/stack/data/nova/instances/961b0fcd-60e3-488f-93df-5b852d93ede2/disk
execute /opt/stack/nova/nova/openstack/common/processutils.py:147^M
2014-05-16 05:34:19.703 26935 DEBUG nova.compute.resource_tracker [-]
Hypervisor: free ram (MB): 5565 _report_hypervisor_resource_view
/opt/stack/nova/nova/compute/resource_tracker.py:388^M 2014-05-16
05:34:19.705 26935 DEBUG nova.compute.resource_tracker [-] Hypervisor: free
disk (GB): 95 _report_hypervisor_resource_view
/opt/stack/nova/nova/compute/resource_tracker.py:389^M 2014-05-16
05:34:19.705 26935 DEBUG nova.compute.resource_tracker [-] Hypervisor: free
VCPUs: 24 _report_hypervisor_resource_view
/opt/stack/nova/nova/compute/resource_tracker.py:394^M 2014-05-16
05:34:19.706 26935 DEBUG nova.compute.resource_tracker [-] Hypervisor:
assignable PCI devices: [] _report_hypervisor_resource_view
/opt/stack/nova/nova/compute/resource_tracker.py:401^M 2014-05-16
05:34:19.708 26935 DEBUG nova.openstack.common.rpc.amqp [-] Making
synchronous call on conductor ... multicall
/opt/stack/nova/nova/openstack/common/rpc/amqp.py:553^M 2014-05-16
05:34:19.709 26935 DEBUG nova.openstack.common.rpc.amqp [-] MSG_ID is
7435553a261b4f3eb61f985017441333 multicall
/opt/stack/nova/nova/openstack/common/rpc/amqp.py:556^M 2014-05-16
05:34:19.709 26935 DEBUG nova.openstack.common.rpc.amqp [-] UNIQUE_ID is
f2dd9f9fc517406bbe82366085de5523. _add_unique_id
/opt/stack/nova/nova/openstack/common/rpc/amqp.py:341^M 2014-05-16
05:34:19.716 26935 DEBUG nova.openstack.common.rpc.amqp [-] Making
synchronous call on conductor ... multicall
/opt/stack/nova/nova/openstack/common/rpc/amqp.py:553^M 2014-05-16
05:34:19.717 26935 DEBUG nova.openstack.common.rpc.amqp [-] MSG_ID is
965b77a6b9da47c884bd22a2d47de23c multicall
/opt/stack/nova/nova/openstack/com

Please help regarding this.

Thanks
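(For reference, the glance upload flow suggested in the quoted mail below might 
look roughly like this -- image names, file names and UUIDs are placeholders:

 $ glance image-create --name my-kernel --disk-format aki --container-format aki --file vmlinuz
 $ glance image-create --name my-ramdisk --disk-format ari --container-format ari --file initrd.img
 $ glance image-create --name my-rootfs --disk-format ami --container-format ami \
     --property kernel_id=<kernel-image-uuid> --property ramdisk_id=<ramdisk-image-uuid> --file rootfs.img
 $ nova boot --image my-rootfs --flavor m1.small my-custom-vm
)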



On Tue, May 13, 2014 at 5:51 PM, Parthipan, Loganathan wrote:

>  You can upload your custom kernel/rootdisk pair to glance and use the
> rootdisk uuid to boot an instance.
>
>
>
> http://docs.openstack.org/user-guide/content/cli_manage_images.html
>
>
>
>
>
> *From:* sonia verma [mailto:soniaverma9...@gmail.com]
> *Sent:* 13 May 2014 06:33
> *To:* OpenStack Development Mailing List (not for usage questions);
> openstack@lists.openstack.org
> *Subject:* [openstack-dev] booting VM with customized kernel and rootfs
> image
>
>
>
> Hi all
>
> I have installed openstack using devstack.I'm able able to boot VM from
> the opebstack dashboard onto the compute node.
>
> Now i need to boot VM from the openstack dashboard(controller node) onto
> compute node using customized kernel imae and rootfs.
>
> Therefore my question is whether can we boot VM from controller node onto
> compute node using the customized kernel and rootfs image.
>
> Please help regarding this.
>
>
>  Thanks
>
> Sonia
>
> ___
> Mailing list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe :
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
>
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Re: [Openstack] pacemaker would be wrong when both node have same hostname

2014-05-16 Thread walterxj







Hi Marica,

    I have tested your RA script for 2 days. After many attempts it finally 
works well :)
    I changed several settings (resource-stickiness, the RA script, the neutron 
l3-agent settings, etc.) in my environment, so I can't tell which change was 
the key one.
    I have attached my RA script here in the hope that it helps anybody else 
with the same problem, and I have commented all of my changes.
    In my RA script I made one change to your "restore old hostname" section. 
The original section is:

        hostname Network01

    I changed it to:

        hostname $(cat /etc/sysconfig/network | grep HOSTNAME | awk -F "=" '{print $2}')

    so both nodes can use one RA script.
    Aleita's method is good because, whichever node the l3-agent starts on, the 
id of the l3-agent hosting the router stays the same, since the node's hostname 
is the same -- in this example script it's network-controller. We can check it 
with neutron l3-agent-list-hosting-router $ext-router-id, which will always 
look like:

    +-------------+--------------------+----------------+-------+
    | id          | host               | admin_state_up | alive |
    +-------------+--------------------+----------------+-------+
    | -xx-x-x-    | network-controller | True           | :-)   |
    +-------------+--------------------+----------------+-------+

    So when one node goes down, the other node's l3-agent can take over the 
same l3-agent id as its own.
    My thought before your mail was to remove the down l3-agent from the 
hosting router and add the backup l3-agent to it, something like:

    #=============================================================
    down_l3_agent_ID=$(/usr/bin/neutron agent-list | grep 'L3 agent' | awk '$7!="'`hostname`'" {print $2}')
    back_l3_agent_ID=$(/usr/bin/neutron agent-list | grep 'L3 agent' | awk '$7=="'`hostname`'" {print $2}')
    for r in $(/usr/bin/neutron router-list-on-l3-agent $down_l3_agent_ID | awk 'NR>3 && NF>1{print $2}'); do
        /usr/bin/neutron l3-agent-router-remove $down_l3_agent_ID $r && \
        /usr/bin/neutron l3-agent-router-add $back_l3_agent_ID $r
    done
    #=============================================================

    I think that would work as well, but your method is better I think :) So 
thank you very much!
    btw: I have changed OCF_RESKEY_agent_config_default to 
OCF_RESKEY_plugin_config_default, otherwise we can't configure pacemaker the 
way the high-availability guide describes:

    primitive p_neutron-l3-agent ocf:openstack:neutron-agent-l3 \
        params config="/etc/neutron/neutron.conf" \
        plugin_config="/etc/neutron/l3_agent.ini" \
        op monitor interval="30s" timeout="30s"

    These changes are based on: 
https://bugs.launchpad.net/openstack-manuals/+bug/1252131
 




Walter Xu
From: walterxj
Date: 2014-05-15 09:51
To: Marica Antonacci
Subject: Re: Re: [Openstack] pacemaker would be wrong when both node have same hostname

Hi Marica,
   When I use "crm node standby" it seems to work, but I think that way the 
virtual router still resides on the former node, because when I power off this 
node, the VM instance can not access the external net.
   I'll test again carefully; after testing I'll report back to you.
   Thank you again for your help.

walterxj

From: Marica Antonacci
Date: 2014-05-14 22:21
To: xu Walter
Subject: Re: [Openstack] pacemaker would be wrong when both node have same hostname

Hi Walter,
we are using it in our production havana environment. We have tested it both 
using "crm node standby" and "crm resource migrate g_network", and by turning 
off the node network interfaces, shutting down the node, etc.
Have you modified our script using the correct hostnames for the two different 
nodes?

Cheers,
Marica

On 14/May/2014, at 16:08, xu Walter wrote:

Hi Marica,
   Thanks for your script, but it seems not to work for me. I want to know how 
you tested it. Did you just use "crm node standby" or shut down the node 
physically?


2014-05-14 19:49 GMT+08:00 Marica Antonacci :

Hi,
attached you can find our modified resource agent… we have noticed that the 
network namespaces (router and dhcp) are automatically re-created on the new 
node when the resource manager migrates the network controller to the other 
physical node (we have grouped all the services related to the network node).

Please note that the attached script also contains other patches with respect 
to the RA available at 
https://raw.githubusercontent.com/madkiss/openstack-resource-agents/master/ocf/neutron-agent-l3
 because we found some issues with the resource agent parameters and the port 
used to check the established connection with the server; moreover, we have 
added the start/stop operations for the neutron-plugin-openvswitch-agent since 
there is no RA available for this service at the moment.

Cheers, Marica

[Openstack] nova compute repeating logs

2014-05-16 Thread sonia verma
Hi

I'm trying to boot a VM from my controller node (openstack dashboard) onto a
compute node but it is getting stuck in the spawning state.
I'm able to see the VM interface on the compute node but the status is still
spawning even after 10-15 minutes.

Below are the nova schedular logs..

ova/openstack/common/loopingcall.py:130^M 2014-05-16 16:02:16.581 13421
DEBUG nova.openstack.common.periodic_task [-] Running periodic task
SchedulerManager._expire_reservations run_periodic_tasks
/opt/stack/nova/nova/openstack/common/periodic_task.py:178^M 2014-05-16
16:02:16.588 13421 DEBUG nova.openstack.common.periodic_task [-] Running
periodic task SchedulerManager._run_periodic_tasks run_periodic_tasks
/opt/stack/nova/nova/openstack/common/periodic_task.py:178^M 2014-05-16
16:02:16.589 13421 DEBUG nova.openstack.common.loopingcall [-] Dynamic
looping call sleeping for 60.00 seconds _inner
/opt/stack/nova/nova/openstack/common/loopingcall.py:130^M 2014-05-16
16:03:16.593 13421 DEBUG nova.openstack.common.periodic_task [-] Running
periodic task SchedulerManager._expire_reservations run_periodic_tasks
/opt/stack/nova/nova/openstack/common/periodic_task.py:178^M 2014-05-16
16:03:16.600 13421 DEBUG nova.openstack.common.periodic_task [-] Running
periodic task SchedulerManager._run_periodic_tasks run_periodic_tasks
/opt/stack/nova/nova/openstack/common/periodic_task.py:178^M 2014-05-16
16:03:16.601 13421 DEBUG nova.openstack.common.loopingcall [-] Dynamic
looping call sleeping for 60.00 seconds _inner
/opt/stack/nova/nova/openstack/common/loopingcall.py:130^M

Also, my nova-compute logs on the compute node are repeating
continuously. Below are the logs.

-05-16 05:34:19.503 26935 DEBUG nova.openstack.common.periodic_task [-]
Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks
/opt/stack/nova/nova/openstack/common/periodic_task.py:176^M 2014-05-16
05:34:19.504 26935 DEBUG nova.openstack.common.periodic_task [-] Running
periodic task ComputeManager._instance_usage_audit run_periodic_tasks
/opt/stack/nova/nova/openstack/common/periodic_task.py:176^M 2014-05-16
05:34:19.504 26935 DEBUG nova.openstack.common.periodic_task [-] Running
periodic task ComputeManager.update_available_resource run_periodic_tasks
/opt/stack/nova/nova/openstack/common/periodic_task.py:176^M 2014-05-16
05:34:19.505 26935 DEBUG nova.openstack.common.lockutils [-] Got semaphore
"compute_resources" lock
/opt/stack/nova/nova/openstack/common/lockutils.py:166^M 2014-05-16
05:34:19.505 26935 DEBUG nova.openstack.common.lockutils [-] Got semaphore
/ lock "update_available_resource" inner
/opt/stack/nova/nova/openstack/common/lockutils.py:245^M 2014-05-16
05:34:19.505 26935 AUDIT nova.compute.resource_tracker [-] Auditing locally
available compute resources^M 2014-05-16 05:34:19.506 26935 DEBUG
nova.virt.libvirt.driver [-] Updating host stats update_status
/opt/stack/nova/nova/virt/libvirt/driver.py:4865^M 2014-05-16 05:34:19.566
26935 DEBUG nova.openstack.common.processutils [-] Running cmd
(subprocess): env LC_ALL=C LANG=C qemu-img info
/opt/stack/data/nova/instances/961b0fcd-60e3-488f-93df-5b852d93ede2/disk
execute /opt/stack/nova/nova/openstack/common/processutils.py:147^M
2014-05-16 05:34:19.612 26935 DEBUG nova.openstack.common.processutils [-]
Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img info
/opt/stack/data/nova/instances/961b0fcd-60e3-488f-93df-5b852d93ede2/disk
execute /opt/stack/nova/nova/openstack/common/processutils.py:147^M
2014-05-16 05:34:19.703 26935 DEBUG nova.compute.resource_tracker [-]
Hypervisor: free ram (MB): 5565 _report_hypervisor_resource_view
/opt/stack/nova/nova/compute/resource_tracker.py:388^M 2014-05-16
05:34:19.705 26935 DEBUG nova.compute.resource_tracker [-] Hypervisor: free
disk (GB): 95 _report_hypervisor_resource_view
/opt/stack/nova/nova/compute/resource_tracker.py:389^M 2014-05-16
05:34:19.705 26935 DEBUG nova.compute.resource_tracker [-] Hypervisor: free
VCPUs: 24 _report_hypervisor_resource_view
/opt/stack/nova/nova/compute/resource_tracker.py:394^M 2014-05-16
05:34:19.706 26935 DEBUG nova.compute.resource_tracker [-] Hypervisor:
assignable PCI devices: [] _report_hypervisor_resource_view
/opt/stack/nova/nova/compute/resource_tracker.py:401^M 2014-05-16
05:34:19.708 26935 DEBUG nova.openstack.common.rpc.amqp [-] Making
synchronous call on conductor ... multicall
/opt/stack/nova/nova/openstack/common/rpc/amqp.py:553^M 2014-05-16
05:34:19.709 26935 DEBUG nova.openstack.common.rpc.amqp [-] MSG_ID is
7435553a261b4f3eb61f985017441333 multicall
/opt/stack/nova/nova/openstack/common/rpc/amqp.py:556^M 2014-05-16
05:34:19.709 26935 DEBUG nova.openstack.common.rpc.amqp [-] UNIQUE_ID is
f2dd9f9fc517406bbe82366085de5523. _add_unique_id
/opt/stack/nova/nova/openstack/common/rpc/amqp.py:341^M 2014-05-16
05:34:19.716 26935 DEBUG nova.openstack.common.rpc.amqp [-] Making
synchronous call on conductor ... multicall
/opt/stack/nova/nova/openstack/common/rpc/amqp.py:553^M 2014-05-16
05:34:19.717 26935 DEBUG nova.openstack.common.

[Openstack] Fwd: About Standalone Openstack ISO Havana/Icehouse Installer

2014-05-16 Thread Mayur Patil
Hi All,

  I want to know whether there is any standalone ISO installer of Havana/Icehouse.

  There are procedures given for installing OpenStack with the Chef/Puppet
DevOps tools provided by Rackspace. Mirantis has also provided an ISO, but I am
unable to see the GUI of the same, and I have a limited bandwidth plan.

  So for me such an ISO would be a boon: if anything weird happens I would be
able to reinstall quickly. I hope there is one...

  Seeking guidance,

  Thanks!!
-- 

Cheers,
Mayur S. Patil,
Pune.
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack