Re: [Openstack-operators] Openstack and mysql galera with haproxy

2014-09-23 Thread Emrah Aslan
Hi all, 

This is a common problem with SQL injections. Don't forget to clear the cache
after select *.* injections; even after clearing the cache, I'm not quite sure
it works properly after the process. In particular, select *.* injections wait
too long. There is still a gap for those who don't have high-end servers such
as the Cisco C240 M4 series. I am confident in the new-generation C240 M4, and
we are testing the Cisco Mini series B200-VP. It looks great for app/web
developers.

I am interested in creating a group for those interested in "vGPU"
(application virtualization), such as AutoCAD/Autodesk and rendering programs.
Please contact me if you are interested in vGPU.

Kind Regards

Emrah ASLAN
Cisco/Citrix System Engineer

-----Original Message-----
From: Sławek Kapłoński [mailto:sla...@kaplonski.pl] 
Sent: Monday, September 22, 2014 11:02 PM
To: openstack-operators@lists.openstack.org
Subject: Re: [Openstack-operators] Openstack and mysql galera with haproxy

Hello,

Answers below

---
Best regards
Sławek Kapłoński
sla...@kaplonski.pl

Dnia poniedziałek, 22 września 2014 13:41:51 Jay Pipes pisze:
> Hi Peter, Sławek, answers inline...
> 
> On 09/22/2014 08:12 AM, Peter Boros wrote:
> > Hi,
> > 
> > StaleDataError is not raised by MySQL, but rather by SQLAlchemy. After a 
> > quick look, it seems like SQLAlchemy raises this if the UPDATE affected 
> > a different number of rows than it expected. I am not sure what the 
> > expectation is based on; perhaps somebody can chime in and we can put 
> > this together. What is the transaction isolation level you are running on?
> 
> The transaction isolation level is REPEATABLE_READ, unless Sławek has 
> changed the defaults (unlikely).
I definitely didn't change it.
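
For anyone wanting to see where that exception comes from, here is a minimal,
self-contained sketch (a toy schema, not Neutron's models) that reproduces
SQLAlchemy's StaleDataError: the ORM counts the rows its flush-time UPDATE
matched, and raises when the count differs from what the unit of work
expected, for example because another writer changed the row first.

# Standalone sketch, not Neutron code: change a versioned row behind the
# ORM's back so the flush-time UPDATE matches 0 rows instead of the expected 1.
# Imports are SQLAlchemy 0.8/0.9-era style, matching this thread's vintage.
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import Session
from sqlalchemy.orm.exc import StaleDataError

Base = declarative_base()

class Port(Base):                      # toy table, only named like Neutron's
    __tablename__ = 'ports'
    id = Column(Integer, primary_key=True)
    status = Column(String(16))
    version = Column(Integer, nullable=False)
    __mapper_args__ = {'version_id_col': version}

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = Session(bind=engine)
session.add(Port(id=1, status='DOWN'))
session.commit()

port = session.query(Port).get(1)
# Simulate a concurrent writer (another API worker, or a transaction applied
# on a different Galera node) bumping the version underneath us.
session.execute(Port.__table__.update().values(version=Port.version + 1))

port.status = 'ACTIVE'
try:
    session.commit()   # UPDATE ... WHERE id = 1 AND version = 1 -> 0 rows
except StaleDataError as exc:
    session.rollback()
    print('caught: %s' % exc)

Whether the concurrent writer is another neutron worker or another Galera
node, the symptom on the ORM side looks the same.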
> 
> > For the timeout setting in neutron: that's a good way to approach it 
> > too; you can even be more aggressive and set it to a few seconds. In 
> > MySQL, making connections is very cheap (at least compared to other 
> > databases), so an idle timeout of a few seconds for a connection is 
> > typical.
> > 
> > On Mon, Sep 22, 2014 at 12:35 PM, Sławek Kapłoński wrote:
> >> Hello,
> >> 
> >> Thanks for your explanations. I thought so, and I have now decreased 
> >> "idle_connection_timeout" in neutron and nova. Now, when the master 
> >> server comes back to the cluster, all connections are made to this 
> >> master node again in less than a minute, because the old connections 
> >> that were made to the backup node are closed. So for now it looks 
> >> almost perfect, but when I test the cluster now (with the master node 
> >> active and all connections established to this node), in neutron I 
> >> still sometimes see errors like:
> >> StaleDataError: UPDATE statement on table 'ports' expected to 
> >> update 1 row(s); 0 were matched.
> >> 
> >> and also today I found errors like:
> >> 2014-09-22 11:38:05.715 11474 INFO sqlalchemy.engine.base.Engine [-] ROLLBACK
> >> 2014-09-22 11:38:05.784 11474 ERROR neutron.openstack.common.db.sqlalchemy.session [-] DB exception wrapped.
> >> 2014-09-22 11:38:05.784 11474 TRACE neutron.openstack.common.db.sqlalchemy.session Traceback (most recent call last):
> >> 2014-09-22 11:38:05.784 11474 TRACE neutron.openstack.common.db.sqlalchemy.session   File "/usr/lib/python2.7/dist-packages/neutron/openstack/common/db/sqlalchemy/session.py", line 524, in _wrap
> >> 2014-09-22 11:38:05.784 11474 TRACE neutron.openstack.common.db.sqlalchemy.session     return f(*args, **kwargs)
> >> 2014-09-22 11:38:05.784 11474 TRACE
> 
>  From looking up the code, it looks like you are using Havana [1]. The 
> code in the master branch of Neutron now uses oslo.db, not 
> neutron.openstack.common.db, so this issue may have been resolved in 
> later versions of Neutron.
Yes, I'm using Havana, and I have no way to upgrade to Icehouse quickly 
(I don't even want to think about the master branch :)). Are you telling me 
that this problem will persist in Havana and can't be fixed in that release?
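
For reference, the idle timeout Peter suggests shrinking above is an oslo db
option in each service's configuration; below is a minimal sketch, assuming
Havana-era option names and a placeholder VIP address (the option has moved
and been renamed across releases, so check your own sample config):

# neutron.conf (similar for nova) -- sketch only, Havana-era option names
[database]
# point the service at the haproxy VIP in front of the Galera nodes
connection = mysql://neutron:SECRET@10.0.0.10/neutron
# recycle idle SQLAlchemy connections after this many seconds, so that after
# a failover/failback they are re-opened against the currently active backend
idle_timeout = 30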
> 
> [1]
> https://github.com/openstack/neutron/blob/stable/havana/neutron/openstack/common/db/sqlalchemy/session.py#L524
> >> neutron.openstack.common.db.sqlalchemy.session   File "/usr/lib/python2.7/dist-packages/neutron/openstack/common/db/sqlalchemy/session.py", line 718, in flush
> >> 2014-09-22 11:38:05.784 11474 TRACE neutron.openstack.common.db.sqlalchemy.session     return super(Session, self).flush(*args, **kwargs)
> >> 2014-09-22 11:38:05.784 11474 TRACE neutron.openstack.common.db.sqlalchemy.session   File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 1818, in flush
> >> 2014-09-22 11:38:05.784 11474 TRACE neutron.openstack.common.db.sqlalchemy.session     self._flush(objects)
> >> 2014-09-22 11:38:05.784 11474 TRACE neutron.openstack.common.db.sqlalchemy.session   File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 1936, in _flush
> >> 
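
For completeness, the active/backup haproxy pattern discussed in this thread
(all OpenStack services write to one Galera node, the others only take over
on failure) usually looks roughly like the sketch below; the addresses, the
9200 health-check port and the clustercheck wrapper are assumptions, not
details taken from Sławek's setup:

# /etc/haproxy/haproxy.cfg -- sketch of an active/backup Galera frontend
listen galera
    bind 10.0.0.10:3306
    mode tcp
    option tcpka
    # health check via an HTTP wrapper (e.g. percona clustercheck on 9200)
    option httpchk
    timeout client 300s
    timeout server 300s
    server galera1 10.0.0.11:3306 check port 9200
    server galera2 10.0.0.12:3306 check port 9200 backup
    server galera3 10.0.0.13:3306 check port 9200 backup

With only one node taking writes, the certification conflicts possible in
multi-writer setups are largely avoided; the price is that every failback
moves all connections again, which is where a short idle timeout helps.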

Re: [Openstack-operators] Openstack and mysql galera with haproxy

2014-09-23 Thread Sławek Kapłoński
Hello,


Dnia poniedziałek, 22 września 2014 22:02:26 Sławek Kapłoński pisze:
> Hello,
> 
> Answers below
> 
> ---
> Best regards
> Sławek Kapłoński
> sla...@kaplonski.pl
> 
> Dnia poniedziałek, 22 września 2014 13:41:51 Jay Pipes pisze:
> > Hi Peter, Sławek, answers inline...
> > 
> > On 09/22/2014 08:12 AM, Peter Boros wrote:
> > > Hi,
> > > 
> > > StaleDataError is not raised by MySQL, but rather by SQLAlchemy. After a
> > > quick look, it seems like SQLAlchemy raises this if the UPDATE affected
> > > a different number of rows than it expected. I am not sure what the
> > > expectation is based on; perhaps somebody can chime in and we can put
> > > this together. What is the transaction isolation level you are running
> > > on?
> > 
> > The transaction isolation level is REPEATABLE_READ, unless Sławek has
> > changed the defaults (unlikely).
> 
> I definitely didn't change it.
> 
> > > For the timeout setting in neutron: that's a good way to approach it
> > > too; you can even be more aggressive and set it to a few seconds. In
> > > MySQL, making connections is very cheap (at least compared to other
> > > databases), so an idle timeout of a few seconds for a connection is
> > > typical.
> > > 
> > > On Mon, Sep 22, 2014 at 12:35 PM, Sławek Kapłoński wrote:
> > >> Hello,
> > >> 
> > >> Thanks for your explanations. I thought so, and I have now decreased
> > >> "idle_connection_timeout" in neutron and nova. Now, when the master
> > >> server comes back to the cluster, all connections are made to this
> > >> master node again in less than a minute, because the old connections
> > >> that were made to the backup node are closed. So for now it looks
> > >> almost perfect, but when I test the cluster now (with the master node
> > >> active and all connections established to this node), in neutron I
> > >> still sometimes see errors like:
> > >> StaleDataError: UPDATE statement on table 'ports' expected to update 1
> > >> row(s); 0 were matched.
> > >> 
> > >> and also today I found errors like:
> > >> 2014-09-22 11:38:05.715 11474 INFO sqlalchemy.engine.base.Engine [-] ROLLBACK
> > >> 2014-09-22 11:38:05.784 11474 ERROR neutron.openstack.common.db.sqlalchemy.session [-] DB exception wrapped.
> > >> 2014-09-22 11:38:05.784 11474 TRACE neutron.openstack.common.db.sqlalchemy.session Traceback (most recent call last):
> > >> 2014-09-22 11:38:05.784 11474 TRACE neutron.openstack.common.db.sqlalchemy.session   File "/usr/lib/python2.7/dist-packages/neutron/openstack/common/db/sqlalchemy/session.py", line 524, in _wrap
> > >> 2014-09-22 11:38:05.784 11474 TRACE neutron.openstack.common.db.sqlalchemy.session     return f(*args, **kwargs)
> > >> 2014-09-22 11:38:05.784 11474 TRACE
> >  
> >  From looking up the code, it looks like you are using Havana [1]. The
> > code in the master branch of Neutron now uses oslo.db, not
> > neutron.openstack.common.db, so this issue may have been resolved in
> > later versions of Neutron.
> 
> Yes, I'm using Havana, and I have no way to upgrade to Icehouse quickly
> (I don't even want to think about the master branch :)). Are you telling me
> that this problem will persist in Havana and can't be fixed in that release?
> 
> > [1]
> > https://github.com/openstack/neutron/blob/stable/havana/neutron/openstack/common/db/sqlalchemy/session.py#L524
> > 
> > >> neutron.openstack.common.db.sqlalchemy.session   File "/usr/lib/python2.7/dist-packages/neutron/openstack/common/db/sqlalchemy/session.py", line 718, in flush
> > >> 2014-09-22 11:38:05.784 11474 TRACE neutron.openstack.common.db.sqlalchemy.session     return super(Session, self).flush(*args, **kwargs)
> > >> 2014-09-22 11:38:05.784 11474 TRACE neutron.openstack.common.db.sqlalchemy.session   File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 1818, in flush
> > >> 2014-09-22 11:38:05.784 11474 TRACE neutron.openstack.common.db.sqlalchemy.session     self._flush(objects)
> > >> 2014-09-22 11:38:05.784 11474 TRACE neutron.openstack.common.db.sqlalchemy.session   File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 1936, in _flush
> > >> 2014-09-22 11:38:05.784 11474 TRACE neutron.openstack.common.db.sqlalchemy.session     transaction.rollback(_capture_exception=True)
> > >> 2014-09-22 11:38:05.784 11474 TRACE neutron.openstack.common.db.sqlalchemy.session   File "/usr/lib/python2.7/dist-packages/sqlalchemy/util/langhelpers.py", line 58, in __exit__
> > >> 2014-09-22 11:38:05.784 11474 TRACE neutron.openstack.common.db.sqlalchemy.session     compat.reraise(exc_type, exc_value, exc_tb)
> > >> 2014-09-22 11:38:05.784 11474 TRACE neutron.openstack.common.db.sqlalchemy.session   File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line
> 

[Openstack-operators] Default security group for all tenant

2014-09-23 Thread Sławek Kapłoński
Hello,

Is it possible to add a "default" security group with predefined rules to all 
instances in all tenants? I'm thinking of a group whose rules users can't 
change and only the admin can, for example to block certain connections for 
all users.

---
Best regards
Sławek Kapłoński
sla...@kaplonski.pl



[Openstack-operators] AUDIT VCPUS: -30 ?

2014-09-23 Thread Stephen Cousins
I have a Grizzly system and I'm trying to figure out why VMs can't be
migrated (using "nova live-migration $UUID $NODE") from one node to
another. The error message on the node that it is being migrated from is:

 ERROR nova.virt.libvirt.driver [-] [instance:
0522b23c-5c2d-4c45-a66b-24c4c3f4ba9c] Live Migration failure: internal
error process exited while connecting to monitor: W: kvm binary is
deprecated, please use qemu-system-x86_64 instead

There is no message on the node that it is supposed to be migrating to.

It was working fine for a while and then it started failing. While
investigating, I see AUDIT messages in nova-compute.log:

2014-09-23 18:00:45.274 4608 AUDIT nova.compute.resource_tracker [-] Free ram (MB): 80797
2014-09-23 18:00:45.274 4608 AUDIT nova.compute.resource_tracker [-] Free disk (GB): 31369
2014-09-23 18:00:45.274 4608 AUDIT nova.compute.resource_tracker [-] Free VCPUS: -30

The system has 16 cores and has "cpu_allocation_ratio=8.0" in nova.conf, so
it should have a capacity of 128 VCPUs (right?). Checking with nova-manage:

root@compute3:~# nova-manage service describe_resources test3
HOST  PROJECT cpu mem(mb) hdd
test3   (total)16  128925   32330
test3   (used_now) 46   48128 960
test3   (used_max) 46   47104 960
.
.
.

It looks like it is calculating Free VCPUS by subtracting "used_now" from
"total": 16 - 46 = -30. Is it somehow using this to decide that the node
should not take more VMs? If so, I don't know why it allowed it to get as
low as -30.
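
That reading matches the numbers. A quick sketch of the arithmetic follows;
the scheduler-side part assumes the default CoreFilter behaviour, where
cpu_allocation_ratio is applied to the physical total rather than to this
AUDIT line:

# Sketch: reproduce the numbers above from the nova-manage output.
physical_vcpus = 16            # cores reported by the hypervisor
used_vcpus = 46                # "used_now" from nova-manage
cpu_allocation_ratio = 8.0     # from nova.conf

audit_free = physical_vcpus - used_vcpus               # what the AUDIT line prints
scheduler_limit = physical_vcpus * cpu_allocation_ratio
scheduler_free = scheduler_limit - used_vcpus          # what CoreFilter compares against

print("AUDIT Free VCPUS: %d" % audit_free)             # -30
print("Scheduler limit: %.0f" % scheduler_limit)       # 128
print("Free for scheduling: %.0f" % scheduler_free)    # 82

So, assuming CoreFilter is in use, the negative AUDIT value by itself should
not mean the scheduler sees the node as full; the AUDIT line simply reports
physical cores minus allocated vCPUs without the ratio applied.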

Can anyone explain what is going on? Is there other information I can look
at to diagnose why the live-migration is failing?

Thanks a lot,

Steve

-- 

 Steve Cousins Supercomputer Engineer/Administrator
 Advanced Computing GroupUniversity of Maine System
 244 Neville Hall (UMS Data Center)  (207) 561-3574
 Orono ME 04469  steve.cousins at maine.edu


[Openstack-operators] Renaming a compute node

2014-09-23 Thread Mathieu Gagné

Hi guys,

Let's say I wish to rename a compute node. How should I proceed?

Does anyone have a script lying around for that purpose? =)

BTW, I found a bunch of values in the database, but I'm confused: some 
refer to the hostname, others to the FQDN. I never figured out what the 
best practice is: should everything refer to the FQDN or to the hostname?
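
For reference, here is a read-only sketch of the usual places a compute
host's name shows up in the nova database; the column names are from the
Grizzly/Havana-era schema and may differ in your release, and 'old-name' is a
placeholder, so verify against your own schema before updating anything:

-- Sketch only: where does 'old-name' appear? (read-only queries)
SELECT id, host, topic FROM services WHERE host LIKE 'old-name%';
SELECT id, hypervisor_hostname FROM compute_nodes WHERE hypervisor_hostname LIKE 'old-name%';
SELECT uuid, host, node FROM instances WHERE host LIKE 'old-name%' AND deleted = 0;

Whether those columns hold the short hostname or the FQDN depends on what
each service registered itself as, which is exactly the inconsistency
described above.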


--
Mathieu
