Re: [Openstack-operators] need input on log translations

2017-03-11 Thread Sławek Kapłoński
Hello,

I'm a Polish speaker and I have never even thought about translated logs. IMHO
English is such a common language in IT that for most of us using
English instead of our native language is perfectly fine.

-- 
Best regards / Pozdrawiam
Sławek Kapłoński
sla...@kaplonski.pl

On sob, 11 mar 2017, George Shuklin wrote:

> The whole idea of log translation is half-baked anyway. About half of the
> important log messages contain output from things outside OpenStack: libvirt,
> ip, sudo, the kernel, etc. Any i18n installation is going to have some
> amount of untranslated messages. This kills the whole idea of localization.
> 
> A modern operator ought to know English at a 'technical reading' level anyway.
> Therefore, localization does not achieve its goal but causes pain instead:
> search segmentation, slightly misleading translations (e.g. 'stream' and
> 'thread' both translate into the Russian 'поток', which brings ambiguity),
> and different systems may use slightly different translations, causing even
> more mess.
> 
> As a Russian speaker and OpenStack operator, I definitely don't want log
> translations.
> 
> On Mar 10, 2017 4:42 PM, "Doug Hellmann" <d...@doughellmann.com> wrote:
> 
> There is a discussion on the -dev mailing list about the i18n team
> decision to stop translating log messages [1]. The policy change means
> that we may be able to clean up quite a lot of "clutter" throughout the
> service code, because without anyone actually translating the messages
> there is no need for the markup code used to tag those strings.
> 
> If we do remove the markup from log messages, we will be effectively
> removing "multilingual logs" as a feature. Given the amount of work
> and code churn involved in the first roll out, I would not expect
> us to restore that feature later.
> 
> Therefore, before we take what would almost certainly be an
> irreversible action, we would like some input about whether log
> message translations are useful to anyone. Please let us know if
> you or your customers use them.
> 
> Thanks,
> Doug
> 
> [1] http://lists.openstack.org/pipermail/openstack-dev/2017-
> March/113365.html
> 
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators



[Openstack-operators] Live snapshots issue

2016-08-12 Thread Sławek Kapłoński
Hello,

I'm using Ubuntu 14.04 with qemu 2.3 and libvirt 1.3.1. Recently I also
upgraded the kernel to version 4.4 (from kernel.ubuntu.com). After this upgrade I
have an issue with live snapshots in libvirt.
When nova makes a live snapshot, libvirt should add a new rule to the AppArmor
profile for the instance and then start the qemu "drive-mirror" operation. The
problem is that the rule added to AppArmor is not "visible" to the kernel.
When I added a similar rule manually I had to hard-reboot the instance to apply
the new rule (even after running apparmor_parser -r …). The same happens when
libvirt adds the new rule to AppArmor.
Because of this issue qemu doesn't have permission to open the "delta" file and
there is an error.
The same issue does not happen on kernel 3.13: a rule added by libvirt is
visible to AppArmor and the snapshot is created properly.
Has anyone already run into this issue, and do you know a way to fix it?
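For reference, the kind of manual reload that did not take effect looks roughly like this (the profile path below is a hypothetical example; libvirt generates per-domain profiles named libvirt-<domain uuid> under /etc/apparmor.d/libvirt/):

```shell
# Sketch: reload a per-domain AppArmor profile and check what the kernel
# actually loaded. The UUID in the profile name is a made-up placeholder.
PROFILE=/etc/apparmor.d/libvirt/libvirt-11111111-2222-3333-4444-555555555555
if command -v apparmor_parser >/dev/null 2>&1 && [ -f "$PROFILE" ]; then
    # replace (reload) the profile in the kernel
    apparmor_parser -r "$PROFILE"
    # the profile should show up here once the kernel has it:
    grep libvirt /sys/kernel/security/apparmor/profiles || true
else
    echo "apparmor_parser or profile not present; shown for illustration"
fi
```

Comparing the profile list in /sys/kernel/security/apparmor/profiles before and after the reload is a quick way to confirm whether the kernel actually picked up the rule.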

Pozdrawiam
Sławek Kapłoński
sla...@kaplonski.pl



Re: [Openstack-operators] Updating flavor quotas (e.g. disk_iops) on existing instances.

2016-07-13 Thread Sławek Kapłoński
Hello,

Yes, you're right that doing it with virsh will work only until the next hard
reboot/stop/suspend/migrate of the instance, but I was only thinking about
changing the flavor extra specs and applying them to running VMs with virsh :)
Also, updating flavor extra specs can probably be done through the nova API
as well. Please check the "nova flavor-key" command.
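For example, setting IOPS quota extra specs on an existing flavor might look like the sketch below (the flavor name and values are placeholders, not from this thread; see `nova help flavor-key` and the InstanceResourceQuota wiki page for the exact keys supported by your release):

```shell
# Sketch: add disk IOPS quota extra specs to an existing flavor.
# "m1.medium" and the 500 IOPS values are placeholders.
if command -v nova >/dev/null 2>&1; then
    nova flavor-key m1.medium set \
        quota:disk_read_iops_sec=500 \
        quota:disk_write_iops_sec=500
else
    echo "nova CLI not available; commands shown for illustration"
fi
```

As discussed in the thread, this only affects instances booted (or hard-rebooted) afterwards; already-running VMs keep their old limits until the libvirt XML is regenerated.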

-- 
Best regards / Pozdrawiam
Sławek Kapłoński
sla...@kaplonski.pl

On Wed, 13 Jul 2016, Van Leeuwen, Robert wrote:

> >> Since the instance_extra flavor table is a big JSON blob it is a pain to 
> >> apply changes there.
> >> Anybody found an easy way to do this?
> > If You are using virsh, You can apply such limits manually for each
> > instance. Check blkiotune command in virsh.
> 
> Using virsh is only for the running instances until the next reboot.
> This is because OpenStack will re-create the .xml file when you reboot an 
> instance and any “local changes” made by virsh will be gone.
> 
> As I mentioned this info is not re-read from the flavor when the XML is 
> created but it is stored per-instance in the instance_extra.flavor table.
> Luckily you can just overwrite the instance_extra.flavor with one with a 
> quota applied to it and you do not need to parse and modify the JSON-BLOB. 
> (the JSON-BLOB does not seem to contain unique data for the instance)
> 
> For future reference, the sql query will look something like this:
> update instance_extra set flavor='BIG JSON BLOB HERE' where instance_uuid IN 
> ( SELECT uuid from instances where instances.instance_type_id='10' and 
> instances.deleted='0') ;
> 
> This will update all active instances with the flavor-id of 10 (note that 
> this flavor-id is an auto-increment id and not the flavor-id you use when 
> creating a flavor)
> You can get the “JSON BLOB” from an instance which was created with the new  
> extra_specs settings applied to it.
> This setting will only be applied when you (hard) reboot the instance.
> If you want to update instances without rebooting them, you will ALSO need to
> do the virsh blkiotune part.
> 
> I’d gladly hear to any suggestions/tools for an easier way to do it.
> 
> Cheers,
> Robert van Leeuwen
> 




Re: [Openstack-operators] Updating flavor quotas (e.g. disk_iops) on existing instances.

2016-07-12 Thread Sławek Kapłoński
Hello,

If You are using virsh, You can apply such limits manually for each
instance. Check blkiotune command in virsh.
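For example (the domain name and device are placeholders; on recent libvirt the per-device IOPS caps are set with `blkdeviotune`, while `blkiotune` handles the blkio cgroup weights; check `virsh help` on your version):

```shell
# Sketch: inspect / set disk I/O limits on a running domain with virsh.
# "instance-00000042" and "vda" are placeholders for your own domain/device.
if command -v virsh >/dev/null 2>&1; then
    # current blkio cgroup tuning (weights) for the domain
    virsh blkiotune instance-00000042 || true
    # per-device IOPS throttling, applied live to the running QEMU process
    virsh blkdeviotune instance-00000042 vda \
        --read-iops-sec 500 --write-iops-sec 500 || true
else
    echo "virsh not available; commands shown for illustration"
fi
```

Note that, as discussed later in the thread, anything set this way lives only in the running domain and is lost when OpenStack regenerates the libvirt XML on a hard reboot or migration.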

-- 
Best regards / Pozdrawiam
Sławek Kapłoński
sla...@kaplonski.pl

On Tue, 12 Jul 2016, Van Leeuwen, Robert wrote:

> Hi,
> 
> Is there an easy way to update the quotas for flavors and apply it to 
> existing instances?
> It looks like these settings are tracked in the “instance_extra” table and 
> not re-read from the flavor when (hard) re-booting the instances.
> 
> Since the instance_extra flavor table is a big JSON blob it is a pain to 
> apply changes there.
> Anybody found an easy way to do this?
> 
> Thx,
> Robert van Leeuwen



Re: [Openstack-operators] Bandwidth limitations

2016-07-03 Thread Sławek Kapłoński
Hello,

We are also using it, even on Juno. There is one problem with the outgoing
traffic limit configured this way if you are also using Open vSwitch.
The limit is set by libvirt using tc as policing on the ingress qdisc.
Open vSwitch uses the same policing on the ingress qdisc if you set
'ingress_policing_rate' on an interface. The problem is that when you spawn a
new VM, libvirt configures the bandwidth limit with tc, but then the neutron
OVS agent configures the port and sets the vlan_tag on the "tap" interface.
After that operation OVS clears the policing on the ingress qdisc, because in
ovsdb there is no info about an ingress_policing_rate that should be set.
Details can be found at
http://openvswitch.org/pipermail/discuss/2013-July/010584.html

We solved it with a libvirt qemu hook
(https://www.libvirt.org/hooks.html) which sets ingress_policing_rate
and ingress_policing_burst in OVS to the same values as in the
instance description in libvirt.
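A minimal version of such a hook might look like the sketch below. This is an illustration, not the actual hook from this deployment: the XPath expressions, the unit conversion, and the burst heuristic are all assumptions (libvirt expresses `average` in KiB/s while OVS ingress policing is in kbit/s), so verify against your libvirt and OVS versions.

```shell
#!/bin/sh
# Sketch of /etc/libvirt/hooks/qemu: when a VM starts, re-apply the libvirt
# bandwidth limit as OVS ingress policing so it survives the neutron OVS
# agent reconfiguring the port. libvirt invokes this as:
#   qemu <guest name> <operation> ...  with the domain XML on stdin.
GUEST="$1"
OP="$2"

if [ "$OP" = "started" ] && command -v ovs-vsctl >/dev/null 2>&1; then
    XML=$(cat)
    # pull the tap device name and the outbound average (KiB/s) from the XML
    TAP=$(echo "$XML" | xmllint --xpath 'string(//interface/target/@dev)' -)
    AVG=$(echo "$XML" | xmllint --xpath 'string(//interface/bandwidth/outbound/@average)' -)
    if [ -n "$TAP" ] && [ -n "$AVG" ]; then
        # libvirt KiB/s -> OVS kbit/s; burst set to ~10% of rate as a guess
        ovs-vsctl set interface "$TAP" \
            ingress_policing_rate=$((AVG * 8)) \
            ingress_policing_burst=$((AVG * 8 / 10))
    fi
fi
```

Because the values end up in ovsdb, ovs-vswitchd re-applies them itself whenever it touches the port, which is exactly what the bare tc rule from libvirt lacked.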

As for neutron QoS, for now it is only possible to set maximum bandwidth
limits for traffic outgoing from a VM. There is no possibility to set a limit
for incoming traffic.

-- 
Best regards / Pozdrawiam
Sławek Kapłoński
sla...@kaplonski.pl

On Wed, 29 Jun 2016, Matt Fischer wrote:

> We've been using this for some time now (since at least Kilo). We set them
> per flavor not per instance.
> 
> https://wiki.openstack.org/wiki/InstanceResourceQuota
> 
> Bandwidth limits
> 
> Nova Extra Specs keys:
> 
>- vif_inbound_average
>- vif_outbound_average
>- vif_inbound_peak
>- vif_outbound_peak
>- vif_inbound_burst
>- vif_outbound_burst
> 
> 
> 
> On Wed, Jun 29, 2016 at 10:36 AM, Daniel Levy <dl...@us.ibm.com> wrote:
> 
> > Hi all,
> > I'd like to learn about potential solutions anyone out there is using for
> > bandwidth limitations on VMs. Potentially applying QOS (quality of service)
> > rules on the VM ports in an automated fashion.
> > If there are no current solutions, I might submit a blue print to tackle
> > this issue


[Openstack-operators] Storage backend for glance

2016-01-27 Thread Sławek Kapłoński
Hello,

I want to install OpenStack with at least two glance nodes (to have HA)
but with a local filesystem as glance storage. Is it possible to use
something like that in a setup with two glance nodes? Maybe some of you
already run something like that?
I'm asking because, AFAIK, if an image is stored on one glance server
and nova-compute asks the other glance host to download it, the image
will not be available and the instance will end up in ERROR state.
So maybe someone has used local storage in a similar setup (perhaps with NFS
or something like that)? What is your experience with it?
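One common way to do this is to back both glance nodes with the same shared filesystem, e.g. an NFS export mounted at the store directory on each node. A sketch (server name, export path, and mount point below are placeholders, not from this thread):

```ini
# /etc/fstab on each glance node - same NFS export mounted at the store
# directory (commented out; server/export names are placeholders):
#   nfs-server:/export/glance  /var/lib/glance/images  nfs  defaults,_netdev  0 0

# glance-api.conf on both nodes, pointing at the shared mount
# (on older releases this option may live in [DEFAULT] instead):
[glance_store]
default_store = file
filesystem_store_datadir = /var/lib/glance/images
```

With both API nodes writing to the same datadir, whichever node nova-compute hits can serve any image; the trade-off is that the NFS server itself becomes the availability and performance bottleneck.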

-- 
Best regards / Pozdrawiam
Sławek Kapłoński
sla...@kaplonski.pl





Re: [Openstack-operators] [Neutron][Linuxbridge] Problem with configuring linux bridge agent with vxlan networks

2015-10-03 Thread Sławek Kapłoński
Hello,

I'm configuring it manually. DHCP is not working because the vxlan tunnels
are not working at all :/
The compute nodes and network node can ping each other:

admin@network:~$ ping 10.1.0.4
PING 10.1.0.4 (10.1.0.4) 56(84) bytes of data.
64 bytes from 10.1.0.4: icmp_seq=1 ttl=64 time=8.83 ms
64 bytes from 10.1.0.4: icmp_seq=2 ttl=64 time=0.282 ms
^C
--- 10.1.0.4 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.282/4.560/8.838/4.278 ms


-- 
Best regards / Pozdrawiam
Sławek Kapłoński
sla...@kaplonski.pl

On Sat, 03 Oct 2015, James Denton wrote:

> Are your instances getting their ip from DHCP server or are you manually 
> configuring them? Can the network node ping the compute node at 10.1.0.4 and 
> vice-versa?
> 
> Sent from my iPhone
> 
> > On Oct 3, 2015, at 3:55 AM, Sławek Kapłoński <sla...@kaplonski.pl> wrote:
> > 
> > This vlan bridge_mapping I set just to be sure if it will not help for
> > some reason :) Before I tested it without this mapping configured. And
> > in fact I'm not using vlan networks at all (at least now) - I only want
> > to have local vxlan network between instances :)
> > When I booted one instance on the host, in the brqXXX bridge I got a
> > vxlan-10052 port and a tapXXX port (10052 is the VNI assigned to the
> > network in neutron). After booting a second VM I got a second tap interface
> > in the same bridge, so it looks like:
> > 
> > root@compute-2:~# brctl show
> > bridge namebridge idSTP enabledinterfaces
> > brq8fe8a32f-e68000.ce544d0c0e5dnotap691a138a-6c
> >tapbc1e5179-53
> >vxlan-10052
> > virbr08000.5254007611abyesvirbr0-nic
> > 
> > 
> > So it looks fine to me. I have no idea what this virbr0 bridge is - maybe
> > it should be used somehow?
> > 
> > One more thing: the two VMs on one host can ping each other, so the bridge
> > seems to be working fine. The problem is with the vxlan tunnels.
> > 
> > About security groups: by default there is a rule to allow traffic between
> > VMs using the same SG. All my instances use the same security group, so it
> > should not be a problem IMHO.
> > 
> > -- 
> > Best regards / Pozdrawiam
> > Sławek Kapłoński
> > sla...@kaplonski.pl
> > 
> >> On Fri, 02 Oct 2015, James Denton wrote:
> >> 
> >> If eth1 is used for the vxlan tunnel end points, it can't also be used in 
> >> a bridge ala provider_bridge_mappings. You should have a dedicated 
> >> interface or a vlan interface off eth1 (i.e. Eth1.20) that is dedicated to 
> >> the overlay traffic. Move the local_ip address to that interface on 
> >> respective nodes. Verify that you can ping between nodes at each address. 
> >> If this doesn't work, the Neutron pieces won't work. You shouldn't have to 
> >> restart any neutron services, since the IP isn't changing.
> >> 
> >> Once you create a vxlan tenant network and boot some instances, verify 
> >> that the vxlan interface is being setup and placed in the respective 
> >> bridge. You can use 'brctl show' to look at the brq bridge that 
> >> corresponds to the network. You should see a vxlan interface and the tap 
> >> interfaces of your instances. 
> >> 
> >> As always, verify your security groups first when troubleshooting instance 
> >> to instance communication.
> >> 
> >> James
> >> 
> >> Sent from my iPhone
> >> 
> >>> On Oct 2, 2015, at 3:48 PM, Sławek Kapłoński <sla...@kaplonski.pl> wrote:
> >>> 
> >>> Hello,
> >>> 
> >>> I'm trying to configure small openstack infra (one network node, 2
> >>> compute nodes) with linux bridge and vxlan tenant networks. I don't know
> >>> what I'm doing wrong but my instances have no connection between
> >>> each other. On compute hosts I run neutron-plugin-linuxbrigde-agent
> >>> with config like:
> >>> 
> >>> --
> >>> [ml2_type_vxlan]
> >>> # (ListOpt) Comma-separated list of : tuples
> >>> # enumerating
> >>> # ranges of VXLAN VNI IDs that are available for tenant network
> >>> # allocation.
> >>> #
> >>> vni_ranges = 1:2
> >>> 
> >>> # (StrOpt) Multicast group for the VXLAN interface. When configured,
> >>> # will
> >>> # enable sending all broadcast traffic to this multicast group. When
> >&g

[Openstack-operators] [Neutron][Linuxbridge] Problem with configuring linux bridge agent with vxlan networks

2015-10-02 Thread Sławek Kapłoński
Hello,

I'm trying to configure small openstack infra (one network node, 2
compute nodes) with linux bridge and vxlan tenant networks. I don't know
what I'm doing wrong but my instances have no connection between
each other. On compute hosts I run neutron-plugin-linuxbrigde-agent
with config like:

--
[ml2_type_vxlan]
# (ListOpt) Comma-separated list of : tuples
# enumerating
# ranges of VXLAN VNI IDs that are available for tenant network
# allocation.
#
vni_ranges = 1:2

# (StrOpt) Multicast group for the VXLAN interface. When configured,
# will
# enable sending all broadcast traffic to this multicast group. When
# left
# unconfigured, will disable multicast VXLAN mode.
#
# vxlan_group =
# Example: vxlan_group = 239.1.1.1

[securitygroup]
# Controls if neutron security group is enabled or not.
# It should be false when you use nova security group.
enable_security_group = True

# Use ipset to speed-up the iptables security groups. Enabling ipset
# support
# requires that ipset is installed on L2 agent node.
enable_ipset = True

firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

[ovs]
local_ip = 10.1.0.4

[agent]
tunnel_types = vxlan

[linuxbridge]
physical_interface_mappings = physnet1:eth1

[vxlan]
local_ip = 10.1.0.4
l2_population = True
enable_vxlan = True
---

Eth1 is my "tunnel network" which should be used for the tunnels. When I
spawn VMs on compute 1 and 2, and configure the network manually on
both VMs (DHCP is not working either, probably because of the broken tunnels),
they cannot ping each other.
Even when I started two instances on same host and they are both
connected to one bridge:

---
root@compute-2:/usr/lib/python2.7/dist-packages/neutron# brctl show
bridge name bridge id   STP enabled interfaces
brq8fe8a32f-e6  8000.ce544d0c0e5d   no  tap691a138a-6c
tapbc1e5179-53
vxlan-10052
virbr0  8000.5254007611ab   yes virbr0-nic
---

those 2 VMs are not pinging each other :/
I don't really have any experience with linux bridge (until now I have always
used OVS). Maybe some of you will know what I should check or what I might
have configured wrong :/ Generally I installed this OpenStack according to the
official OpenStack documentation, but the docs describe ovs+gre tunnels, and
that is what I changed. I'm using Ubuntu 14.04 and OpenStack Kilo installed
from the cloud archive repo.
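A few checks that usually narrow this kind of problem down (standard iproute2 commands; the interface names come from the brctl output above):

```shell
# Sketch: sanity checks for a linuxbridge + vxlan setup that passes no traffic.
if command -v ip >/dev/null 2>&1; then
    # 1. Does the vxlan interface exist, and which VNI / local IP / device
    #    is it bound to?
    ip -d link show vxlan-10052 2>/dev/null || echo "vxlan-10052 not present here"

    # 2. Are remote VTEPs known? With l2_population enabled there should be
    #    fdb entries pointing at the other nodes' local_ip addresses.
    bridge fdb show dev vxlan-10052 2>/dev/null || echo "no fdb entries here"
else
    echo "iproute2 not available; commands shown for illustration"
fi
# 3. Is VXLAN UDP traffic actually leaving eth1? (Linux kernel vxlan defaults
#    to UDP port 8472.) Run while pinging from a VM:
# tcpdump -ni eth1 udp port 8472
```

If the fdb for the vxlan interface is empty, l2_population has nothing to populate it with, which fits the agent-side configuration problem discussed in the replies (local_ip under the wrong section, overlay interface shared with a provider bridge, etc.).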

-- 
Best regards / Pozdrawiam
Sławek Kapłoński
sla...@kaplonski.pl





Re: [Openstack-operators] [Neutron][Linuxbridge] Problem with configuring linux bridge agent with vxlan networks

2015-10-02 Thread Sławek Kapłoński
Hello,

Yes. I was reading mostly
http://docs.openstack.org/networking-guide/scenario_legacy_lb.html
because IMHO this is what I want for now :). I don't want DVR or L3 HA (at
least for now), only traffic via vxlan between two VMs which are connected to
a tenant network (this is probably called east-west traffic, yes?)

-- 
Best regards / Pozdrawiam
Sławek Kapłoński
sla...@kaplonski.pl

On Fri, 02 Oct 2015, Matt Kassawara wrote:

> Did you review the scenarios in the networking guide [1]?
> 
> [1] http://docs.openstack.org/networking-guide/deploy.html
> 
> On Fri, Oct 2, 2015 at 2:41 PM, Sławek Kapłoński <sla...@kaplonski.pl>
> wrote:
> 
> > Hello,
> >
> > I'm trying to configure small openstack infra (one network node, 2
> > compute nodes) with linux bridge and vxlan tenant networks. I don't know
> > what I'm doing wrong but my instances have no connection between
> > each other. On compute hosts I run neutron-plugin-linuxbrigde-agent
> > with config like:
> >
> > --
> > [ml2_type_vxlan]
> > # (ListOpt) Comma-separated list of : tuples
> > # enumerating
> > # ranges of VXLAN VNI IDs that are available for tenant network
> > # allocation.
> > #
> > vni_ranges = 1:2
> >
> > # (StrOpt) Multicast group for the VXLAN interface. When configured,
> > # will
> > # enable sending all broadcast traffic to this multicast group. When
> > # left
> > # unconfigured, will disable multicast VXLAN mode.
> > #
> > # vxlan_group =
> > # Example: vxlan_group = 239.1.1.1
> >
> > [securitygroup]
> > # Controls if neutron security group is enabled or not.
> > # It should be false when you use nova security group.
> > enable_security_group = True
> >
> > # Use ipset to speed-up the iptables security groups. Enabling ipset
> > # support
> > # requires that ipset is installed on L2 agent node.
> > enable_ipset = True
> >
> > firewall_driver =
> > neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
> >
> > [ovs]
> > local_ip = 10.1.0.4
> >
> > [agent]
> > tunnel_types = vxlan
> >
> > [linuxbridge]
> > physical_interface_mappings = physnet1:eth1
> >
> > [vxlan]
> > local_ip = 10.1.0.4
> > l2_population = True
> > enable_vxlan = True
> > ---
> >
> > Eth1 is my "tunnel network" which should be used for tunnels. When I
> > spawn vms on compute 1 and 2 and after configuring network manually on
> > both vms (dhcp is not working also because of broken tunnels probably)
> > it not pings.
> > Even when I started two instances on same host and they are both
> > connected to one bridge:
> >
> > ---
> > root@compute-2:/usr/lib/python2.7/dist-packages/neutron# brctl show
> > bridge name bridge id   STP enabled interfaces
> > brq8fe8a32f-e6  8000.ce544d0c0e5d   no
> > tap691a138a-6c
> > tapbc1e5179-53
> > vxlan-10052
> > virbr0  8000.5254007611ab   yes virbr0-nic
> > ---
> >
> > those 2 vms are not pinging each other :/
> > I don't have any expeirence with linux bridge in fact (For now I was always
> > using ovs). Maybe someone of You will know what I should check or what I
> > should
> > configure wrong :/ Generally I was installing this openstack according to
> > official openstack documentation but in this docs there is info about
> > ovs+gre
> > tunnels and that is what I changed. I'm using Ubuntu 14.04 and Openstack
> > Kilo
> > installed from cloud archive repo.
> >
> > --
> > Best regards / Pozdrawiam
> > Sławek Kapłoński
> > sla...@kaplonski.pl
> >
> >


Re: [Openstack-operators] Allow user to see instances of other users

2015-06-12 Thread Sławek Kapłoński
Hello,

I don't know whether such a solution will work properly. I don't have a way to
check it right now :/

--
Pozdrawiam / Best regards
Sławek Kapłoński
sla...@kaplonski.pl

Dnia czwartek, 11 czerwca 2015 18:28:57 Mathieu Gagné pisze:
 haha, you are right.
 
 Should this also be changed so you don't end up with admin privileges
 on all tenants?
 
 From:
 
   "admin_or_owner": "is_admin:True or project_id:%(project_id)s",
 
 To:
 
   "admin_or_owner": "role:admin or project_id:%(project_id)s",
 
 Note: I'm trying to find a temporary way to not have to wait for Nova to
 remove all occurrences of "if not context.is_admin".
 
 Mathieu
 
 On 2015-06-11 6:13 PM, Sławek Kapłoński wrote:
  Hello,
  
  But AFAIK this will give someone with the special_role the same privileges
  as someone who has the admin role, right?
  
  --
  Pozdrawiam / Best regards
  Sławek Kapłoński
  sla...@kaplonski.pl
  
  Dnia czwartek, 11 czerwca 2015 18:08:38 Mathieu Gagné pisze:
  You can add your new role to this policy:
  "context_is_admin": "role:admin or role:special_role",
  
  It will set is_admin to True in the context. I'm not sure of the
  side-effect to be honest. Use at your own risk...
  
  Mathieu
  
  On 2015-06-11 4:59 PM, George Shuklin wrote:
  Thank you!
  
  You saved me a day of the work. Well, we'll move a script to admin user
  instead of normal user with the special role.
  
  PS And thanks for filling a bugreport too.
  
  On 06/11/2015 10:40 PM, Sławek Kapłoński wrote:
  Hello,
  
  I don't think it is possible because in nova/db/sqlalchemy/api.py in
  function instance_get_all_by_filters You have something like:
  
  if not context.is_admin:
  # If we're not admin context, add appropriate filter..
  
  if context.project_id:
  filters['project_id'] = context.project_id
  
  else:
  filters['user_id'] = context.user_id
  
  This is from Juno, but in Kilo it is the same. So in fact even if You
  will set proper policy.json rules it will still require admin context
  to
  search instances from different tenants. Maybe I'm wrong and this is in
  some other place possible and maybe someone will show me where because
  I
  was also looking for it last time :)
  
  --
  Pozdrawiam / Best regards
  Sławek Kapłoński
  sla...@kaplonski.pl
  
  Dnia czwartek, 11 czerwca 2015 21:06:31 George Shuklin pisze:
  Hello.
  
  I'm trying to allow a user with special role to see all instances of
  all
  tenants without giving him admin privileges.
  
  My initial attempt was to change policy.json for nova to
  "compute:get_all_tenants": "role:special_role or is_admin:True".
  
  But it didn't work well.
  
  The command (nova list --all-tenants) is not failing anymore (no
  'ERROR
  (Forbidden): Policy doesn't allow compute:get_all_tenants to be
  performed.'), but the returned list is empty:
  
  nova list  --all-tenants
  ++--+++-+--+
  
  | ID | Name | Status | Task State | Power State | Networks |
  
  ++--+++-+--+
  ++--+++-+--+
  
  
  Any ideas how to allow a user without admin privileges to see all
  instances?
  
  
  


Re: [Openstack-operators] Allow user to see instances of other users

2015-06-11 Thread Sławek Kapłoński
Hello,

I don't think it is possible because in nova/db/sqlalchemy/api.py in function 
instance_get_all_by_filters You have something like:

if not context.is_admin:
# If we're not admin context, add appropriate filter..
if context.project_id:
filters['project_id'] = context.project_id
else:
filters['user_id'] = context.user_id

This is from Juno, but in Kilo it is the same. So in fact even if you set
proper policy.json rules, it will still require an admin context to search for
instances from different tenants. Maybe I'm wrong and this is possible
somewhere else; perhaps someone can show me where, because I was also looking
for it recently :)
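For illustration, here is a minimal standalone sketch of what that Juno/Kilo scoping does. This is not nova's actual code: `RequestContext` here is a simplified stand-in, and `scope_filters` condenses the snippet quoted above into a testable function.

```python
# Standalone sketch of the scoping in nova's instance_get_all_by_filters
# (Juno/Kilo). RequestContext is a simplified stand-in, not nova's class.
from dataclasses import dataclass
from typing import Optional


@dataclass
class RequestContext:
    is_admin: bool = False
    project_id: Optional[str] = None
    user_id: Optional[str] = None


def scope_filters(context: RequestContext, filters: dict) -> dict:
    """Force non-admin contexts into their own project/user scope."""
    filters = dict(filters)
    if not context.is_admin:
        # This runs regardless of what policy.json allowed at the API layer,
        # which is why granting compute:get_all_tenants alone is not enough.
        if context.project_id:
            filters['project_id'] = context.project_id
        else:
            filters['user_id'] = context.user_id
    return filters


# A user granted compute:get_all_tenants via policy, but without an admin
# context, still only sees their own project:
user_ctx = RequestContext(is_admin=False, project_id='tenant-a')
print(scope_filters(user_ctx, {'all_tenants': True}))
# -> {'all_tenants': True, 'project_id': 'tenant-a'}

# An admin context leaves the filters untouched:
admin_ctx = RequestContext(is_admin=True)
print(scope_filters(admin_ctx, {'all_tenants': True}))
# -> {'all_tenants': True}
```

This matches the symptom reported earlier in the thread: the policy change stops the Forbidden error, but the DB layer silently re-scopes the query, so `nova list --all-tenants` comes back empty instead of listing other tenants' instances.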

--
Pozdrawiam / Best regards
Sławek Kapłoński
sla...@kaplonski.pl

Dnia czwartek, 11 czerwca 2015 21:06:31 George Shuklin pisze:
 Hello.
 
 I'm trying to allow a user with special role to see all instances of all
 tenants without giving him admin privileges.
 
 My initial attempt was to change policy.json for nova to
 "compute:get_all_tenants": "role:special_role or is_admin:True".
 
 But it didn't work well.
 
 The command (nova list --all-tenants) is not failing anymore (no 'ERROR
 (Forbidden): Policy doesn't allow compute:get_all_tenants to be
 performed.'), but the returned list is empty:
 
 nova list  --all-tenants
 ++--+++-+--+
 
 | ID | Name | Status | Task State | Power State | Networks |
 
 ++--+++-+--+
 ++--+++-+--+
 
 
 Any ideas how to allow a user without admin privileges to see all instances?
 
 
 


Re: [Openstack-operators] Allow user to see instances of other users

2015-06-11 Thread Sławek Kapłoński
Hello,

I thought so, but I was not sure :)
I just filed a bug report for it: https://bugs.launchpad.net/nova/+bug/1464381


--
Pozdrawiam / Best regards
Sławek Kapłoński
sla...@kaplonski.pl

Dnia czwartek, 11 czerwca 2015 13:02:16 Clint Byrum pisze:
 Excerpts from Sławek Kapłoński's message of 2015-06-11 12:40:36 -0700:
  Hello,
  
  I don't think it is possible because in nova/db/sqlalchemy/api.py in
  function instance_get_all_by_filters You have something like:
  
  if not context.is_admin:
  # If we're not admin context, add appropriate filter..
  
  if context.project_id:
  filters['project_id'] = context.project_id
  
  else:
  filters['user_id'] = context.user_id
  
  This is from Juno, but in Kilo it is the same. So in fact even if You will
  set proper policy.json rules it will still require admin context to
  search instances from different tenants. Maybe I'm wrong and this is in
  some other place possible and maybe someone will show me where because I
  was also looking for it last time :)
 
 Looks like a bug to me. The check should just enforce that there is one
 of those filters if not context.is_admin.
 
 https://launchpad.net/nova/+filebug
 
 I'd suggest referencing this mailing list thread.
 


Re: [Openstack-operators] Allow user to see instances of other users

2015-06-11 Thread Sławek Kapłoński
Hello,

But AFAIK this will give someone with the special_role the same privileges as
someone who has the admin role, right?

--
Pozdrawiam / Best regards
Sławek Kapłoński
sla...@kaplonski.pl

Dnia czwartek, 11 czerwca 2015 18:08:38 Mathieu Gagné pisze:
 You can add your new role to this policy:
 
  "context_is_admin": "role:admin or role:special_role",
 
 It will set is_admin to True in the context. I'm not sure of the
 side-effect to be honest. Use at your own risk...
 
 Mathieu
 
 On 2015-06-11 4:59 PM, George Shuklin wrote:
  Thank you!
  
  You saved me a day of the work. Well, we'll move a script to admin user
  instead of normal user with the special role.
  
  PS And thanks for filling a bugreport too.
  
  On 06/11/2015 10:40 PM, Sławek Kapłoński wrote:
  Hello,
  
  I don't think it is possible because in nova/db/sqlalchemy/api.py in
  function instance_get_all_by_filters You have something like:
  
  if not context.is_admin:
  # If we're not admin context, add appropriate filter..
  
  if context.project_id:
  filters['project_id'] = context.project_id
  
  else:
  filters['user_id'] = context.user_id
  
  This is from Juno, but in Kilo it is the same. So in fact even if You
  will set proper policy.json rules it will still require admin context to
  search instances from different tenants. Maybe I'm wrong and this is in
  some other place possible and maybe someone will show me where because I
  was also looking for it last time :)
  
  --
  Pozdrawiam / Best regards
  Sławek Kapłoński
  sla...@kaplonski.pl
  
  Dnia czwartek, 11 czerwca 2015 21:06:31 George Shuklin pisze:
  Hello.
  
  I'm trying to allow a user with special role to see all instances of all
  tenants without giving him admin privileges.
  
  My initial attempt was to change policy.json for nova to
  "compute:get_all_tenants": "role:special_role or is_admin:True".
  
  But it didn't work well.
  
  The command (nova list --all-tenants) is not failing anymore (no 'ERROR
  (Forbidden): Policy doesn't allow compute:get_all_tenants to be
  performed.'), but the returned list is empty:
  
   nova list --all-tenants
   +----+------+--------+------------+-------------+----------+
   | ID | Name | Status | Task State | Power State | Networks |
   +----+------+--------+------------+-------------+----------+
   +----+------+--------+------------+-------------+----------+
  
  
  Any ideas how to allow a user without admin privileges to see all
  instances?
  
  
  
  ___
  OpenStack-operators mailing list
  OpenStack-operators@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
  
  


Re: [Openstack-operators] Venom vulnerability

2015-05-14 Thread Sławek Kapłoński
Hello,

So if I understand you correctly, it is not so dangerous if I'm using
libvirt with AppArmor and libvirt adds AppArmor rules for
every QEMU process, yes?

-- 
Best regards / Pozdrawiam
Sławek Kapłoński
sla...@kaplonski.pl

On Wed, May 13, 2015 at 04:01:05PM +0100, Daniel P. Berrange wrote:
 On Wed, May 13, 2015 at 02:31:26PM +, Tim Bell wrote:
  
  Looking through the details of the Venom vulnerability,
  https://securityblog.redhat.com/2015/05/13/venom-dont-get-bitten/, it
  would appear that the QEMU processes need to be restarted.
  
  Our understanding is thus that a soft reboot of the VM is not sufficient
  but a hard one would be OK.
 
 Yes, the key requirement is that you get a new QEMU process running. So
 this means a save-to-disk followed by restore, or a shutdown + boot,
 or a live migration to another (patched) host.
 
 In current Nova code a hard reboot operation will terminate the QEMU
 process and then start it again, which is the same as shutdown + boot
 really. A soft reboot will also terminate the QEMU process and start
 it again, but when terminating it, it will try to do so gracefully,
 i.e. init gets a chance to do an orderly shutdown of services. A soft
 reboot though is not guaranteed to ever finish / happen, since it
 relies on a co-operating guest OS to respond to the ACPI signal. So
 a soft reboot is probably not a reliable way of guaranteeing you get
 a new QEMU process.
 
 My recommendation would be a live migration, or save to disk and restore,
 though, since those both minimise interruption to your guest OS workloads,
 whereas a hard reboot or shutdown obviously kills them.
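
 Daniel's advice boils down to a simple decision rule: any remediation is
 valid only if it produces a new QEMU process, and the less disruptive
 options are preferred. A sketch of that rule (the function and its names
 are illustrative, not a real nova API):

 ```python
 def pick_venom_remediation(can_live_migrate, can_save_restore):
     """Return the preferred remediation for a running instance.

     Only actions guaranteed to start a new QEMU process qualify; a soft
     reboot is excluded because the guest may ignore the ACPI signal.
     """
     if can_live_migrate:
         # Least disruptive: guest keeps running, lands on a patched host.
         return 'live-migrate to a patched host'
     if can_save_restore:
         # Guest state is preserved across the QEMU restart.
         return 'save to disk, then restore'
     # Disruptive fallback: terminates QEMU and starts it again.
     return 'hard reboot (or shutdown + boot)'

 print(pick_venom_remediation(True, True))
 print(pick_venom_remediation(False, False))
 ```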
 
 
 Also note that this kind of bug in QEMU device emulation is the poster
 child example for the benefit of having sVirt (either SELinux or AppArmor
 backends) enabled on your compute hosts. With sVirt, QEMU is restricted
 to only access resources that have been explicitly assigned to it. This
 makes it very difficult (likely/hopefully impossible[1]) for a compromised
 QEMU to be used to break out to compromise the host as a whole, likewise
 protect against compromising other QEMU processes on the same host. The
 common Linux distros like RHEL, Fedora, Debian, Ubuntu, etc all have
 sVirt feature available and enabled by default, and OpenStack doesn't
 do anything to prevent it from working. Hopefully no one is actively
 disabling it, leaving themselves open to attack...
 
 Finally, QEMU processes don't run as root by default; they use a
 'qemu' user account with minimal privileges, which adds another layer
 of protection against total host compromise.
 
 So while this bug is no doubt serious and worth patching asap, IMHO,
 it is not the immediate end of the world scale disaster that some
 are promoting it to be.
 
 
 NB, this mail is my personal analysis of the problem - please refer
 to the above linked redhat.com blog post and/or CVE errata notes,
 or contact Red Hat support team, for the official Red Hat view on
 this.
 
 Regards,
 Daniel
 
 [1] I'll never claim anything is 100% foolproof, but it is intended
 to be impossible to escape sVirt, so any such viable escape routes
 would themselves be considered security bugs.
 -- 
 |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
 |: http://libvirt.org  -o- http://virt-manager.org :|
 |: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
 |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|
 


Re: [Openstack-operators] improve perfomance Neutron VXLAN

2015-01-22 Thread Sławek Kapłoński
As I wrote earlier, for me it is best to have MTU 9000 on the hosts and 8950
on the instances. Then I have full speed between instances. With a lower MTU
on the instances I get about 2-2.5 Gbps, and I saw that the vhost-net process
on the host was using 100% of one CPU core. I'm using libvirt with KVM; maybe
you are using something else and it will be different on your hosts.


Slawek Kaplonski
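
The 50-byte gap between host and guest MTU used above matches the VXLAN
encapsulation overhead over IPv4; a quick check (constant names are mine,
the header sizes are the standard ones):

```python
# VXLAN-over-IPv4 encapsulation overhead, in bytes, relative to the
# underlay MTU (the outer Ethernet header does not count against MTU).
OUTER_IPV4 = 20
OUTER_UDP = 8
VXLAN_HEADER = 8
INNER_ETHERNET = 14  # the encapsulated frame's own L2 header

overhead = OUTER_IPV4 + OUTER_UDP + VXLAN_HEADER + INNER_ETHERNET

def guest_mtu(host_mtu):
    """Largest guest MTU whose VXLAN-encapsulated frames still fit."""
    return host_mtu - overhead

print(overhead)          # 50
print(guest_mtu(9000))   # 8950 -- matches the 9000/8950 pairing above
print(guest_mtu(1500))   # 1450 -- typical guest MTU on a 1500-byte underlay
```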


On 22.01.2015 at 20:45, Pedro Sousa wrote:

Hi Slawek,

I've tried several options but that one that seems to work better is MTU
1450 on VM and MTU 1600 on the host. With MTU 1400 on the VM I would get
freezes and timeouts.

Still, I get about 2.2 Gbit/sec, while on the host I get 9 Gbit/sec. Do you
think that is normal?

Thanks,
Pedro Sousa




On Thu, Jan 22, 2015 at 7:32 PM, Sławek Kapłoński sla...@kaplonski.pl wrote:

Hello,

Setting it in the dnsmasq file in neutron will be OK. It will then force
DHCP option 26 (interface MTU) on the VMs.
You can also change it manually on the VMs to test.

Slawek Kaplonski
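
Concretely, for the 9000/8950 setup described above, the option in neutron's
dnsmasq config file would look like this (the path is the one mentioned later
in this thread and may vary by distribution):

```
# /etc/neutron/dnsmasq-neutron.conf (illustrative path)
# DHCP option 26 = interface MTU; push 8950 to instances
dhcp-option-force=26,8950
```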

On 22.01.2015 at 17:06, Pedro Sousa wrote:

Hi Slawek,

I'll test this. Did you change the MTU in the dnsmasq file in /etc/neutron/?
Or do you need to change it in other places too?

Thanks,
Pedro Sousa

On Wed, Jan 21, 2015 at 4:26 PM, Sławek Kapłoński sla...@kaplonski.pl wrote:

 I have something similar, and I also got about 2-2.5 Gbps between VMs.
 When I changed it to 8950 on the VMs (so in the neutron conf; 50 less
 than on the hosts), it was much better.
 You can check it: when you run a test between VMs, on the host there is
 a process called vhost-net (or something like that) that uses 100% of
 one CPU core, and that is IMHO the bottleneck.

 Slawek Kaplonski

 On Wed, Jan 21, 2015 at 04:12:02PM +, Pedro Sousa wrote:
   Hi Slawek,
  
   I have dhcp-option-force=26,1400 in neutron-dnsmasq.conf and
 MTU=9000 on
   network-interfaces in the operating system.
  
   Do I need to change somewhere else?
  
   Thanks,
   Pedro Sousa
  
   On Wed, Jan 21, 2015 at 4:07 PM, Sławek Kapłoński sla...@kaplonski.pl
   wrote:
  
Hello,
   
    Try to set bigger jumbo frames on the hosts and VMs. For example, on the
    hosts you can set 9000 and on the VMs 8950, and then check. It helped me
    with a similar problem.
   
Slawek Kaplonski
   
    On Wed, Jan 21, 2015 at 03:22:50PM +, Pedro Sousa wrote:
 Hi all,

 Is there a way to improve network performance on my instances with VXLAN?
 I changed the MTU on the physical interfaces to 1600; still, performance
 is lower than on bare-metal hosts:

 *On Instance:*

 [root@vms6-149a71e8-1f2a-4d6e-bba4-e70dfa42b289 ~]# iperf3 -s
 -----------------------------------------------------------
 Server listening on 5201
 -----------------------------------------------------------
 Accepted connection from 10.0.66.35, port 42900
 [  5] local 10.0.66.38 port 5201 connected to 10.0.66.35 port 42901
 [ ID] Interval           Transfer     Bandwidth
 [  5]   0.00-1.00   sec   189 MBytes  1.59 Gbits/sec
 [  5]   1.00-2.00   sec   245 MBytes  2.06 Gbits/sec
 [  5]   2.00-3.00   sec   213 MBytes  1.78 Gbits/sec
 [  5]   3.00-4.00   sec   227 MBytes  1.91 Gbits/sec
 [  5]   4.00-5.00   sec   235 MBytes  1.97 Gbits/sec
 [  5]   5.00-6.00   sec   235 MBytes  1.97 Gbits/sec
 [  5]   6.00-7.00   sec   234 MBytes  1.96 Gbits/sec
 [  5]   7.00-8.00   sec   235 MBytes  1.97 Gbits/sec
 [  5]   8.00-9.00   sec   244 MBytes  2.05 Gbits/sec
 [  5]   9.00-10.00  sec   234 MBytes  1.97 Gbits/sec
 [  5]  10.00-10.04  sec  9.30 MBytes  1.97 Gbits/sec
 - - - - - - - - - - - - - - - - - - - - - - - - -
 [ ID] Interval           Transfer     Bandwidth       Retr
 [  5]   0.00-10.04  sec  2.25 GBytes  1.92 Gbits/sec   43   sender

Re: [Openstack-operators] [glance] how to update the contents of an image

2014-10-07 Thread Sławek Kapłoński
Hello,

Yes, I agree that it is not a big problem when Horizon shows "Image not
found", but I saw this discussion and thought that I would ask about it :)
It would be nice to have some other info, for example: "Image 1 (archived)"
or something like that :)

---
Best regards
Sławek Kapłoński
sla...@kaplonski.pl

On Tuesday, 7 October 2014 at 18:21:13, you wrote:
 I've never worried about Image not Found, as it's only a UI concern. IMO
 it lets the users know something has changed. Totally optional, and the
 same effect can be gained by just renaming it -OLD and leaving it public.
 At some point, it still needs to be removed.
 
 On Tuesday, October 7, 2014, Sławek Kapłoński sla...@kaplonski.pl wrote:
  Hello,
  
  I used your solution and made the old images private in that change, but
  then there is one more problem: all instances spawned from those old
  images show "image: not found" in Horizon.
  Do you maybe know how to solve that?
  
  ---
  Best regards
  Sławek Kapłoński
  sla...@kaplonski.pl
  
  On Tuesday, 7 October 2014 at 10:05:57, Abel Lopez wrote:
   You are correct, deleted images are not deleted from the DB; rather,
   their row has 'deleted=1', so specifying the UUID of another image
   already in glance for a new image being uploaded will end in tears.
   
   What I was trying to convey was, when Christian is uploading a new image
   of the same name as an existing image, the UUID will be different. IMO,
   the correct process should be:
   1. Make desired changes to your image.
   2. Rename the existing image (e.g. Fedora-20-OLD)
   3. (optional) Make the old image private ( is-public 0 )
   4. Upload the new image using the desired name (e.g. Fedora-20 or
      Fedora-20-LATEST)
   
   Obviously I assume there was testing for viability of the image before
   it
   was uploaded to glance.
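   
   The four-step rotation above can be sketched as plain logic (an
   illustration over an in-memory dict, not real glanceclient calls;
   the -OLD suffix follows the example above):

   ```python
   def rotate_image(images, name, new_data):
       """Publish `new_data` under `name`, keeping the old image as `name`-OLD.

       `images` maps image name -> {'data': ..., 'is_public': ...}.
       """
       if name in images:
           old = images.pop(name)
           old['is_public'] = False          # step 3: hide the old image
           images[name + '-OLD'] = old       # step 2: rename it
       # step 4: upload the new image under the desired name
       # (in glance this would get a brand-new UUID)
       images[name] = {'data': new_data, 'is_public': True}
       return images

   images = {'Fedora-20': {'data': 'v1', 'is_public': True}}
   rotate_image(images, 'Fedora-20', 'v2')
   print(sorted(images))   # ['Fedora-20', 'Fedora-20-OLD']
   ```

   Users keep booting "Fedora-20" by name, while instances already built from
   the old image still resolve to the renamed (now private) one.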
   
   For more information, be sure to catch my talk on Tuesday 9am at the
   summit.
   
   On Oct 7, 2014, at 9:58 AM, George Shuklin george.shuk...@gmail.com wrote:
 As far as I know, it is not possible to assign the UUID from a deleted
 image to the new one, because deleted images keep their metadata in the DB.

On 09/26/2014 04:43 PM, Abel Lopez wrote:
Glance images are immutable. In order to update it, you should do as
  
  you
  
are doing, but then rename the old image, then upload the updated
one.
Take note of the UUID as well.

 On Friday, September 26, 2014, Christian Berendt bere...@b1-systems.de
 wrote: I'm trying to update the contents of an image, but it looks like
 it is not working at all.

First I upload a test image:

---snip---
# dd if=/dev/urandom of=testing.img bs=1M count=10
# glance image-create --disk-format raw --container-format bare --name TESTING --file testing.img
---snap---

Now I want to overwrite the contents of this image:

---snip---
# dd if=/dev/urandom of=testing.img bs=1M count=20
# glance image-update --file testing.img TESTING
---snap---

After this call the size of the image is still the same as before
(10485760 bytes).

I do not see any issues in the logfiles of glance-api and glance-registry.
What am I doing wrong?

Is it not possible to update the contents of an image?

Christian.

--
Christian Berendt
Cloud Solution Architect
Mail: bere...@b1-systems.de

B1 Systems GmbH
Osterfeldstraße 7 / 85088 Vohburg / http://www.b1-systems.de
GF: Ralph Dehner / Unternehmenssitz: Vohburg / AG: Ingolstadt,HRB
3537



Re: [Openstack-operators] Openstack and mysql galera with haproxy

2014-09-23 Thread Sławek Kapłoński
Hello,


On Monday, 22 September 2014 at 22:02:26, Sławek Kapłoński wrote:
 Hello,
 
 Answers below
 
 ---
 Best regards
 Sławek Kapłoński
 sla...@kaplonski.pl
 
  On Monday, 22 September 2014 at 13:41:51, Jay Pipes wrote:
  Hi Peter, Sławek, answers inline...
  
  On 09/22/2014 08:12 AM, Peter Boros wrote:
   Hi,
   
    StaleDataError is not raised by MySQL, but rather by SQLAlchemy. After a
    quick look, it seems like SQLAlchemy raises this if the UPDATE updated a
    different number of rows than it expected. I am not sure what the
    expectation is based on; perhaps somebody can chime in and we can put
    this together. What transaction isolation level are you running on?
  
  The transaction isolation level is REPEATABLE_READ, unless Sławek has
  changed the defaults (unlikely).
 
 For sure I didn't change it
 
    For the timeout setting in neutron: that's a good way to approach it
    too; you can even be more aggressive and set it to a few seconds. In
    MySQL, making connections is very cheap (at least compared to other
    databases), so an idle timeout of a few seconds for a connection is
    typical.
   
    On Mon, Sep 22, 2014 at 12:35 PM, Sławek Kapłoński sla...@kaplonski.pl
    wrote:
   Hello,
   
    Thanks for your explanations. I thought so, and I have now decreased
    idle_connection_timeout in neutron and nova. Now, when the master server
    comes back to the cluster, in less than one minute all connections are
    again made to this master node, because the old connections made to the
    backup node are closed. So for now it looks almost perfect, but when I
    now test the cluster (with the master node active and all connections
    established to this node), in neutron I still sometimes see errors like:
    StaleDataError: UPDATE statement on table 'ports' expected to update 1
    row(s); 0 were matched.
   
   and also today I found errors like:
    2014-09-22 11:38:05.715 11474 INFO sqlalchemy.engine.base.Engine [-] ROLLBACK
    2014-09-22 11:38:05.784 11474 ERROR neutron.openstack.common.db.sqlalchemy.session [-] DB exception wrapped.
    2014-09-22 11:38:05.784 11474 TRACE neutron.openstack.common.db.sqlalchemy.session Traceback (most recent call last):
    2014-09-22 11:38:05.784 11474 TRACE neutron.openstack.common.db.sqlalchemy.session   File /usr/lib/python2.7/dist-packages/neutron/openstack/common/db/sqlalchemy/session.py, line 524, in _wrap
    2014-09-22 11:38:05.784 11474 TRACE neutron.openstack.common.db.sqlalchemy.session     return f(*args, **kwargs)
    2014-09-22 11:38:05.784 11474 TRACE
   
   From looking up the code, it looks like you are using Havana [1]. The
  
  code in the master branch of Neutron now uses oslo.db, not
  neutron.openstack.common.db, so this issue may have been resolved in
  later versions of Neutron.
 
  Yes, I'm using Havana, and I have no possibility of upgrading quickly to
  Icehouse (about the master branch I don't even want to think :)). Are you
  telling me that this problem will exist in Havana and can't be fixed in
  that release?
 
   [1]
   https://github.com/openstack/neutron/blob/stable/havana/neutron/openstack/common/db/sqlalchemy/session.py#L524
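   
   In the absence of a fix in Havana, one common workaround for this class of
   error is to retry the operation when SQLAlchemy reports a stale row count.
   A sketch of that idea (the StaleDataError class here is a local stand-in
   for sqlalchemy.orm.exc.StaleDataError, and the decorator is illustrative,
   not the oslo.db retry machinery):

   ```python
   import functools

   class StaleDataError(Exception):
       """Stand-in for sqlalchemy.orm.exc.StaleDataError (illustrative)."""

   def retry_on_stale(max_attempts=3):
       """Retry a DB operation a few times if the UPDATE matched 0 rows."""
       def decorator(func):
           @functools.wraps(func)
           def wrapper(*args, **kwargs):
               for attempt in range(1, max_attempts + 1):
                   try:
                       return func(*args, **kwargs)
                   except StaleDataError:
                       # Last attempt: give up and re-raise to the caller.
                       if attempt == max_attempts:
                           raise
           return wrapper
       return decorator

   calls = {'n': 0}

   @retry_on_stale(max_attempts=3)
   def update_port():
       calls['n'] += 1
       if calls['n'] < 3:
           raise StaleDataError()   # simulate two stale reads, then success
       return 'updated'

   print(update_port())   # 'updated' on the third attempt
   ```

   Retrying only papers over the symptom; routing all writes to a single
   Galera node, as discussed above, addresses the cause.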
  
    neutron.openstack.common.db.sqlalchemy.session   File /usr/lib/python2.7/dist-packages/neutron/openstack/common/db/sqlalchemy/session.py, line 718, in flush
    2014-09-22 11:38:05.784 11474 TRACE neutron.openstack.common.db.sqlalchemy.session     return super(Session, self).flush(*args, **kwargs)
    2014-09-22 11:38:05.784 11474 TRACE neutron.openstack.common.db.sqlalchemy.session   File /usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py, line 1818, in flush
    2014-09-22 11:38:05.784 11474 TRACE neutron.openstack.common.db.sqlalchemy.session     self._flush(objects)
    2014-09-22 11:38:05.784 11474 TRACE neutron.openstack.common.db.sqlalchemy.session   File /usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py, line 1936, in _flush
    2014-09-22 11:38:05.784 11474 TRACE neutron.openstack.common.db.sqlalchemy.session     transaction.rollback(_capture_exception=True)
    2014-09-22 11:38:05.784 11474 TRACE neutron.openstack.common.db.sqlalchemy.session   File /usr/lib/python2.7/dist-packages/sqlalchemy/util/langhelpers.py, line 58, in __exit__
    2014-09-22 11:38:05.784 11474 TRACE neutron.openstack.common.db.sqlalchemy.session     compat.reraise(exc_type, exc_value, exc_tb)
    2014-09-22 11:38:05.784 11474 TRACE neutron.openstack.common.db.sqlalchemy.session   File /usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py, line 1900, in _flush
    2014-09-22 11:38:05.784 11474 TRACE neutron.openstack.common.db.sqlalchemy.session     flush_context.execute()
    2014-09-22 11:38:05.784 11474 TRACE neutron.openstack.common.db.sqlalchemy.session   File /usr/lib/python2.7/dist-packages/sqlalchemy/orm/unitofwork.py, line 372, in execute
   2014-09-22 11:38:05.784 11474