Re: [Openstack-operators] [Openstack] [nova] Cleaning up unused images in the cache

2015-04-29 Thread Leslie-Alexandre DENIS

Dear Joe,

Thanks for your kind reply; the information is helpful. I'm reading the 
imagecache.py [1] source code in order to really understand what happens 
in the case of a shared filesystem.


I understand the SHA1 hash mechanism and the backing-file check, but I'm 
not sure how it handles the shared-FS case.


The main function seems to be:
- backing_file = libvirt_utils.get_disk_backing_file(disk_path)

But does libvirt_utils.get_disk_backing_file aggregate information from all 
the compute nodes?! If not, might it delete images still used by other nodes?
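In the meantime, the belt-and-braces option on shared storage seems to be 
disabling the removal outright. A minimal nova.conf sketch (option names as I 
understand them from the Kilo-era docs; double-check for your release):

    [DEFAULT]
    # never delete unused _base images; safest while _base lives on a shared FS
    remove_unused_base_images = False
    # how often the image cache manager task runs, in seconds
    image_cache_manager_interval = 2400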


Hope it's not too redundant,
Kind regards

[1] 
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/imagecache.py


On 28/04/2015 16:18, Joe Topjian wrote:

Hello,

I've got a similar question about the cache manager and the presence
of a shared filesystem for instance images.
I'm currently reading the source code to find out how this is
managed, but first I'd be curious how you achieve this on
production servers.

For example, images not used by compute node A will probably be
cleaned from the shared FS despite the fact that compute node B uses
them; that's the main problem.


This used to be a problem, but AFAIK it should not happen any more. If 
you're noticing it happening, please raise a flag.


How do you handle _base guys ?


We configure Nova to not have instances rely on _base files. We found 
it to be too dangerous of a single point of failure. For example, we 
ran into the scenario you described a few years ago before it was 
fixed. Bugs are one thing, but there are a lot of other ways a _base 
file can become corrupt or removed. Even if those scenarios are rare, 
the results are damaging enough for us to totally forgo reliance on 
_base files.
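For reference, the knob involved is just one line; a minimal sketch (standard 
option, flat files instead of qcow2 layered on _base):

    [DEFAULT]
    # give each instance a full standalone copy instead of a COW layer on _base
    use_cow_images = False

Each instance disk is then self-contained, so a lost or corrupted _base file 
can't take running instances down with it.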


Padraig Brady has an awesome article that details the many ways you 
can configure _base and instance files:


http://www.pixelbeat.org/docs/openstack_libvirt_images/

I'm looping -operators into this thread for input on further ways to 
handle _base. You might also be able to find some other methods by 
searching the -operators mailing list archive.


Thanks,
Joe



--
Leslie-Alexandre DENIS
Tel +33 6 83 88 34 01
Skype ladenis-dc4
BBM PIN 7F78C3BD

SIRET 800 458 663 00013

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Ansible Playbook for OpenStack (juno)

2015-07-04 Thread Leslie-Alexandre DENIS

Hello,

The best sources for that would be:
- https://github.com/stackforge/os-ansible-deployment
- https://github.com/blueboxgroup/ursula

These are actually well written and very modular; you probably don't need all 
of their complexity for a standard deployment, but at least you can start 
from them (see the sketch below).
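For the very first step, something like (assuming git and Ansible are already 
installed; directory names from memory, check each repo's README):

    git clone https://github.com/stackforge/os-ansible-deployment
    cd os-ansible-deployment
    ls playbooks/    # entry-point plays; see the repo docs for a full run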


Regards,

On 05/07/2015 00:40, achi hara wrote:


Hi everyone,

I would like to deploy OpenStack (*Juno* release) using Ansible.

Could you please provide me with playbooks for this purpose?

Your assistance is greatly appreciated.

Sincerely

Hamza



___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Cinder LVM volumes and Icehouse->Kilo migration

2015-08-13 Thread Leslie-Alexandre DENIS
Hi guys,

I'm facing a problem with Cinder version openstack-cinder-2014.1.3-1, which 
doesn't let me migrate a volume from one LVM backend to another.
I'm doing this simple task:

- cinder create lad-test-02 20
- cinder migrate lad-test-02 newcinderhost050

The logs show that the process begins: the iSCSI target and the LV are 
created on the new host.
Unfortunately the old target/LV persists; it isn't deleted...

A Cinder list shows:

| 0f70ccb6-85ba-4726-bda8-4051b68e943e | available | lad-test-02 | 20 | None | false |
| 239bab9d-a70c-470e-ad13-9100946d8ca7 | available | lad-test-02 | 20 | None | false |

tgt-admin -s and lvs, issued on both the old and the new Cinder host, show 
that the two volumes are present and running at this level.

If I try to delete them:

cinder delete 0f70ccb6-85ba-4726-bda8-4051b68e943e
Delete for volume 0f70ccb6-85ba-4726-bda8-4051b68e943e failed: Internal Server 
Error (HTTP 500)
ERROR: Unable to delete any of the specified volumes.

cinder delete 239bab9d-a70c-470e-ad13-9100946d8ca7
Delete for volume 239bab9d-a70c-470e-ad13-9100946d8ca7 failed: Internal Server 
Error (HTTP 500)
ERROR: Unable to delete any of the specified volumes.

The consistency of the two volumes isn't ensured anymore, and both volumes are 
unusable at the OpenStack level.

So, in a word, my point is: how do you manage LVM volume migrations between 
your Cinder hosts, and how do you upgrade OpenStack Cinder nodes that still 
have volumes in use on them?
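For reference, here is what I plan to try next to unblock the stuck rows 
(hedged sketch; corrections welcome if this is a terrible idea, and the 
cinder-volumes VG name is just the default):

    # tell Cinder the volume is in a deletable state, then delete it
    cinder reset-state --state available 0f70ccb6-85ba-4726-bda8-4051b68e943e
    cinder force-delete 0f70ccb6-85ba-4726-bda8-4051b68e943e

    # if the LV still lingers on the old host, remove it by hand
    lvremove cinder-volumes/volume-0f70ccb6-85ba-4726-bda8-4051b68e943e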

Any help would be great,
Thanks
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] OpenStack Tuning Guide

2015-11-04 Thread Leslie-Alexandre DENIS

Hello there,

Very interesting initiative! I'm currently working on the performance 
side too, on our OpenStack deployment (a French astrophysics cloud), but 
so far I'm mostly following what CERN did regarding Nova/CPU configuration.
It's essentially about KSM, NUMA, CPU pinning and EPT, and performance 
is compared through SPEC 06 benchmarks, which are standard for 
HPC/HTC computing.
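For instance, the hedged shape of two of those knobs (the flavor key landed 
in Kilo; the flavor name here is a placeholder):

    # pin guest vCPUs to dedicated host pCPUs for a given flavor
    nova flavor-key m1.numa set hw:cpu_policy=dedicated

    # disable KSM on a compute node when the dedup scanning costs too much CPU
    echo 0 > /sys/kernel/mm/ksm/run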


You can take a look at CERN's very interesting blog here: 
http://openstack-in-production.blogspot.fr/ and, if you want, I have a few 
slides on these subjects from various meetings such as HEPiX.


Following the etherpad!

Regards,

On 05/11/2015 00:11, Donald Talton wrote:

Awesome start. Rabbit fd tweaks are the bane of every install...including some 
of my own...
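The usual fix on systemd boxes is a drop-in raising the fd limit for 
rabbitmq-server, something like (hedged sketch):

    # /etc/systemd/system/rabbitmq-server.service.d/limits.conf
    [Service]
    LimitNOFILE=65536

followed by a systemctl daemon-reload and a restart of the service.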

-Original Message-
From: Kevin Bringard (kevinbri) [mailto:kevin...@cisco.com]
Sent: Wednesday, November 04, 2015 3:56 PM
To: OpenStack Operators
Subject: [Openstack-operators] OpenStack Tuning Guide

Hey all!

Something that jumped out at me in Tokyo was how much it seemed that "basic" tuning stuff 
wasn't common knowledge. This was especially prevalent in the couple of rabbit talks I went to. So, 
in order to pool our resources, I started an Etherpad titled the "OpenStack Tuning Guide" 
(https://etherpad.openstack.org/p/OpenStack_Tuning_Guide). Eventually I expect this should go into 
the documentation project, and much of it may already exist in the operators manual (or elsewhere), 
but I thought that getting us all together to drop in our hints, tweaks, and best practices for 
tuning our systems to run OpenStack well, in real production, would be time well spent.

It's a work in progress at the moment, and we've only just started, but please 
feel free to check it out. Feedback and community involvement are super welcome, 
so please don't hesitate to modify it as you see fit.

Finally, I hate diverging resources, so if something like this already exists 
please speak up so we can focus our efforts on making sure that's up to date 
and well publicized.

Thanks everyone!

-- Kevin






___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Nova instances fails to use more than 1 vCPU with smpboot: do_boot_cpu failed(-1) to wakeup

2015-11-06 Thread Leslie-Alexandre DENIS
Hello openstackers,

I'm in a rush and it would be very valuable to have your input, if possible, 
on a core problem I'm encountering right now. See below.
I just opened an incident here :

Any help would be awesome.

Thanks,
Regards
Description of problem:

Freshly migrated from RDO Icehouse to Kilo, everything is fine except the 
guests' ability to use more than 1 vCPU. The concerned hosts are on CentOS 7 
3.10.0-229.14.1.el7.x86_64 with:

- libvirt-*1.2.8-16.el7_1.5.x86_64
- qemu-kvm-rhev-1.5.3-86.el7.1.x86_64
- openstack-nova 2015.1.1 from RDO

We just finished three days of debugging without results. Downgrading packages 
(i.e. kernel, libvirt or qemu) doesn't change anything. The host (i.e. compute) 
is not in power-saving mode. The OpenStack configuration related to NUMA or CPU 
is the default and uses host-model for guests. The guests are the official 
Ubuntu cloud image, the official CentOS 7 cloud image and an in-house CentOS 7 
cloud image. The problem doesn't appear with Scientific Linux 6; in fact it 
uses an older kernel branch (2.6).

Guest relevant dmesg:

[ 0.00] Linux version 3.10.0-229.7.2.el7.x86_64 (buil...@kbuilder.dev.centos.org) (gcc version 4.8.3 20140911 (Red Hat 4.8.3-9) (GCC) ) #1 SMP Tue Jun 23 22:06:11 UTC 2015
[ 0.00] found SMP MP-table at [mem 0x000f2000-0x000f200f] mapped at [880f2000]
[ 0.00] Using ACPI (MADT) for SMP configuration information
[ 0.00] smpboot: Allowing 4 CPUs, 0 hotplug CPUs
[ 0.007358] Freeing SMP alternatives: 24k freed
[ 0.015315] smpboot: CPU0: Intel Xeon E312xx (Sandy Bridge) (fam: 06, model: 2a, stepping: 01)
[ 0.023974] smpboot: Booting Node 0, Processors #1
[ 10.034345] smpboot: do_boot_cpu failed(-1) to wakeup CPU#1
[ 20.046181] smpboot: do_boot_cpu failed(-1) to wakeup CPU#2
[ 30.058121] smpboot: do_boot_cpu failed(-1) to wakeup CPU#3
[ 30.059031] smpboot: Total of 1 processors activated (4799.99 BogoMIPS)
[ 40.383877] smpboot: Booting Node 0 Processor 1 APIC 0x1
[ 50.394081] smpboot: do_boot_cpu failed(-1) to wakeup CPU#1
[ 50.480121] smpboot: Booting Node 0 Processor 3 APIC 0x3
[ 60.491125] smpboot: do_boot_cpu failed(-1) to wakeup CPU#3
[ 60.504397] smpboot: Booting Node 0 Processor 2 APIC 0x2
[ 70.515035] smpboot: do_boot_cpu failed(-1) to wakeup CPU#2

Version-Release number of selected component (if applicable):

- Nova host CPU: Model 63, Model name: Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz
- Guest exposed CPU: Model 42, Model name: Intel Xeon E312xx (Sandy Bridge)
- CentOS 7 3.10.0-229.14.1.el7.x86_64
- libvirt-*1.2.8-16.el7_1.5.x86_64
- qemu-kvm-rhev-1.5.3-86.el7.1.x86_64
- openstack-nova 2015.1.1 from RDO

How reproducible: Every time.

Steps to Reproduce:
1. Boot a guest with a flavor containing more than 1 vCPU

Additional info: QEMU CPU flags for running guests:

-cpu SandyBridge,+invpcid,+erms,+bmi2,+smep,+avx2,+bmi1,+fsgsbase,+abm,+pdpe1gb,+rdrand,+f16c,+osxsave,+movbe,+dca,+pcid,+pdcm,+xtpr,+fma,+tm2,+est,+smx,+vmx,+ds_cpl,+monitor,+dtes64,+pbe,+tm,+ht,+ss,+acpi,+ds,+vme
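One diagnostic we plan to try next: pinning the guest CPU model instead of 
host-model, to see whether the exposed SandyBridge model is at fault. A 
hedged nova.conf sketch (standard [libvirt] options; the model value is just 
the most conservative pick):

    [libvirt]
    cpu_mode = custom
    # any model this qemu build knows; kvm64 is the most conservative choice
    cpu_model = kvm64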
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Kilo upgrade challenges

2015-11-11 Thread Leslie-Alexandre DENIS

On 11/11/2015 05:46, Xav Paice wrote:

Hi,

Late to the party, I'm only just doing the Kilo upgrade now (with a 
couple of projects going direct to Liberty).  I seem to have hit a bit 
of a snag, and I've now spent a bit too long banging my head against 
this, was wondering if anyone else has advice/experiences to share.


If it's a "you Muppet, you did X wrong" thing, I'd love to hear about 
it - I'm 99.9% sure I've stuffed up a config somewhere.


In short, after upgrading, say, Heat, to Kilo, and running the db 
migration, restarting etc, the CLI is returning 'Authentication 
required'.  My user is admin, and nothing has changed that I'm aware 
of.  I can't see anything particularly new in the logs for keystone, 
nor in heat, except that I now see "WARNING 
keystonemiddleware.auth_token [-] Authorization failed for token".  
I'm not sure if that's a problem or not though.


Some details etc are in http://paste.openstack.org/show/478501/ -> 
from a dev environment so not even sanitized.


Anyone been there?

Thanks
Xav




Hello Xav,

I faced similar problems these last few weeks, during an Icehouse to Kilo 
upgrade, with Cinder in my case, and I found out that it was due to the 
client version.
You can find the correct versions in 
https://github.com/openstack/heat/blob/stable/kilo/requirements.txt

At the very least you can double-check it and possibly solve this, e.g.:
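    # compare what's installed against what stable/kilo expects
    pip freeze | grep -i keystoneclient
    grep -i keystoneclient requirements.txt

(keystoneclient is just an example here; check whichever client the failing 
service actually pulls in.)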

My 2 cents,
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] CentOS 7 KVM and QEMU 2.+

2015-11-12 Thread Leslie-Alexandre DENIS
Hello guys,

I'm struggling to find an up-to-date qemu(-kvm) version for CentOS 7 in the 
official repositories
plus EPEL.

Currently the only package named qemu-kvm in these repositories is 
*qemu-kvm-1.5.3-86.el7_1.8.x86_64*, which is a bit outdated.

As I understand it, QEMU merged the qemu-kvm fork back into the base code as 
of 1.3, and the kernel ships with the KVM module. Theoretically we can just 
install qemu 2.x and load KVM in order to use nova-compute with KVM 
acceleration, right?

The problem is that the openstack-nova{-compute} packages have a dependency 
on qemu-kvm. For example, Fedora ships qemu-kvm as a subpackage of qemu, and 
it appears to be the same code in fact, not the forked project [1].



In a word, guys: how do you manage to get QEMU 2.x with a recent libvirt on 
your CentOS compute nodes?
Is anybody using the qemu packages from oVirt? [2]


Thanks,
See you


---

[1] https://apps.fedoraproject.org/packages/qemu-kvm
[2] http://resources.ovirt.org/pub/ovirt-3.5/rpm/el7Server/x86_64/

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] CentOS 7 KVM and QEMU 2.+

2015-11-12 Thread Leslie-Alexandre DENIS

On 12/11/2015 18:26, Erik McCormick wrote:

I've been building these and running them on CentOS for a while,
mainly to get RBD support. They work fine.

http://ftp.redhat.com/pub/redhat/linux/enterprise/7Server/en/RHEV/SRPMS/


On Thu, Nov 12, 2015 at 12:18 PM, Arne Wiebalck  wrote:

Hi,

What about the CentOS Virt SIG’s repo at

http://mirror.centos.org/centos-7/7/virt/x86_64/kvm-common/

(and the testing repos at:
http://buildlogs.centos.org/centos/7/virt/x86_64/kvm-common/ )?

These contain newer versions of the qemu-* packages.

Cheers,
  Arne

—
Arne Wiebalck
CERN IT




On 12 Nov 2015, at 17:54, Leslie-Alexandre DENIS  wrote:

[original message quoted in full; trimmed here, see the start of this thread above]

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Thanks everybody for your great input, I'll consider the three options for 
our platform.
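For the archive, the Virt SIG route should boil down to something like this 
(untested sketch; package names as published in the repos above):

    # enable the CentOS Virt SIG repository, then install the rebased QEMU
    yum install -y centos-release-qemu-ev
    yum install -y qemu-kvm-ev
    systemctl restart libvirtd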


Thanks,
See you

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [cinder] About how to make the vm use volume with libiscsi

2016-02-22 Thread Leslie-Alexandre DENIS

Hello Xiao,

I tried this exact same thing and it doesn't work for me either, on RDO 
Kilo 2015.1.2 with a recent QEMU 2.3 that supports iSCSI.

As it's defined here 
https://github.com/openstack/nova/blob/stable/kilo/nova/virt/libvirt/driver.py#L273 
it should be enough to change the driver in the volume_drivers list...

I tried LibvirtNetVolumeDriver both inside and outside the [libvirt] section 
of nova.conf. There wasn't anything in the debug logs either...
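For comparison, when the attach really goes through libiscsi I'd expect the 
guest disk XML to be a network disk rather than a block device, roughly like 
this (hand-written sketch; host and IQN values invented):

    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source protocol='iscsi' name='iqn.2010-10.org.openstack:volume-XXXX/1'>
        <host name='10.0.0.10' port='3260'/>
      </source>
      <target dev='vdb' bus='virtio'/>
    </disk>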



Regards,




On 16/02/2016 13:54, Xiao Ma (xima2) wrote:

Hi, All

I want to make qemu communicate with the iSCSI target using libiscsi
directly, and I
followed https://review.openstack.org/#/c/135854/ to add
'volume_drivers =
iscsi=nova.virt.libvirt.volume.LibvirtNetVolumeDriver' in nova.conf,
and then restarted the nova and cinder services, but the volume
configuration of the vm is still as below:

[disk XML stripped by the list archive; only the volume serial
076bb429-67fd-4c0c-9ddf-0dc7621a975a survives. In context, the volume is
still attached as a plain host block device, not through qemu's built-in
iSCSI initiator.]


I use centos7 and Liberty version of OpenStack.
Could anybody tell me how can I achieve it?


Thanks.









___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Docker Engine inside KVM and Kernel sk_offloading crash

2016-05-20 Thread Leslie-Alexandre DENIS
Hello,

A pretty hard problem for me today: I need to build a Docker test platform 
inside OpenStack, CentOS 7.2 all around, and apparently the Ethernet 
offloading features don't like virtio_net at all!
Not even one HTTP packet goes in/out through the Docker bridges and the 
host's NIC, and the kernel stack trace points to WARNING: at 
net/core/dev.c:2263 skb_warn_bad_offload+0xcd/0xda().
I detect some bad TCP checksums in the VM for HTTP traffic, but that must be 
related to offloading, which is normally disabled...
By contrast, a second test with FTP traffic seems to work.

I disabled all the interface offloading features via ethtool -K, more 
precisely rx off tx off tso off lro off gro off gso off. It does nothing for 
the Docker problem.

The only way to run a container is to pass the --net host argument, so as 
not to use the bridge networking, and that works.

So my question is: is anybody using Docker Engine nested inside KVM? Any 
hints?

*facts*

* VM net args >> -netdev tap,fd=48,id=hostnet0,vhost=on,vhostfd=52 -device 
virtio-net-pci,netdev=hostnet0,id=net0,mac=fa:16:3e:c4:2a:7e,bus=pci.0,addr=0x3
* VM qemu-kvm-ev-2.3.0-31.el7_2.4.1
* VM CentOS 7.2.1511 with 3.10.0-327.18.2.el7.x86_64 Kernel
* Docker version 1.11.1, build 5604cbe
* Docker bridge options (defaults):
 "com.docker.network.bridge.default_bridge": "true",
 "com.docker.network.bridge.enable_icc": "true",
 "com.docker.network.bridge.enable_ip_masquerade": "true",
 "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
 "com.docker.network.bridge.name": "docker0",
 "com.docker.network.driver.mtu": "1500"
* Kernel TCP/IP params ok
* Offloading turned off via ethtool -K ... rx off tx off tso off lro off gro 
off gso off
* Kernel stack trace >

May 20 10:54:08 ccosvms0094 kernel: Call Trace:
May 20 10:54:08 ccosvms0094 kernel:  [] dump_stack+0x19/0x1b
May 20 10:54:08 ccosvms0094 kernel: [] warn_slowpath_common+0x70/0xb0
May 20 10:54:08 ccosvms0094 kernel: [] warn_slowpath_fmt+0x5c/0x80
May 20 10:54:08 ccosvms0094 kernel: [] ? ___ratelimit+0x93/0x100
May 20 10:54:08 ccosvms0094 kernel: [] skb_warn_bad_offload+0xcd/0xda
May 20 10:54:08 ccosvms0094 kernel: [] __skb_gso_segment+0x79/0xb0
May 20 10:54:08 ccosvms0094 kernel: [] validate_xmit_skb.part.86+0x135/0x2f0
May 20 10:54:08 ccosvms0094 kernel: [] dev_queue_xmit+0x4dd/0x570
May 20 10:54:08 ccosvms0094 kernel: [] ? ipv4_confirm+0x86/0x100 
[nf_conntrack_ipv4]
May 20 10:54:08 ccosvms0094 kernel: [] ip_finish_output+0x53d/0x7d0
May 20 10:54:08 ccosvms0094 kernel: [] ip_output+0x6f/0xe0
May 20 10:54:08 ccosvms0094 kernel: [] ? ip_fragment+0x8b0/0x8b0
May 20 10:54:08 ccosvms0094 kernel: [] ip_forward_finish+0x66/0x80
May 20 10:54:08 ccosvms0094 kernel: [] ip_forward+0x377/0x490
May 20 10:54:08 ccosvms0094 kernel: [] ? ip_frag_mem+0x40/0x40
May 20 10:54:08 ccosvms0094 kernel: [] ip_rcv_finish+0x7d/0x350
May 20 10:54:08 ccosvms0094 kernel: [] ip_rcv+0x2b6/0x410
May 20 10:54:08 ccosvms0094 kernel: [] ? inet_del_offload+0x40/0x40
May 20 10:54:08 ccosvms0094 kernel: [] __netif_receive_skb_core+0x582/0x7d0
May 20 10:54:08 ccosvms0094 kernel: [] __netif_receive_skb+0x18/0x60
May 20 10:54:08 ccosvms0094 kernel: [] netif_receive_skb+0x40/0xc0
May 20 10:54:08 ccosvms0094 kernel: [] virtnet_poll+0x3d8/0x700 [virtio_net]
May 20 10:54:08 ccosvms0094 kernel: [] net_rx_action+0x152/0x240
May 20 10:54:08 ccosvms0094 kernel: [] __do_softirq+0xef/0x280
May 20 10:54:08 ccosvms0094 kernel: [] call_softirq+0x1c/0x30
May 20 10:54:08 ccosvms0094 kernel: [] do_softirq+0x65/0xa0
May 20 10:54:08 ccosvms0094 kernel: [] irq_exit+0x115/0x120
May 20 10:54:08 ccosvms0094 kernel: [] do_IRQ+0x58/0xf0
May 20 10:54:08 ccosvms0094 kernel: [] common_interrupt+0x6d/0x6d
May 20 10:54:08 ccosvms0094 kernel:  [] ? native_safe_halt+0x6/0x10
May 20 10:54:08 ccosvms0094 kernel: [] default_idle+0x1f/0xc0
May 20 10:54:08 ccosvms0094 kernel: [] arch_cpu_idle+0x26/0x30
May 20 10:54:08 ccosvms0094 kernel: [] cpu_startup_entry+0x245/0x290
May 20 10:54:08 ccosvms0094 kernel: [] start_secondary+0x1ba/0x230
May 20 10:54:08 ccosvms0094 kernel: ---[ end trace b2c6008796d4cce2 ]---
Thanks a lot,
Leslie
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Docker Engine inside KVM and Kernel sk_offloading crash

2016-05-23 Thread Leslie-Alexandre DENIS
Hello everybody,

after a lot of investigation, I finally got rid of that kernel 
skb_warn_bad_offload crash. To do so you want to completely disable GSO and 
even CSUM in the virtio_net module, by adding [1] to your preferred 
modprobe.d file (see the sketch below).

You probably want to know that ethtool -K ethernet_int tso off gso off whatever 
off isn't sufficient: it doesn't override the virtio_net module defaults, 
which are GSO and CSUM on.

Hope it helps someone, somewhere :)

See you,
Regards

[1] options virtio_net gso=0 csum=0
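Concretely, that means something like this in the guest (hedged sketch; the 
file name is arbitrary, and the module has to be reloaded, or the guest 
rebooted, for it to apply):

    # /etc/modprobe.d/virtio_net.conf
    # turn off generic segmentation offload and checksum offload at module load
    options virtio_net gso=0 csum=0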
May 20 2016 11:06 AM, "Leslie-Alexandre DENIS"  wrote:

[original message quoted in full; trimmed here, see the previous message above]
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Cleanup of /var/log/libvirt/qemu/

2016-05-25 Thread Leslie-Alexandre DENIS
On 21/01/2016 at 18:02, Arne Wiebalck wrote:
> Dear all, 
> 
> On compute nodes with high instance turn-over we’re accumulating quite
> some log files in /var/log/libvirt/qemu/.
> https://bugs.launchpad.net/charms/+source/nova-compute/+bug/1460197 is
> mentioning this issue as well.
> 
> While removing log files should probably be done by libvirt/qemu on
> instance deletion, having a stand-alone script
> interfacing with libvirt (which is then invoked by cron) or patching the
> nova libvirt driver to delete log files as part
> of the instance clean-up is what we’re currently thinking of as options.
> 
> Before going any further, though, I wanted to quickly reach out if
> someone already has a good way of dealing with
> these.
> 
> Thanks!
>  Arne
> 
> --
> Arne Wiebalck
> CERN IT

Hello there,

I'm facing the same problem on our deployment; having quite a lot of
ancient QEMU logs clutters the reading, or at least makes things harder.

I would love to hear how you deal with it, and I'd suggest that
eventually the Nova delete should trigger the log deletion. In the
meantime I'm thinking of a cron-driven script along the lines sketched
below.

Thanks,
Regards
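A minimal, untested sketch of the cron idea (it assumes the default layout 
where each log file is named after its libvirt domain):

    #!/bin/sh
    # drop qemu logs whose libvirt domain no longer exists
    for log in /var/log/libvirt/qemu/*.log; do
        dom=$(basename "$log" .log)
        virsh dominfo "$dom" >/dev/null 2>&1 || rm -f "$log"
    done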




___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Tuning segment offloading libvirt capability

2016-06-15 Thread Leslie-Alexandre DENIS
Hello,

Following some network problems with the Intel 10Gbps cards (10G 2P X520) and 
the ixgbe driver, I
needed to disable some segment offloading features (LRO) on these cards. In 
the process I reviewed
some pieces of code, and it appears that libvirt is able to do the same and 
lets the options be
configured in the domain format.

The feature description is self-explanatory:

    Add options for tuning segment offloading:

      <driver>
        <host csum='off' gso='off' tso4='off' tso6='off' ecn='off' ufo='off'/>
        <guest csum='off' tso4='off' tso6='off' ecn='off' ufo='off'/>
      </driver>

    which control the respective host_ and guest_ properties
    of the virtio-net device.
The related libvirt commit is 
https://www.redhat.com/archives/libvir-list/2014-September/msg01159.html

Is it planned to add these options to the flavor or image properties, like 
for example the NUMA
topology?
It would give operators some extra control at minimal cost.

Thanks,
Regards

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] How to configure keystone to use SSL

2016-09-22 Thread Leslie-Alexandre DENIS

Hints to start with:

* https://mozilla.github.io/server-side-tls/ssl-config-generator/
* https://www.ssllabs.com/ssltest/
* https://raymii.org/s/tutorials/Strong_SSL_Security_On_Apache2.html

You definitely need to set it up at the WSGI/Apache level since, yes, the 
eventlet server is deprecated. A bare-bones sketch of the vhost side is 
below. Enjoy your TLS setup :)
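Roughly like this (hedged sketch: the certificate paths match the 
keystone-manage ssl_setup output quoted below, and the WSGIScriptAlias 
target varies by distro):

    <VirtualHost *:5000>
      SSLEngine on
      SSLCertificateFile    /etc/keystone/ssl/certs/keystone.pem
      SSLCertificateKeyFile /etc/keystone/ssl/private/keystonekey.pem
      WSGIScriptAlias / /var/www/cgi-bin/keystone/main
    </VirtualHost>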


Bye.

On 22/09/2016 15:16, Mohammed Naser wrote:

I'm fairly sure the parameters under [ssl] are only for the
deprecated eventlet server. You'll need to add your SSL configuration
to the Apache VirtualHost in order to serve Keystone over SSL.

Good luck!

On Wed, Sep 21, 2016 at 11:14 PM, zhangjian
 wrote:

Hi, all


I have a Mitaka environment created by Packstack, and I tried to configure
Keystone to use SSL, but failed. Can anyone help me?
# Keystone is a WSGI service now.
# keystone is a wsgi service now.


Configure steps are as following:
===
# keystone-manage ssl_setup --keystone-user keystone --keystone-group
keystone
# chown -R keystone:keystone /etc/keystone/ssl
# keystone endpoint-create --service keystone --region RegionOne \
    --publicurl https://{FQDN}:5000/v2.0 \
    --internalurl https://{FQDN}:5000/v2.0 \
    --adminurl https://{FQDN}:35357/v2.0
# cat /etc/keystone/keystone.conf
  ... ...
  [ssl]
  enable=True
  certfile = /etc/keystone/ssl/certs/keystone.pem
  keyfile = /etc/keystone/ssl/private/keystonekey.pem
  ca_certs = /etc/keystone/ssl/certs/ca.pem
  ca_key = /etc/keystone/ssl/private/cakey.pem

# cat keystonerc_admin
... ...
export OS_AUTH_URL=https://FQDN:5000/v2.0


# keystone endpoint-delete Old_Endpoint_For_Keystone
Unable to delete endpoint.


# systemctl restart httpd
# source keystonerc_admin

# openstack project list
Discovering versions from the identity service failed when creating the
password plugin. Attempting to determine version from URL.
SSL exception connecting to https://FQDN:5000/v2.0/tokens: [SSL:
UNKNOWN_PROTOCOL] unknown protocol (_ssl.c:765)
===

Regards,
Kenn

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators



___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators



___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators