Re: [Openstack] Some questions about "Cinder Multi-Attach" in Openstack Queens

2018-03-06 Thread Arne Wiebalck
Multi-attach does not allow a block device with a local (non-clustered) file system
to be accessed concurrently
from multiple nodes. It is intended for HA scenarios where a second
server can take
over a block device from another server. So, if you unmount on your first
server, you can mount
on the second and you will see your file.
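(A minimal sketch of such a hand-over, assuming the volume shows up as /dev/vdb and was mounted on /mnt on the first instance — device and mount point are placeholders:)

```
# on test01: stop writing and release the file system
umount /mnt

# on test02: mount the very same (already attached) block device
mount /dev/vdb /mnt
ls /mnt    # the files written by test01 are visible now
```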

Arne

On 06 Mar 2018, at 09:53, 谭 明宵 <tanmingx...@outlook.com> wrote:

I installed OpenStack Queens using devstack. I want to test the "Cinder
Multi-Attach" function.

1. Create a multiattach volume type and a volume of that type
```
# cinder type-create multiattach
# cinder type-key multiattach set multiattach="<is> True"
# cinder create 10 --name multiattach-volume --volume-type 
```
2. Attach the volume to two instances
```
# nova volume-attach test01 
# nova volume-attach test02 
```
<7B194ED4-5D18-4FFA-9FF3-E54DB425E7E4.png>
3. Mount the volume and create some files, but the files do not sync between the two
instances. It seems as if they are two independent volumes.
<4DFCCC80-5132-4383-B986-726664E45EAF.png>

Then I create a file on test02, but I cannot find it on test01; the reverse is the
same.



I think I have done something wrong; I expected this to behave like shared storage.
What should the correct behaviour look like? Thanks.


--
Arne Wiebalck
CERN IT



Re: [Openstack] Pike: cinder-volume causes high CPU load (Ceph backend)

2017-11-17 Thread Arne Wiebalck
Hi Matthias,

This may be https://bugs.launchpad.net/cinder/+bug/1704106.

In our deployment it was necessary to disable the gathering of
the provisioning stats, so we simply commented out the call

#self._get_usage_info()

in _update_volume_stats() in cinder/volume/drivers/rbd.py.
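(Roughly, on the cinder-volume node — the path and service name depend on the distribution, so treat this as a sketch:)

```
# locate the call in the installed RBD driver
grep -n "_get_usage_info" /usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py

# after commenting out the call in _update_volume_stats(), restart the volume service
# (e.g. openstack-cinder-volume on RDO, cinder-volume on Ubuntu)
systemctl restart openstack-cinder-volume

# check that the backend reports up again
cinder service-list
```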


HTH,
 Arne


> On 17 Nov 2017, at 13:55, Matthias Leopold 
> <matthias.leop...@meduniwien.ac.at> wrote:
> 
> Hi,
> 
> we are running a Cinder Pike instance with Ceph (luminous) backend for use 
> with oVirt (not openstack). We are experiencing the "cinder-volume causes 
> high CPU load" issue. In Cinder 11.0.0 we could successfully apply the fix 
> mentioned in 
> https://ask.openstack.org/en/question/110709/ocata-cinder-volume-causes-high-cpu-load-ceph-backend/.
>  In Cinder 11.0.1 
> /usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py changed, but 
> the high CPU load problem reappeared. I tried changing 
> "report_dynamic_total_capacity" (mentioned in the changelog), but it didn't 
> make a difference. Has anyone seen/fixed this?
> 
> thx
> matthias
> 
> 

--
Arne Wiebalck
CERN IT




Re: [Openstack-operators] Guest crash and KVM unhandled rdmsr

2017-10-18 Thread Arne Wiebalck
Blair,

We’ve seen these errors in our deployment as well, on CentOS 7.3 with 3.10
kernels, when looking into instance issues. So far we’ve always discarded them
as not relevant to the problems observed, so I’d be very interested if it turns
out that they should not be ignored after all.

Cheers,
 Arne


On 17 Oct 2017, at 18:52, George Mihaiescu <lmihaie...@gmail.com> wrote:

Hi Blair,

We had a few cases of compute nodes hanging with the last log in syslog being 
related to "rdmsr", and requiring hard reboots:
 kvm [29216]: vcpu0 unhandled rdmsr: 0x345

The workloads are probably similar to yours (SGE workers doing genomics) with 
CPU mode host-passthrough, on top of Ubuntu 16.04 and kernel 4.4.0-96-generic.

I'm not sure the "rdmsr" logs are relevant though, because we see them on other 
 compute nodes that have no issues.

Did you find anything that might indicate what the root cause is?

Cheers,
George


On Thu, Oct 12, 2017 at 5:26 PM, Blair Bethwaite <blair.bethwa...@gmail.com> wrote:
Hi all,

Has anyone seen guest crashes/freezes associated with KVM unhandled rdmsr 
messages in dmesg on the hypervisor?

We have seen these messages before but never with a strong correlation to guest 
problems. However over the past couple of weeks this is happening almost daily 
with consistent correlation for a set of hosts dedicated to a particular HPC 
workload. So far as I know the workload has not changed, but we have just 
recently moved the hypervisors to Ubuntu Xenial (though they were already on 
the Xenial kernel previously) and done minor guest (CentOS7) updates. CPU mode 
is host-passthrough. Currently trying to figure out if the CPU flags in the 
guest have changed since the host upgrade...
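(For what it's worth, a quick way to compare what the guest sees with what the host offers — the instance name below is a placeholder:)

```
# on the hypervisor: flags offered by the host CPU
grep -m1 '^flags' /proc/cpuinfo | tr ' ' '\n' | sort > host-flags.txt

# inside the guest: flags actually visible to the workload
grep -m1 '^flags' /proc/cpuinfo | tr ' ' '\n' | sort > guest-flags.txt

# copy one file across and diff; with host-passthrough they should be near-identical
diff host-flags.txt guest-flags.txt

# the CPU definition libvirt hands to the guest
virsh dumpxml instance-000001a2 | sed -n '/<cpu/,/<\/cpu>/p'
```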

Cheers,


--
Arne Wiebalck
CERN IT



Re: [Openstack-operators] Cinder quota per volume types in the dashboard

2017-10-12 Thread Arne Wiebalck
As requested in the feedback, a blueprint has been prepared and filed here:

https://blueprints.launchpad.net/horizon/+spec/quotas-per-cinder-volume-type

We’ll look into uploading our patch.

Cheers,
 Arne



> On 12 Oct 2017, at 13:27, Saverio Proto <ziopr...@gmail.com> wrote:
> 
> So we had a bad feedback on the bug:
> https://bugs.launchpad.net/horizon/+bug/1717342
> 
> Do we have anything pushed to Gerrit that we can at least use carrying
> a local patch ?
> 
> thanks
> 
> Saverio
> 
> 
> 2017-09-26 20:21 GMT+02:00 Massimo Sgaravatto <massimo.sgarava...@gmail.com>:
>> ok, I will
>> 
>> 2017-09-26 14:43 GMT+02:00 Arne Wiebalck <arne.wieba...@cern.ch>:
>>> 
>>> Massimo,
>>> 
>>> Following Rob’s comment on
>>> https://bugs.launchpad.net/horizon/+bug/1717342, would you
>>> be willing to write up a blueprint? Mateusz would then prepare our code
>>> and submit it to
>>> gerrit as a partial implementation (as we only have the user facing part,
>>> not the admin panel).
>>> 
>>> Cheers,
>>> Arne
>>> 
>>> 
>>> On 25 Sep 2017, at 10:46, Arne Wiebalck <arne.wieba...@cern.ch> wrote:
>>> 
>>> Ah, nice, wasn’t aware. Mateusz is one of the Horizon experts here at CERN
>>> I was referring to :)
>>> 
>>> On 25 Sep 2017, at 10:41, Massimo Sgaravatto
>>> <massimo.sgarava...@gmail.com> wrote:
>>> 
>>> Just found that there is already this one:
>>> 
>>> https://bugs.launchpad.net/horizon/+bug/1717342
>>> 
>>> 2017-09-25 10:28 GMT+02:00 Saverio Proto <ziopr...@gmail.com>:
>>>> 
>>>> Yes I am interested. Are you going to push them to gerrit ?
>>>> Should we open a bug to track this change into Horizon ?
>>>> 
>>>> massimo do you want to open the bug on Launchpad ? So if Arne pushed
>>>> the patches on gerrit we can link them to the bug. I pointed
>>>> robcresswell to this thread, he is reading us.
>>>> 
>>>> thanks !
>>>> 
>>>> Saverio
>>>> 
>>>> 2017-09-25 10:13 GMT+02:00 Arne Wiebalck <arne.wieba...@cern.ch>:
>>>>> Massimo, Saverio,
>>>>> 
>>>>> We faced the same issue and have created patches for Horizon to display
>>>>> - the per volume quota in the volume request panel, and also
>>>>> - additional information about the volume type (like IOPS and
>>>>> throughput limits, intended usage etc.)
>>>>> 
>>>>> The patches will need some polishing before being sent upstream (I’ll
>>>>> need to cross-check with our Horizon experts), but we have used them in
>>>>> prod for quite a while and are happy to already share the patch files if
>>>>> you’re interested.
>>>>> 
>>>>> Cheers,
>>>>> Arne
>>>>> 
>>>>> 
>>>>> 
>>>>>> On 25 Sep 2017, at 09:58, Saverio Proto <ziopr...@gmail.com> wrote:
>>>>>> 
>>>>>> I am pinging on IRC robcresswell from the Horizon project. He is still
>>>>>> PTL I think.
>>>>>> If you are on IRC please join #openstack-horizon.
>>>>>> 
>>>>>> We should ask the Horizon PTL how to get this feature request into
>>>>>> implementation.
>>>>>> 
>>>>>> With the command line interface, can you already see the two different
>>>>>> quotas for the two different volume types ? Can you paste an example
>>>>>> output from the CLI ?
>>>>>> 
>>>>>> thank you
>>>>>> 
>>>>>> Saverio
>>>>>> 
>>>>>> 
>>>>>> 2017-09-25 9:55 GMT+02:00 Massimo Sgaravatto
>>>>>> <massimo.sgarava...@gmail.com>:
>>>>>>> We are currently running Mitaka (preparing to update to Ocata). I see
>>>>>>> the
>>>>>>> same behavior on an Ocata based testbed
>>>>>>> 
>>>>>>> Thanks, Massimo
>>>>>>> 
>>>>>>> 2017-09-25 9:50 GMT+02:00 Saverio Proto <ziopr...@gmail.com>:
>>>>>>>> 
>>>>>>>> Hello Massimo,
>>>>>>>> 
>>>>>>>> what is your version of OpenStack?

Re: [Openstack-operators] Cinder quota per volume types in the dashboard

2017-09-26 Thread Arne Wiebalck
Massimo,

Following Rob’s comment on https://bugs.launchpad.net/horizon/+bug/1717342, would you
be willing to write up a blueprint? Mateusz would then prepare our code and 
submit it to
gerrit as a partial implementation (as we only have the user facing part, not 
the admin panel).

Cheers,
 Arne


> On 25 Sep 2017, at 10:46, Arne Wiebalck <arne.wieba...@cern.ch> wrote:
> 
> Ah, nice, wasn’t aware. Mateusz is one of the Horizon experts here at CERN I 
> was referring to :)
> 
>> On 25 Sep 2017, at 10:41, Massimo Sgaravatto <massimo.sgarava...@gmail.com 
>> <mailto:massimo.sgarava...@gmail.com>> wrote:
>> 
>> Just found that there is already this one:
>> 
>> https://bugs.launchpad.net/horizon/+bug/1717342
>> 
>> 2017-09-25 10:28 GMT+02:00 Saverio Proto <ziopr...@gmail.com 
>> <mailto:ziopr...@gmail.com>>:
>> Yes I am interested. Are you going to push them to gerrit ?
>> Should we open a bug to track this change into Horizon ?
>> 
>> massimo do you want to open the bug on Launchpad ? So if Arne pushed
>> the patches on gerrit we can link them to the bug. I pointed
>> robcresswell to this thread, he is reading us.
>> 
>> thanks !
>> 
>> Saverio
>> 
>> 2017-09-25 10:13 GMT+02:00 Arne Wiebalck <arne.wieba...@cern.ch 
>> <mailto:arne.wieba...@cern.ch>>:
>> > Massimo, Saverio,
>> >
>> > We faced the same issue and have created patches for Horizon to display
>> > - the per volume quota in the volume request panel, and also
>> > - additional information about the volume type (like IOPS and throughput 
>> > limits, intended usage etc.)
>> >
>> > The patches will need some polishing before being sent upstream (I’ll need
>> > to cross-check with our Horizon experts), but we have used them in prod for
>> > quite a while and are happy to already share the patch files if you’re
>> > interested.
>> >
>> > Cheers,
>> >  Arne
>> >
>> >
>> >
>> >> On 25 Sep 2017, at 09:58, Saverio Proto <ziopr...@gmail.com 
>> >> <mailto:ziopr...@gmail.com>> wrote:
>> >>
>> >> I am pinging on IRC robcresswell from the Horizon project. He is still
>> >> PTL I think.
>> >> If you are on IRC please join #openstack-horizon.
>> >>
>> >> We should ask the Horizon PTL how to get this feature request into
>> >> implementation.
>> >>
>> >> With the command line interface, can you already see the two different
>> >> quotas for the two different volume types ? Can you paste an example
>> >> output from the CLI ?
>> >>
>> >> thank you
>> >>
>> >> Saverio
>> >>
>> >>
>> >> 2017-09-25 9:55 GMT+02:00 Massimo Sgaravatto 
>> >> <massimo.sgarava...@gmail.com <mailto:massimo.sgarava...@gmail.com>>:
>> >>> We are currently running Mitaka (preparing to update to Ocata). I see the
>> >>> same behavior on an Ocata based testbed
>> >>>
>> >>> Thanks, Massimo
>> >>>
>> >>> 2017-09-25 9:50 GMT+02:00 Saverio Proto <ziopr...@gmail.com 
>> >>> <mailto:ziopr...@gmail.com>>:
>> >>>>
>> >>>> Hello Massimo,
>> >>>>
>> >>>> what is your version of Openstack ??
>> >>>>
>> >>>> thank you
>> >>>>
>> >>>> Saverio
>> >>>>
>> >>>> 2017-09-25 9:13 GMT+02:00 Massimo Sgaravatto
>> >>>> <massimo.sgarava...@gmail.com <mailto:massimo.sgarava...@gmail.com>>:
>> >>>>> Hi
>> >>>>>
>> >>>>>
>> >>>>> In our OpenStack cloud we have two backends for Cinder (exposed using
>> >>>>> two
>> >>>>> volume types), and we set different quotas for these two volume types.
>> >>>>>
>> >>>>> The problem happens when a user, using the dashboard, tries to create a
>> >>>>> volume of a volume type for which the project quota is exceeded:
>> >>>>>
>> >>>>> - the reported error message simply says "unable to create volume",
>> >>>>> without mentioning that the problem is with the quota

Re: [Openstack-operators] Cinder quota per volume types in the dashboard

2017-09-25 Thread Arne Wiebalck
Ah, nice, wasn’t aware. Mateusz is one of the Horizon experts here at CERN I 
was referring to :)

On 25 Sep 2017, at 10:41, Massimo Sgaravatto <massimo.sgarava...@gmail.com> wrote:

Just found that there is already this one:

https://bugs.launchpad.net/horizon/+bug/1717342

2017-09-25 10:28 GMT+02:00 Saverio Proto <ziopr...@gmail.com>:
Yes I am interested. Are you going to push them to gerrit ?
Should we open a bug to track this change into Horizon ?

massimo do you want to open the bug on Launchpad ? So if Arne pushed
the patches on gerrit we can link them to the bug. I pointed
robcresswell to this thread, he is reading us.

thanks !

Saverio

2017-09-25 10:13 GMT+02:00 Arne Wiebalck <arne.wieba...@cern.ch>:
> Massimo, Saverio,
>
> We faced the same issue and have created patches for Horizon to display
> - the per volume quota in the volume request panel, and also
> - additional information about the volume type (like IOPS and throughput 
> limits, intended usage etc.)
>
> The patches will need some polishing before being sent upstream (I’ll need
> to cross-check with our Horizon experts), but we have used them in prod for
> quite a while and are happy to already share the patch files if you’re interested.
>
> Cheers,
>  Arne
>
>
>
>> On 25 Sep 2017, at 09:58, Saverio Proto 
>> <ziopr...@gmail.com<mailto:ziopr...@gmail.com>> wrote:
>>
>> I am pinging on IRC robcresswell from the Horizon project. He is still
>> PTL I think.
>> If you are on IRC please join #openstack-horizon.
>>
>> We should ask the Horizon PTL how to get this feature request into
>> implementation.
>>
>> With the command line interface, can you already see the two different
>> quotas for the two different volume types ? Can you paste an example
>> output from the CLI ?
>>
>> thank you
>>
>> Saverio
>>
>>
>> 2017-09-25 9:55 GMT+02:00 Massimo Sgaravatto 
>> <massimo.sgarava...@gmail.com<mailto:massimo.sgarava...@gmail.com>>:
>>> We are currently running Mitaka (preparing to update to Ocata). I see the
>>> same behavior on an Ocata based testbed
>>>
>>> Thanks, Massimo
>>>
>>> 2017-09-25 9:50 GMT+02:00 Saverio Proto 
>>> <ziopr...@gmail.com<mailto:ziopr...@gmail.com>>:
>>>>
>>>> Hello Massimo,
>>>>
>>>> what is your version of Openstack ??
>>>>
>>>> thank you
>>>>
>>>> Saverio
>>>>
>>>> 2017-09-25 9:13 GMT+02:00 Massimo Sgaravatto
>>>> <massimo.sgarava...@gmail.com<mailto:massimo.sgarava...@gmail.com>>:
>>>>> Hi
>>>>>
>>>>>
>>>>> In our OpenStack cloud we have two backends for Cinder (exposed using
>>>>> two
>>>>> volume types), and we set different quotas for these two volume types.
>>>>>
>> >>>>> The problem happens when a user, using the dashboard, tries to create a
>> >>>>> volume of a volume type for which the project quota is exceeded:
>> >>>>>
>> >>>>> - the reported error message simply says "unable to create volume",
>> >>>>> without mentioning that the problem is with the quota
>> >>>>>
>> >>>>> - (by default) the dashboard only shows the overall Cinder quota (and
>> >>>>> not the quota per volume type)
>> >>>>>
>> >>>>> Do you know if it is possible to expose the Cinder quota per volume
>> >>>>> type on the dashboard?
>>>>>
>>>>>
>>>>> Thanks, Massimo
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>
> --
> Arne Wiebalck
> CERN IT
>


--
Arne Wiebalck
CERN IT



Re: [Openstack-operators] Cinder quota per volume types in the dashboard

2017-09-25 Thread Arne Wiebalck
Massimo, Saverio,

We faced the same issue and have created patches for Horizon to display
- the per volume quota in the volume request panel, and also
- additional information about the volume type (like IOPS and throughput 
limits, intended usage etc.)

The patches will need some polishing before being sent upstream (I’ll need
to cross-check with our Horizon experts), but we have used them in prod for
quite a while and are happy to already share the patch files if you’re interested.
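(For reference, the per-volume-type quotas are already visible from the CLI; a rough example, with the project ID and type name as placeholders:)

```
# limits include per-type keys such as volumes_<type> and gigabytes_<type>
cinder quota-show $PROJECT_ID

# the same keys with in_use/reserved counters
cinder quota-usage $PROJECT_ID

# setting a per-type limit, e.g. for a type called "highiops"
cinder quota-update --volumes 20 --volume-type highiops $PROJECT_ID
```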

Cheers,
 Arne



> On 25 Sep 2017, at 09:58, Saverio Proto <ziopr...@gmail.com> wrote:
> 
> I am pinging on IRC robcresswell from the Horizon project. He is still
> PTL I think.
> If you are on IRC please join #openstack-horizon.
> 
> We should ask the Horizon PTL how to get this feature request into
> implementation.
> 
> With the command line interface, can you already see the two different
> quotas for the two different volume types ? Can you paste an example
> output from the CLI ?
> 
> thank you
> 
> Saverio
> 
> 
> 2017-09-25 9:55 GMT+02:00 Massimo Sgaravatto <massimo.sgarava...@gmail.com>:
>> We are currently running Mitaka (preparing to update to Ocata). I see the
>> same behavior on an Ocata based testbed
>> 
>> Thanks, Massimo
>> 
>> 2017-09-25 9:50 GMT+02:00 Saverio Proto <ziopr...@gmail.com>:
>>> 
>>> Hello Massimo,
>>> 
>>> what is your version of Openstack ??
>>> 
>>> thank you
>>> 
>>> Saverio
>>> 
>>> 2017-09-25 9:13 GMT+02:00 Massimo Sgaravatto
>>> <massimo.sgarava...@gmail.com>:
>>>> Hi
>>>> 
>>>> 
>>>> In our OpenStack cloud we have two backends for Cinder (exposed using
>>>> two
>>>> volume types), and we set different quotas for these two volume types.
>>>> 
>>>> The problem happens when a user, using the dashboard, tries to create a
>>>> volume of a volume type for which the project quota is exceeded:
>>>> 
>>>> - the reported error message simply says "unable to create volume",
>>>> without mentioning that the problem is with the quota
>>>> 
>>>> - (by default) the dashboard only shows the overall Cinder quota (and
>>>> not the quota per volume type)
>>>> 
>>>> Do you know if it is possible to expose the Cinder quota per volume
>>>> type on the dashboard?
>>>> 
>>>> 
>>>> Thanks, Massimo
>>>> 
>>>> 
>>>> 
>>>> 
>>>> 
>>>> 

--
Arne Wiebalck
CERN IT



Re: [Openstack-operators] [nova][cinder] Is there interest in an admin-api to refresh volume connection info?

2017-09-13 Thread Arne Wiebalck

> On 13 Sep 2017, at 16:52, Matt Riedemann <mriede...@gmail.com> wrote:
> 
> On 9/13/2017 3:24 AM, Arne Wiebalck wrote:
>> I’m reviving this thread to check if the suggestion to address potentially 
>> stale connection
>> data by an admin command (or a scheduled task) made it to the planning for 
>> one of the
>> upcoming releases?
> 
> It hasn't, but we're at the PTG this week so I can throw it on the list of 
> topics.


That’d be great, thanks!

--
Arne Wiebalck
CERN IT



Re: [openstack-dev] [Openstack-operators] [nova][cinder] Is there interest in an admin-api to refresh volume connection info?

2017-09-13 Thread Arne Wiebalck

> On 13 Sep 2017, at 16:52, Matt Riedemann <mriede...@gmail.com> wrote:
> 
> On 9/13/2017 3:24 AM, Arne Wiebalck wrote:
>> I’m reviving this thread to check if the suggestion to address potentially 
>> stale connection
>> data by an admin command (or a scheduled task) made it to the planning for 
>> one of the
>> upcoming releases?
> 
> It hasn't, but we're at the PTG this week so I can throw it on the list of 
> topics.


That’d be great, thanks!

--
Arne Wiebalck
CERN IT



Re: [Openstack-operators] [nova][cinder] Is there interest in an admin-api to refresh volume connection info?

2017-09-13 Thread Arne Wiebalck
Matt, all,

I’m reviving this thread to check if the suggestion to address potentially 
stale connection
data by an admin command (or a scheduled task) made it to the planning for one 
of the
upcoming releases?

Thanks!
 Arne


On 16 Jun 2017, at 09:37, Saverio Proto <ziopr...@gmail.com> wrote:

Hello Matt,

It is true that we are refreshing something that rarely changes. But
if you deliver a cloud service for several years, at some point you
will have to change these parameters.

Something that should not change only rarely, though, are the secrets the ceph
users use to talk to the ceph cluster. Good security would suggest
periodic secret rotation, but today this is not really feasible.

I know the problem is also that you cannot change things in libvirt
while the VMs are running. Maybe it is time for a discussion with the libvirt
developers to make our voice louder about required features?

The goal would be to change on the fly the ceph/rbd secret that a VM
uses to access a volume, while the VM is running. I think this is very
important.

thank you

Saverio


2017-06-09 6:15 GMT+02:00 Matt Riedemann <mriede...@gmail.com>:
On 6/8/2017 1:39 PM, melanie witt wrote:

On Thu, 8 Jun 2017 08:58:20 -0500, Matt Riedemann wrote:

Nova stores the output of the Cinder os-initialize_connection info API in
the Nova block_device_mappings table, and uses that later for making volume
connections.

This data can get out of whack or need to be refreshed, like if your ceph
server IP changes, or you need to recycle some secret uuid for your ceph
cluster.

I think the only ways to do this on the nova side today are via volume
detach/re-attach, reboot, migrations, etc - all of which, except live
migration, are disruptive to the running guest.


I believe the only way to work around this currently is by doing a 'nova
shelve' followed by a 'nova unshelve'. That will end up querying the
connection_info from Cinder and update the block device mapping record for
the instance. Maybe detach/re-attach would work too but I can't remember
trying it.
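(For reference, that workaround boils down to something like the following — the instance name is a placeholder, and note that the guest is fully stopped and may come back on another host:)

```
nova shelve myserver
nova show myserver | grep status     # wait for SHELVED(_OFFLOADED)

# unshelve re-queries the connection_info from Cinder and rewrites the BDM record
nova unshelve myserver
```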


Shelve has its own fun set of problems, like the fact that it doesn't terminate
the connection to the volume backend on shelve. Maybe that's not a problem
for Ceph, I don't know. You do end up on another host though potentially,
and it's a full delete and spawn of the guest on that other host. Definitely
disruptive.


I've kicked around the idea of adding some sort of admin API interface
for refreshing the BDM.connection_info on-demand if needed by an operator.
Does anyone see value in this? Are operators doing stuff like this already,
but maybe via direct DB updates?

We could have something in the compute API which calls down to the
compute for an instance and has it refresh the connection_info from Cinder
and updates the BDM table in the nova DB. It could be an admin action API,
or part of the os-server-external-events API, like what we have for the
'network-changed' event sent from Neutron which nova uses to refresh the
network info cache.

Other ideas or feedback here?


We've discussed this a few times before and we were thinking it might be
best to handle this transparently and just do a connection_info refresh +
record update inline with the request flows that will end up reading
connection_info from the block device mapping records. That way, operators
won't have to intervene when connection_info changes.


The thing that sucks about this is if we're going to be refreshing something
that maybe rarely changes for every volume-related operation on the
instance. That seems like a lot of overhead to me (nova/cinder API
interactions, Cinder interactions to the volume backend, nova-compute round
trips to conductor and the DB to update the BDM table, etc).


At least in the case of Ceph, as long as a guest is running, it will
continue to work fine if the monitor IPs or secrets change because it will
continue to use its existing connection to the Ceph cluster. Things go wrong
when an instance action such as resize, stop/start, or reboot is done
because when the instance is taken offline and being brought back up, the
stale connection_info is read from the block_device_mapping table and
injected into the instance, and so it loses contact with the cluster. If we
query Cinder and update the block_device_mapping record at the beginning of
those actions, the instance will get the new connection_info.

-melanie




--

Thanks,

Matt



Re: [openstack-dev] [Openstack-operators] [nova][cinder] Is there interest in an admin-api to refresh volume connection info?

2017-09-13 Thread Arne Wiebalck
Matt, all,

I’m reviving this thread to check if the suggestion to address potentially 
stale connection
data by an admin command (or a scheduled task) made it to the planning for one 
of the
upcoming releases?

Thanks!
 Arne


On 16 Jun 2017, at 09:37, Saverio Proto <ziopr...@gmail.com> wrote:

Hello Matt,

It is true that we are refreshing something that rarely changes. But
if you deliver a cloud service for several years, at some point you
will have to change these parameters.

Something that should not change only rarely, though, are the secrets the ceph
users use to talk to the ceph cluster. Good security would suggest
periodic secret rotation, but today this is not really feasible.

I know the problem is also that you cannot change things in libvirt
while the VMs are running. Maybe it is time for a discussion with the libvirt
developers to make our voice louder about required features?

The goal would be to change on the fly the ceph/rbd secret that a VM
uses to access a volume, while the VM is running. I think this is very
important.

thank you

Saverio


2017-06-09 6:15 GMT+02:00 Matt Riedemann <mriede...@gmail.com>:
On 6/8/2017 1:39 PM, melanie witt wrote:

On Thu, 8 Jun 2017 08:58:20 -0500, Matt Riedemann wrote:

Nova stores the output of the Cinder os-initialize_connection info API in
the Nova block_device_mappings table, and uses that later for making volume
connections.

This data can get out of whack or need to be refreshed, like if your ceph
server IP changes, or you need to recycle some secret uuid for your ceph
cluster.

I think the only ways to do this on the nova side today are via volume
detach/re-attach, reboot, migrations, etc - all of which, except live
migration, are disruptive to the running guest.


I believe the only way to work around this currently is by doing a 'nova
shelve' followed by a 'nova unshelve'. That will end up querying the
connection_info from Cinder and update the block device mapping record for
the instance. Maybe detach/re-attach would work too but I can't remember
trying it.


Shelve has its own fun set of problems, like the fact that it doesn't terminate
the connection to the volume backend on shelve. Maybe that's not a problem
for Ceph, I don't know. You do end up on another host though potentially,
and it's a full delete and spawn of the guest on that other host. Definitely
disruptive.


I've kicked around the idea of adding some sort of admin API interface
for refreshing the BDM.connection_info on-demand if needed by an operator.
Does anyone see value in this? Are operators doing stuff like this already,
but maybe via direct DB updates?

We could have something in the compute API which calls down to the
compute for an instance and has it refresh the connection_info from Cinder
and updates the BDM table in the nova DB. It could be an admin action API,
or part of the os-server-external-events API, like what we have for the
'network-changed' event sent from Neutron which nova uses to refresh the
network info cache.

Other ideas or feedback here?


We've discussed this a few times before and we were thinking it might be
best to handle this transparently and just do a connection_info refresh +
record update inline with the request flows that will end up reading
connection_info from the block device mapping records. That way, operators
won't have to intervene when connection_info changes.


The thing that sucks about this is if we're going to be refreshing something
that maybe rarely changes for every volume-related operation on the
instance. That seems like a lot of overhead to me (nova/cinder API
interactions, Cinder interactions to the volume backend, nova-compute round
trips to conductor and the DB to update the BDM table, etc).


At least in the case of Ceph, as long as a guest is running, it will
continue to work fine if the monitor IPs or secrets change because it will
continue to use its existing connection to the Ceph cluster. Things go wrong
when an instance action such as resize, stop/start, or reboot is done
because when the instance is taken offline and being brought back up, the
stale connection_info is read from the block_device_mapping table and
injected into the instance, and so it loses contact with the cluster. If we
query Cinder and update the block_device_mapping record at the beginning of
those actions, the instance will get the new connection_info.

-melanie




--

Thanks,

Matt



Re: [Openstack-operators] Not able to take snapshot backup

2017-08-17 Thread Arne Wiebalck
Anwar,

“Too many connections” sounds like you hit a limit on the database. You’ll need 
to
check if the connection limit is too low or if there are connections which are 
stuck.
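(A few quick checks on the database host, assuming MySQL/MariaDB:)

```
# configured limit vs. connections currently open
mysql -e "SHOW VARIABLES LIKE 'max_connections';"
mysql -e "SHOW GLOBAL STATUS LIKE 'Threads_connected';"

# who is holding them — look for piles of sleeping cinder sessions
mysql -e "SHOW PROCESSLIST;" | grep -c cinder
```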

HTH,
 Arne


On 17 Aug 2017, at 09:07, Anwar Durrani <durrani.an...@gmail.com> wrote:

Hi All,

I have OpenStack Kilo installed. Presently I am not able to take a snapshot of any
instance; when I checked the logs on the server, they say the following:


cinder OperationalError: (OperationalError) (1040, 'Too many connections') None 
None

Do you have any clue about this?


Thanks

--
Thanks & regards,
Anwar M. Durrani
+91-9923205011
<http://in.linkedin.com/pub/anwar-durrani/20/b55/60b>



--
Arne Wiebalck
CERN IT



Re: [Openstack-operators] Cinder 10.0.4 (latest Ocata) broken for ceph/rbd

2017-08-03 Thread Arne Wiebalck
Mike,

For a sufficiently large number of volumes, the thin provisioning stats 
gathering could break things already
before the referenced patch:

https://bugs.launchpad.net/cinder/+bug/1704106 


It seems, however, that the attempt to gather at least the correct data (used 
instead of allocated) lowers that
threshold even further.

In order to allow our c-vol to start (and as we don’t use over-provisioning), 
we’ve for now commented out the
usage stats gathering.  

Cheers,
 Arne




> On 03 Aug 2017, at 20:47, Mike Lowe  wrote:
> 
> I did the minor point release update from 10.0.2 to 10.0.4 and found my 
> cinder volume services would go out to lunch during startup. They would do 
> their initial heartbeat then get marked as dead never sending another 
> heartbeat.  The process was running and there were constant logs about ceph 
> connections but what was missing was the follow up to "Initializing RPC 
> dependent components of volume driver RBDDriver (1.2.0)”. It never finished 
> the rpc init "Driver post RPC initialization completed successfully.”  
> Digging in a little bit with my limited knowledge of the python librbd it 
> seems that this commit landed in 10.0.4 
> https://github.com/openstack/cinder/commit/e72dead5ce085a6ba66f7aad2ff58061842f43d2
>   Instead of looping over the volume size for every volume it looped over all 
> the volumes calling diff_iterate from offset 0 to the end.   Near as I can 
> tell this actually calls whatever you pass in as iterate_cb for every used 
> extent of the volume. So a handful of empty volumes no problem, but in 
> production by my count I would have to call iterate_cb 12.6M times just to 
> add up the bytes used from each extent.   I’ve filed a bug 
> https://bugs.launchpad.net/cinder/+bug/1708507 and downgrading to 10.0.2 
> seems to be an ok workaround.
> 
> TLDR; if you have ceph don’t upgrade past 10.0.2, for the time being


Re: [Openstack] Cinder - Could not start Cinder Volume service-Ocata

2017-06-21 Thread Arne Wiebalck
nUnknown(version=version_cap)
2017-06-18 05:11:51.358 5230 ERROR cinder.cmd.volume CappedVersionUnknown: 
Unrecoverable Error: Versioned Objects in DB are capped to unknown version 1.21.
2017-06-18 05:11:51.358 5230 ERROR cinder.cmd.volume
2017-06-18 05:11:51.360 5230 ERROR cinder.cmd.volume 
[req-85789cbc-b26c-47c5-be34-035fae86e504 - - - - -] No volume service(s) 
started successfully, terminating.
root@cloud1:/etc/cinder#






From: "SGopinath s.gopinath" <s.gopin...@gov.in<mailto:s.gopin...@gov.in>>
To: "openstack" 
<openstack@lists.openstack.org<mailto:openstack@lists.openstack.org>>
Sent: Thursday, June 15, 2017 9:07:19 AM
Subject: [Openstack] nova - Error in cells

Hi ,


I'm trying to install OpenStack Ocata on
Ubuntu 16.04.2 LTS.

I was able to start the nova services and successfully
get output when executing

openstack hypervisor list ...

No issues...


But when I execute

 su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova

I get the error

 ProgrammingError: (pymysql.err.ProgrammingError) (1146, u"Table 
'nova_api.compute_nodes' doesn't exist")

I could not find compute_nodes table in the database nova_api.
However the compute_nodes table is in nova_api_cell0 database and it
does not contain any rows.


I think this is only a minor issue of the table being expected in the wrong
database.

Could anyone suggest a solution for this?

Thanks,
S.Gopinath




--
Arne Wiebalck
CERN IT



Re: [Openstack-operators] [openstack-dev] [nova][cinder] Is there interest in an admin-api to refresh volume connection info?

2017-06-08 Thread Arne Wiebalck

> On 08 Jun 2017, at 17:52, Matt Riedemann <mriede...@gmail.com> wrote:
> 
> On 6/8/2017 10:17 AM, Arne Wiebalck wrote:
>>> On 08 Jun 2017, at 15:58, Matt Riedemann <mriede...@gmail.com 
>>> <mailto:mriede...@gmail.com>> wrote:
>>> 
>>> Nova stores the output of the Cinder os-initialize_connection info API in 
>>> the Nova block_device_mappings table, and uses that later for making volume 
>>> connections.
>>> 
>>> This data can get out of whack or need to be refreshed, like if your ceph 
>>> server IP changes, or you need to recycle some secret uuid for your ceph 
>>> cluster.
>>> 
>>> I think the only ways to do this on the nova side today are via volume 
>>> detach/re-attach, reboot, migrations, etc - all of which, except live 
>>> migration, are disruptive to the running guest.
>>> 
>>> I've kicked around the idea of adding some sort of admin API interface for 
>>> refreshing the BDM.connection_info on-demand if needed by an operator. Does 
>>> anyone see value in this? Are operators doing stuff like this already, but 
>>> maybe via direct DB updates?
>>> 
>>> We could have something in the compute API which calls down to the compute 
>>> for an instance and has it refresh the connection_info from Cinder and 
>>> updates the BDM table in the nova DB. It could be an admin action API, or 
>>> part of the os-server-external-events API, like what we have for the 
>>> 'network-changed' event sent from Neutron which nova uses to refresh the 
>>> network info cache.
>>> 
>>> Other ideas or feedback here?
>> I have opened https://bugs.launchpad.net/cinder/+bug/1452641 for this issue 
>> some time ago.
>> Back then I was more thinking of using an alias and not deal with IP 
>> addresses directly. From
>> what I understand, this should work with Ceph. In any case, there is still 
>> interest in a fix :-)
>> Cheers,
>>  Arne
>> --
>> Arne Wiebalck
>> CERN IT
> 
> Yeah this was also discussed in the dev mailing list over a year ago:
> 
> http://lists.openstack.org/pipermail/openstack-dev/2016-May/095170.html
> 
> At that time I was opposed to a REST API for a *user* doing this, but I'm 
> more open to an *admin* (by default) doing this. Also, if it were initiated 
> via the volume API then Cinder could call the Nova os-server-external-events 
> API which is admin-only by default and then Nova can do a refresh.
> 
> Later in that thread Melanie Witt also has an idea about doing a refresh in a 
> periodic task on the compute service, like we do for refreshing the instance 
> network info cache with Neutron in a periodic task.

Wouldn’t using a mon alias (and not resolving it to the respective IP 
addresses) be enough? Or is that too backend specific?
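(That is, something along these lines on the client side — the DNS name is made up:)

```
# /etc/ceph/ceph.conf on the hypervisors: refer to the monitors by a stable
# DNS alias rather than individual IPs, so cached connection_info does not go
# stale when monitors are moved or replaced:
#   [global]
#   mon_host = ceph-mons.example.org
host ceph-mons.example.org   # the alias should resolve to the current monitor set
```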

The idea of a periodic task leveraging existing techniques sounds really nice, 
but if the overhead is regarded as too much (in the end, the IP addresses 
shouldn’t change that often), an admin only API to be called when the addresses 
need to be updated sounds good to me as well.

Cheers,
 Arne

—
Arne Wiebalck
CERN IT


Re: [openstack-dev] [nova][cinder] Is there interest in an admin-api to refresh volume connection info?

2017-06-08 Thread Arne Wiebalck

> On 08 Jun 2017, at 17:52, Matt Riedemann <mriede...@gmail.com> wrote:
> 
> On 6/8/2017 10:17 AM, Arne Wiebalck wrote:
>>> On 08 Jun 2017, at 15:58, Matt Riedemann <mriede...@gmail.com 
>>> <mailto:mriede...@gmail.com>> wrote:
>>> 
>>> Nova stores the output of the Cinder os-initialize_connection info API in 
>>> the Nova block_device_mappings table, and uses that later for making volume 
>>> connections.
>>> 
>>> This data can get out of whack or need to be refreshed, like if your ceph 
>>> server IP changes, or you need to recycle some secret uuid for your ceph 
>>> cluster.
>>> 
>>> I think the only ways to do this on the nova side today are via volume 
>>> detach/re-attach, reboot, migrations, etc - all of which, except live 
>>> migration, are disruptive to the running guest.
>>> 
>>> I've kicked around the idea of adding some sort of admin API interface for 
>>> refreshing the BDM.connection_info on-demand if needed by an operator. Does 
>>> anyone see value in this? Are operators doing stuff like this already, but 
>>> maybe via direct DB updates?
>>> 
>>> We could have something in the compute API which calls down to the compute 
>>> for an instance and has it refresh the connection_info from Cinder and 
>>> updates the BDM table in the nova DB. It could be an admin action API, or 
>>> part of the os-server-external-events API, like what we have for the 
>>> 'network-changed' event sent from Neutron which nova uses to refresh the 
>>> network info cache.
>>> 
>>> Other ideas or feedback here?
>> I have opened https://bugs.launchpad.net/cinder/+bug/1452641 for this issue 
>> some time ago.
>> Back then I was more thinking of using an alias and not deal with IP 
>> addresses directly. From
>> what I understand, this should work with Ceph. In any case, there is still 
>> interest in a fix :-)
>> Cheers,
>>  Arne
>> --
>> Arne Wiebalck
>> CERN IT
> 
> Yeah this was also discussed in the dev mailing list over a year ago:
> 
> http://lists.openstack.org/pipermail/openstack-dev/2016-May/095170.html
> 
> At that time I was opposed to a REST API for a *user* doing this, but I'm 
> more open to an *admin* (by default) doing this. Also, if it were initiated 
> via the volume API then Cinder could call the Nova os-server-external-events 
> API which is admin-only by default and then Nova can do a refresh.
> 
> Later in that thread Melanie Witt also has an idea about doing a refresh in a 
> periodic task on the compute service, like we do for refreshing the instance 
> network info cache with Neutron in a periodic task.

Wouldn’t using a mon alias (and not resolving it to the respective IP 
addresses) be enough? Or is that too backend specific?

The idea of a periodic task leveraging existing techniques sounds really nice, 
but if the overhead is regarded as too much (in the end, the IP addresses 
shouldn’t change that often), an admin only API to be called when the addresses 
need to be updated sounds good to me as well.

Cheers,
 Arne

—
Arne Wiebalck
CERN IT


Re: [Openstack-operators] [openstack-dev] [nova][cinder] Is there interest in an admin-api to refresh volume connection info?

2017-06-08 Thread Arne Wiebalck

On 08 Jun 2017, at 15:58, Matt Riedemann <mriede...@gmail.com> wrote:

Nova stores the output of the Cinder os-initialize_connection info API in the 
Nova block_device_mappings table, and uses that later for making volume 
connections.

This data can get out of whack or need to be refreshed, like if your ceph 
server IP changes, or you need to recycle some secret uuid for your ceph 
cluster.

I think the only ways to do this on the nova side today are via volume 
detach/re-attach, reboot, migrations, etc - all of which, except live 
migration, are disruptive to the running guest.

I've kicked around the idea of adding some sort of admin API interface for 
refreshing the BDM.connection_info on-demand if needed by an operator. Does 
anyone see value in this? Are operators doing stuff like this already, but 
maybe via direct DB updates?

We could have something in the compute API which calls down to the compute for 
an instance and has it refresh the connection_info from Cinder and updates the 
BDM table in the nova DB. It could be an admin action API, or part of the 
os-server-external-events API, like what we have for the 'network-changed' 
event sent from Neutron which nova uses to refresh the network info cache.

Other ideas or feedback here?

I have opened https://bugs.launchpad.net/cinder/+bug/1452641 for this issue 
some time ago.
Back then I was thinking more of using an alias rather than dealing with IP addresses
directly. From
what I understand, this should work with Ceph. In any case, there is still 
interest in a fix :-)

Cheers,
 Arne


--
Arne Wiebalck
CERN IT



Re: [openstack-dev] [nova][cinder] Is there interest in an admin-api to refresh volume connection info?

2017-06-08 Thread Arne Wiebalck

On 08 Jun 2017, at 15:58, Matt Riedemann <mriede...@gmail.com> wrote:

Nova stores the output of the Cinder os-initialize_connection info API in the 
Nova block_device_mappings table, and uses that later for making volume 
connections.

This data can get out of whack or need to be refreshed, like if your ceph 
server IP changes, or you need to recycle some secret uuid for your ceph 
cluster.

I think the only ways to do this on the nova side today are via volume 
detach/re-attach, reboot, migrations, etc - all of which, except live 
migration, are disruptive to the running guest.

I've kicked around the idea of adding some sort of admin API interface for 
refreshing the BDM.connection_info on-demand if needed by an operator. Does 
anyone see value in this? Are operators doing stuff like this already, but 
maybe via direct DB updates?

We could have something in the compute API which calls down to the compute for 
an instance and has it refresh the connection_info from Cinder and updates the 
BDM table in the nova DB. It could be an admin action API, or part of the 
os-server-external-events API, like what we have for the 'network-changed' 
event sent from Neutron which nova uses to refresh the network info cache.

Other ideas or feedback here?

I have opened https://bugs.launchpad.net/cinder/+bug/1452641 for this issue 
some time ago.
Back then I was thinking more of using an alias rather than dealing with IP addresses
directly. From
what I understand, this should work with Ceph. In any case, there is still 
interest in a fix :-)

Cheers,
 Arne


--
Arne Wiebalck
CERN IT



Re: [Openstack-operators] [cinder] Thoughts on cinder readiness

2017-06-01 Thread Arne Wiebalck
e a single place where
>>other operators can specify how ready they believe a project is for
>>a given release and for a given configuration; and ideally provide
>>details/comments as to why they believe this).
>> >
>> > -Josh

--
Arne Wiebalck
CERN IT



Re: [Openstack-operators] DB deadlocks due to connection string

2017-05-23 Thread Arne Wiebalck
As discussed on the Cinder channel, I’ve opened

https://bugs.launchpad.net/oslo.db/+bug/1692956

to see if oslo.db would be a good place to produce a warning when it detects
this potential misconfiguration.

Cheers,
 Arne

On 23 May 2017, at 17:25, Sean McGinnis <sean.mcgin...@gmx.com> wrote:

Just wanted to put this out there to hopefully spread awareness and
prevent it from happening more.

We had a bug reported in Cinder of hitting a deadlock when attempting
to delete multiple volumes simultaneously:

https://bugs.launchpad.net/cinder/+bug/1685818

Some were seeing it, but others were not able to reproduce the error
in their environments.

What it came down to is the use of "mysql://" vs "mysql+pymysql://"
for the database connection string. Big thanks to Gerhard Muntingh
for noticing this difference.

Basically, when using "mysql://" for the connection string, that uses
blocking calls that prevent other "threads" from running at the same
time, causing these deadlocks.

This doesn't just impact Cinder, so I wanted to get the word out that
it may be worth checking your configurations and make sure you are
using "mysql+pymysql://" for your connections.

Sean



--
Arne Wiebalck
CERN IT



Re: [Openstack] Ocata Access and Security Tab

2017-03-20 Thread Arne Wiebalck
Georgios,

Actually, the documentation you reference says that the panel has been split 
into two
new panels which I guess are the “API Access” and “Key Pairs” panels you see, 
no?

Cheers,
 Arne

> On 20 Mar 2017, at 20:48, Georgios Dimitrakakis <gior...@acmac.uoc.gr> wrote:
> 
> Hi Arne,
> 
> unfortunately my Ocata version does not have that.
> 
> I am attaching what I am seeing both as an admin and as a non-privileged user.
> 
> Furthermore I cannot find anything like that either in API Access or in 
> KeyPairs sections.
> 
> Best,
> 
> G.
> 
> 
> On Mon, 20 Mar 2017 20:34:51 +0100, Arne Wiebalck wrote:
>> Georgios,
>> 
>> This should be on the left hand side panel: Click on Project —>
>> Compute —> Access & Security.
>> I attach a screenshot how it looks here.
>> 
>> HTH,
>> Arne
>> 
>>> On 20 Mar 2017, at 20:13, Georgios Dimitrakakis wrote:
>>> 
>>> In my OpenStack Ocata installation I am trying to find where the
>>> "Access and Security" tab is.
>>> 
>>> According to this: https://docs.openstack.org/releasenotes/horizon/ocata.html
>>> 
>>> there should be a new panel but I am unable to see it!!!
>>> 
>>> Any help will be very much appreciated!
>>> 
>>> Regards,
>>> 
>>> G.
>>> 
>> 
>> 
>> 
> 




Re: [Openstack-operators] Milan Ops Midcycle - Cells v2 session

2017-03-17 Thread Arne Wiebalck
Hi Matt,

> On 17 Mar 2017, at 01:41, Matt Riedemann <mriede...@gmail.com> wrote:
> 
> On 3/14/2017 4:11 AM, Arne Wiebalck wrote:
>> A first list of topics for the Cells v2 session is available here:
>> 
>> https://etherpad.openstack.org/p/MIL-ops-cellsv2
>> 
>> Please feel free to add items you’d like to see discussed.
>> 
>> Thanks!
>> Belmiro & Arne
>> 
>> --
>> Arne Wiebalck
>> CERN IT
>> 
>> 
>> 
>> 
> 
> Hi,
> 
> I've gone through the MIL ops midcycle etherpad for cells v2 [1] and left 
> some notes, answers, links to the PTG cells v2 recap, and some 
> questions/feedback of my own.

Thanks for updating the etherpad.

> Specifically, there was a request that some nova developers could be at the 
> ops meetup session and as noted in the etherpad, the fact this was happening 
> came as a late surprise to several of us. The developers are already trying 
> to get funding to the PTG and the summit (if they are lucky), and throwing in 
> a third travel venue is tough, especially with little to no direct advance 
> notice. Please ping us in IRC or direct email, or put it on the weekly nova 
> meeting agenda as a reminder. Then we can try and get someone there if 
> possible.

Great, thanks. I think the cells v2 session at the MIL ops meetup was somewhat
special, as none of the attendees (except for me) was using cells v1, and only one
site was already on Mitaka and had hence seen the first signs of v2 in its deployment.
So, while these sessions live from people sharing their experience, this one was more
about the concept of cells and their advantages in general, plus some theory about v2
that I had prepared by reading through the release notes. That’s where I thought that,
for changes that are 2 or 3 releases away for most operators but that will be mandatory,
a developer would be in a much better position to give that overview and answer specific
questions than I was. This is of course not limited to nova, and I gave that feedback to
Melvin as well for future ops meetups.

Maybe it was simply a little too early for a cells v2 session :-)

> 
> If you're going to be in Boston for the Forum and are interested in talking 
> about Nova, our topic brainstorming etherpad is here [2].
> 
> [1] https://etherpad.openstack.org/p/MIL-ops-cellsv2
> [2] https://etherpad.openstack.org/p/BOS-Nova-brainstorming


As you probably saw on the etherpad, there is interest from the operators’ side
in a discussion in Boston about cells v2; it would be great if
we could make this happen.

Cheers,
 Arne

--
Arne Wiebalck
CERN IT




[Openstack-operators] Milan Ops Midcycle - Cells v2 session

2017-03-14 Thread Arne Wiebalck
A first list of topics for the Cells v2 session is available here:

https://etherpad.openstack.org/p/MIL-ops-cellsv2

Please feel free to add items you’d like to see discussed.

Thanks!
 Belmiro & Arne

--
Arne Wiebalck
CERN IT



Re: [Openstack] [OpenStack] VM Disk Quota

2017-02-14 Thread Arne Wiebalck
Hi,

With Ceph as the backend, we use Cinder’s QoS feature to limit IOPS on users’ 
volumes:
http://ceph.com/planet/openstack-ceph-rbd-and-qos/

We introduced this mostly to avoid a user exhausting the IOPS we have available 
on the
Ceph cluster. For the local root disks, we do not limit the IO.

We have configured different volume types (standard and high IOPS), and the 
feature
works really well for us.
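
In case it helps, the wiring is essentially just a QoS spec associated with a
volume type, along these lines (names and numbers here are only examples, not
our actual values):

—>
# front-end means the limits are enforced by libvirt on the hypervisor
cinder qos-create standard-iops consumer=front-end \
    read_iops_sec=100 write_iops_sec=100
cinder type-create standard
cinder qos-associate <qos-spec-id> <volume-type-id>
<—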

HTH,
 Arne


> On 15 Feb 2017, at 06:10, Xu, Rongjie (Nokia - CN/Hangzhou) 
> <rongjie...@nokia.com> wrote:
> 
> Hi
>  
> I want to limit VM IOPS. Currently I am using “quota:disk_total_iops_sec” in 
> Flavor. My question is:
>  
> Does it set the IOPS limitation just for root disk? What if I have a cinder 
> volume attached to the VM?
>  
> Thanks.
>  
> Best Regards
> Xu Rongjie (Max)
>  
>  
>  
> ___
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

—
Arne Wiebalck
CERN IT
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] nova assign isntances to cpu pinning

2017-02-07 Thread Arne Wiebalck

> On 07 Feb 2017, at 20:05, Steve Gordon <sgor...@redhat.com> wrote:
> 
> 
> 
> - Original Message -
>> From: "Arne Wiebalck" <arne.wieba...@cern.ch>
>> To: "Steve Gordon" <sgor...@redhat.com>
>> Cc: "Manuel Sopena Ballesteros" <manuel...@garvan.org.au>, 
>> openstack@lists.openstack.org
>> Sent: Tuesday, February 7, 2017 2:00:23 PM
>> Subject: Re: [Openstack] nova assign isntances to cpu pinning
>> 
>> 
>>> On 07 Feb 2017, at 18:57, Steve Gordon <sgor...@redhat.com> wrote:
>>> 
>>> - Original Message -
>>>> From: "Arne Wiebalck" <arne.wieba...@cern.ch
>>>> <mailto:arne.wieba...@cern.ch>>
>>>> To: "Manuel Sopena Ballesteros" <manuel...@garvan.org.au
>>>> <mailto:manuel...@garvan.org.au>>
>>>> Cc: openstack@lists.openstack.org <mailto:openstack@lists.openstack.org>
>>>> Sent: Tuesday, February 7, 2017 2:46:39 AM
>>>> Subject: Re: [Openstack] nova assign isntances to cpu pinning
>>>> 
>>>> Manuel,
>>>> 
>>>> Rather than with aggregate metadata we assign instances to NUMA nodes via
>>>> flavor extra_specs,
>>> 
>>> These are not necessarily mutually exclusive, the purpose of the aggregates
>>> in the documentation (and assumed in the original design) is to segregate
>>> compute nodes for dedicated guests (if using truly dedicated CPU by
>>> setting "hw:cpu_policy" to "dedicated" as Chris mentions) from those for
>>> over-committed guests. If you are only using the NUMA node alignment (as
>>> shown below) this doesn't apply, because it's only guaranteeing how many
>>> nodes your guest will be spread across not that it will have dedicated
>>> access to the CPU(s) it is on. Folks who want truly dedicated vCPU:pCPU
>>> mappings should still use the aggregates, unless *only* running workloads
>>> with dedicated CPU needs.
>> 
>> Right: we’re using cells to separate computing-intensive from overcommitted
>> resources, configured the flavor
>> extra-specs for NUMA nodes (and huge pages) only in the compute part and have
>> made good experiences
>> with this setup.
>> Depending on the use case and the individual deployment, there are certainly
>> different options to set things up.
>> If not already available somewhere, it may be good to document the options
>> depending on needs and setup?
>> 
>> Cheers,
>> Arne
> 
> A fairly decent amount of it was contributed by some of the Nova folks who 
> worked on this functionality and ended up in the Admin Guide here:
> 
> http://docs.openstack.org/admin-guide/compute-adv-config.html 
> <http://docs.openstack.org/admin-guide/compute-adv-config.html>

Awesome, thanks for pointing this out!

Cheers,
 Arne

> 
>>> -Steve
>>> 
>>>> i.e. nova flavor-show reports something like
>>>> 
>>>> —>
>>>> | extra_specs| {"hw:numa_nodes": "1"} |
>>>> <—
>>>> 
>>>> for our NUMA-aware flavors.
>>>> 
>>>> This seems to work pretty well and gives the desired performance
>>>> improvement.
>>>> 
>>>> Cheers,
>>>> Arne
>>>> 
>>>> 
>>>> 
>>>>> On 07 Feb 2017, at 01:19, Manuel Sopena Ballesteros
>>>>> <manuel...@garvan.org.au> wrote:
>>>>> 
>>>>> Hi,
>>>>> 
>>>>> I am trying to isolate my instances by cpu socket in order to improve my
>>>>> NUMA hardware performance.
>>>>> 
>>>>> [root@openstack-dev ~(keystone_admin)]# nova aggregate-set-metadata numa
>>>>> pinned=true
>>>>> Metadata has been successfully updated for aggregate 1.
>>>>> ++++++
>>>>> | Id | Name | Availability Zone | Hosts | Metadata|
>>>>> ++++++
>>>>> | 1  | numa   | -   | |
>>>>> | 'pinned=true' |
>>>>> ++++++
>>>>> 
>>>>> I have done the changes on the nova metadata but my admin can’t see the
>>>>> instances
>>>>> 
>>>>> [root@openstack-dev ~(ke

Re: [Openstack] nova assign isntances to cpu pinning

2017-02-06 Thread Arne Wiebalck
Manuel,

Rather than with aggregate metadata we assign instances to NUMA nodes via 
flavor extra_specs,
i.e. nova flavor-show reports something like 

—>
| extra_specs| {"hw:numa_nodes": "1"} | 
<—

for our NUMA-aware flavors. 

This seems to work pretty well and gives the desired performance improvement.
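
Setting it on an existing flavor is a one-liner, e.g. (the flavor name is just
an example):

—>
nova flavor-key m1.numa set hw:numa_nodes=1
<—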

Cheers,
 Arne



> On 07 Feb 2017, at 01:19, Manuel Sopena Ballesteros <manuel...@garvan.org.au> 
> wrote:
> 
> Hi,
>  
> I am trying to isolate my instances by cpu socket in order to improve my NUMA 
> hardware performance.
>  
> [root@openstack-dev ~(keystone_admin)]# nova aggregate-set-metadata numa 
> pinned=true
> Metadata has been successfully updated for aggregate 1.
> ++++++
> | Id | Name | Availability Zone | Hosts | Metadata|
> ++++++
> | 1  | numa   | -   | | 'pinned=true' 
> |
> ++++++
>  
> I have done the changes on the nova metadata but my admin can’t see the 
> instances
>  
> [root@openstack-dev ~(keystone_admin)]# nova aggregate-add-host numa 
> 4d4f3c3f-2894-4244-b74c-2c479e296ff8
> ERROR (NotFound): Compute host 4d4f3c3f-2894-4244-b74c-2c479e296ff8 could not 
> be found. (HTTP 404) (Request-ID: req-286985d8-d6ce-429e-b234-dd5eac5ad62e)
>  
> And the user who has access to those instances does not have privileges to 
> add the hosts
>  
> [root@openstack-dev ~(keystone_myuser)]# nova aggregate-add-host numa 
> 4d4f3c3f-2894-4244-b74c-2c479e296ff8
> ERROR (Forbidden): Policy doesn't allow os_compute_api:os-aggregates:index to 
> be performed. (HTTP 403) (Request-ID: 
> req-a5687fd4-c00d-4b64-af9e-bd5a82eb99c1)
>  
> What would be the recommended way to do this?
>  
> Thank you very much
>  
> Manuel Sopena Ballesteros | Big data Engineer
> Garvan Institute of Medical Research 
> The Kinghorn Cancer Centre, 370 Victoria Street, Darlinghurst, NSW 2010
> T: + 61 (0)2 9355 5760 | F: +61 (0)2 9295 8507 | E: manuel...@garvan.org.au
>  
> NOTICE
> Please consider the environment before printing this email. This message and 
> any attachments are intended for the addressee named and may contain legally 
> privileged/confidential/copyright information. If you are not the intended 
> recipient, you should not read, use, disclose, copy or distribute this 
> communication. If you have received this message in error please notify us at 
> once by return email and then delete both messages. We accept no liability 
> for the distribution of viruses or similar in electronic communications. This 
> notice should not be removed.
> ___
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

--
Arne Wiebalck
CERN IT

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack-operators] CentOS 7.3 libvirt appears to be broken for virtio-scsi and ceph with cephx auth

2016-12-21 Thread Arne Wiebalck
From the quoted mail thread and a quick test I just did it seems that this 
issue only affects virtio-scsi, not virtio-blk.
Does anyone know if that is correct?

Thanks!
 Arne 

> On 20 Dec 2016, at 17:57, Mike Lowe <joml...@iu.edu> wrote:
> 
> I got a rather nasty surprise upgrading from CentOS 7.2 to 7.3.  As far as I 
> can tell the libvirt 2.0.0 that ships with 7.3 doesn’t behave the same way as 
> the 1.2.17 that ships with 7.2 when using ceph with cephx auth during volume 
> attachment using virtio-scsi.  It looks like it fails to add the cephx 
> secret.  The telltale signs are "No secret with id 'scsi0-0-0-1-secret0’” in 
> the /var/log/libvirt/qemu instance logs.  I’ve filed a bug here 
> https://bugzilla.redhat.com/show_bug.cgi?id=1406442 and there is a libvirt 
> mailing list  thread about a fix for libvirt 2.5.0 for what looks like this 
> same problem 
> https://www.redhat.com/archives/libvir-list/2016-October/msg00396.html  I’m 
> out of ideas for workarounds having had kind of a disastrous attempt at 
> downgrading to libvirt 1.2.17, so if anybody has any suggestions I’m all 
> ears.  
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

--
Arne Wiebalck
CERN IT

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack] Max open files limit for nova-api

2016-12-19 Thread Arne Wiebalck
Prashant,

If this is for systemd, how about changing the nova-api unit file?

Something like

—>
[Service]
...
LimitNOFILE=65536
<—

should do it.
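
If you prefer not to touch the packaged unit file, a systemd drop-in achieves
the same; roughly (the unit name may differ in your deployment):

—>
mkdir -p /etc/systemd/system/openstack-nova-api.service.d
cat > /etc/systemd/system/openstack-nova-api.service.d/limits.conf <<EOF
[Service]
LimitNOFILE=65536
EOF
systemctl daemon-reload
systemctl restart openstack-nova-api
<—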

Cheers,
 Arne



On 19 Dec 2016, at 17:23, Prashant Shetty 
<prashantshetty1...@gmail.com<mailto:prashantshetty1...@gmail.com>> wrote:

Team,

I have a scale setup and metadata requests seem to fail from the instances. The
main reason for the failure is the "Max open files" limit (1024) set on the
nova-api service.
Though on the controller we have set a max open files limit of 65k
(limits.conf), nova-api always comes up with the 1024 limit, causing failures.

Could someone let me know how can we change the max open files limit of 
nova-api service?

Setup Details:

· Single controller
· 500 KVM computes
· Devstack branch: stable/newton
· We have native metadata and dhcp running on platform
· 3750 instances


stack@controller:/opt/stack/logs$ ps aux | grep nova-api
stack 14998 2.2 0.3 272104 121648 pts/8 S+ 09:53 0:14 /usr/bin/python 
/usr/local/bin/nova-api
stack@controller:/opt/stack/logs$
stack@controller:/opt/stack/logs$
stack@controller:/opt/stack/logs$ cat /proc/14998/limits
Limit Soft Limit Hard Limit Units
Max cpu time unlimited unlimited seconds
Max file size unlimited unlimited bytes
Max data size unlimited unlimited bytes
Max stack size 8388608 unlimited bytes
Max core file size unlimited unlimited bytes
Max resident set unlimited unlimited bytes
Max processes 128611 128611 processes
Max open files 1024 4096 files
Max locked memory 65536 65536 bytes
Max address space unlimited unlimited bytes
Max file locks unlimited unlimited locks
Max pending signals 128611 128611 signals
Max msgqueue size 819200 819200 bytes
Max nice priority 0 0
Max realtime priority 0 0
Max realtime timeout unlimited unlimited us
stack@controller:/opt/stack/logs$

n-api:

2016-11-08 18:44:26.168 30069 INFO nova.metadata.wsgi.server 
[req-fb4d729b-a1cd-4df1-aaf8-3f854a739cce - -] (30069) wsgi exited, 
is_accepting=True
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 457, 
in fire_timers
timer()
  File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/timer.py", line 
58, in __call__
cb(*args, **kw)
  File "/usr/local/lib/python2.7/dist-packages/eventlet/event.py", line 168, in 
_do_send
waiter.switch(result)
  File "/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py", line 
214, in main
result = function(*args, **kwargs)
  File "/opt/stack/nova/nova/utils.py", line 1066, in context_wrapper
return func(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/eventlet/wsgi.py", line 865, in 
server
client_socket = sock.accept()
  File "/usr/local/lib/python2.7/dist-packages/eventlet/greenio/base.py", line 
214, in accept
res = socket_accept(fd)
  File "/usr/local/lib/python2.7/dist-packages/eventlet/greenio/base.py", line 
56, in socket_accept
return descriptor.accept()
  File "/usr/lib/python2.7/socket.py", line 206, in accept
sock, addr = self._sock.accept()
error: [Errno 24] Too many open files

Thanks,
Prashant

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : 
openstack@lists.openstack.org<mailto:openstack@lists.openstack.org>
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

--
Arne Wiebalck
CERN IT

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [openstack-dev] [manila] Access key via the UI?

2016-11-03 Thread Arne Wiebalck
Hi Goutham,

It’s already being returned in the manila access-list API as of 2.21, if you’re 
using the latest python-manilaclient, you should have it there. However, it’s 
missing in the UI:

Yes, I was comparing what I can do on the CLI and what was offered with the UI.
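
(On the CLI, with a recent client, something like

  manila --os-share-api-version 2.21 access-list <share>

already includes the access_key column.)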

https://github.com/openstack/manila-ui/blob/c986a100eecf46af4b597cdcf59b5bc1edc2d1b0/manila_ui/dashboards/project/shares/shares/tables.py#L278

Are you suggesting to simply display it? (I was more thinking of a one-time 
download when giving access,
similar to what is offered for keys for instance creation.)

Could you open a bug?

Just did it:
https://bugs.launchpad.net/manila-ui/+bug/1638934

Thanks!
 Arne




On 11/3/16, 7:32 AM, "Arne Wiebalck" 
<arne.wieba...@cern.ch<mailto:arne.wieba...@cern.ch>> wrote:

   Hi,

   As cephx has been added as an access type in the dashboard and an access key 
can now
   be part of the API's response to access list requests, is it planned to have 
the access key to
   be returned when adding a cephx access rule via the UI (for instance similar 
to what ‘Create
   Key Pair’ in the instance panel does)? Couldn’t find any mention of such an 
activity.

   Thanks!
    Arne

   --
   Arne Wiebalck
   CERN IT

   __
   OpenStack Development Mailing List (not for usage questions)
   Unsubscribe: 
openstack-dev-requ...@lists.openstack.org<mailto:openstack-dev-requ...@lists.openstack.org>?subject:unsubscribe
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org<mailto:openstack-dev-requ...@lists.openstack.org>?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

--
Arne Wiebalck
CERN IT

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [manila] Access key via the UI?

2016-11-03 Thread Arne Wiebalck
Hi,

As cephx has been added as an access type in the dashboard and an access key 
can now
be part of the API's response to access list requests, is it planned to have 
the access key to
be returned when adding a cephx access rule via the UI (for instance similar to 
what ‘Create
Key Pair’ in the instance panel does)? Couldn’t find any mention of such an 
activity.

Thanks!
 Arne

--
Arne Wiebalck
CERN IT

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [manila] Relation of share types and share protocols

2016-11-02 Thread Arne Wiebalck

On 02 Nov 2016, at 15:15, Ben Swartzlander 
<b...@swartzlander.org<mailto:b...@swartzlander.org>> wrote:

On 11/02/2016 06:23 AM, Arne Wiebalck wrote:
Hi Valeriy,

I wasn’t aware, thanks!

So, if each driver exposes the storage_protocols it supports, would it be 
sensible to have
manila-ui check the extra_specs for this key and limit the protocol choice for 
a given
share type to the supported protocols (in order to avoid that the user tries to 
create
incompatible type/protocol combinations)?

This is not possible today, as any extra_specs related to protocols are hidden 
from normal API users. It's possible to make sure the share type called 
"nfs_shares" always goes to a backend that supports NFS, but it's not possible 
to programatically know that in a client, and therefore it's not possible to 
build the smarts into the UI. We intend to fix this though, as there is no good 
reason to keep that information hidden.

I see, thanks.

Concerning the workaround for bug/1622732: Would you agree that configuring 
protocol/type
tuples (rather than only protocols) would be a better solution?

Cheers,
 Arne


Thanks again!
 Arne


On 02 Nov 2016, at 10:00, Valeriy Ponomaryov 
<vponomar...@mirantis.com<mailto:vponomar...@mirantis.com>> wrote:

Hello, Arne

Each share driver has capability called "storage_protocol". So, for case you 
describe, you should just define such extra spec in your share type that will 
match value reported by desired backend[s].

It is the purpose of extra specs in share types, you (as cloud admin) define 
its connection yourself, either it is strong or not.

Valeriy

On Wed, Nov 2, 2016 at 9:51 AM, Arne Wiebalck 
<arne.wieba...@cern.ch<mailto:arne.wieba...@cern.ch>> wrote:
Hi,

We’re preparing the use of Manila in production and noticed that there seems to 
be no strong connection
between share types and share protocols.

I would think that not all backends will support all protocols. If that’s true, 
wouldn’t it be sensible to establish
a stronger relation and have supported protocols defined per type, for instance 
as extra_specs (which, as one
example, could then be used by the Manila UI to limit the choice to supported 
protocols for a given share
type, rather than maintaining two independent and hard-coded tuples)?

Thanks!
 Arne

--
Arne Wiebalck
CERN IT

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe<http://openstack-dev-requ...@lists.openstack.org/?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Kind Regards
Valeriy Ponomaryov
www.mirantis.com<http://www.mirantis.com/>
vponomar...@mirantis.com<mailto:vponomar...@mirantis.com>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org<mailto:openstack-dev-requ...@lists.openstack.org>?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

--
Arne Wiebalck
CERN IT




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe<mailto:openstack-dev-requ...@lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org<mailto:openstack-dev-requ...@lists.openstack.org>?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

--
Arne Wiebalck
CERN IT

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [manila] Relation of share types and share protocols

2016-11-02 Thread Arne Wiebalck

> On 02 Nov 2016, at 11:52, Tom Barron <whui...@gmail.com> wrote:
> 
> 
> 
> On 11/02/2016 06:23 AM, Arne Wiebalck wrote:
>> Hi Valeriy,
>> 
>> I wasn’t aware, thanks! 
>> 
>> So, if each driver exposes the storage_protocols it supports, would it
>> be sensible to have
>> manila-ui check the extra_specs for this key and limit the protocol
>> choice for a given
>> share type to the supported protocols (in order to avoid that the user
>> tries to create
>> incompatible type/protocol combinations)?
> 
> Not necessarily tied to share types, but we have this bug open w.r.t.
> showing only protocols that are available given available backends in
> the actual deployment:
> 
> https://bugs.launchpad.net/manila-ui/+bug/1622732

Thanks for the link, Tom.

As mentioned, I think linking protocols and types would be helpful to guide 
users
during share creation. So, as an intermediate step, how about extending this 
patch
by having protocol/type(s) tuples (rather than only protocols) in the UI config 
file for
Manila and filling the menus in the UI accordingly?

And for a more complete solution, I was wondering if it wouldn't be possible to 
go
over the available share types, extract the supported storage_protocols, and use
these for the protocol pull down menu (and limit the type selection to the ones
supporting the protocol selected by the user). This would save operators from
having to keep the UI config and the Manila config in sync.

Cheers,
 Arne


> 
>> 
>> Thanks again!
>> Arne
>> 
>> 
>>> On 02 Nov 2016, at 10:00, Valeriy Ponomaryov <vponomar...@mirantis.com
>>> <mailto:vponomar...@mirantis.com>> wrote:
>>> 
>>> Hello, Arne
>>> 
>>> Each share driver has capability called "storage_protocol". So, for
>>> case you describe, you should just define such extra spec in your
>>> share type that will match value reported by desired backend[s].
>>> 
>>> It is the purpose of extra specs in share types, you (as cloud admin)
>>> define its connection yourself, either it is strong or not.
>>> 
>>> Valeriy
>>> 
>>> On Wed, Nov 2, 2016 at 9:51 AM, Arne Wiebalck <arne.wieba...@cern.ch
>>> <mailto:arne.wieba...@cern.ch>> wrote:
>>> 
>>>Hi,
>>> 
>>>We’re preparing the use of Manila in production and noticed that
>>>there seems to be no strong connection
>>>between share types and share protocols.
>>> 
>>>I would think that not all backends will support all protocols. If
>>>that’s true, wouldn’t it be sensible to establish
>>>a stronger relation and have supported protocols defined per type,
>>>for instance as extra_specs (which, as one
>>>example, could then be used by the Manila UI to limit the choice
>>>to supported protocols for a given share
>>>type, rather than maintaining two independent and hard-coded tuples)?
>>> 
>>>Thanks!
>>> Arne
>>> 
>>>--
>>>Arne Wiebalck
>>>CERN IT
>>> 
>>>
>>> __
>>>OpenStack Development Mailing List (not for usage questions)
>>>Unsubscribe:
>>>openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>><http://openstack-dev-requ...@lists.openstack.org/?subject:unsubscribe>
>>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>><http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev>
>>> 
>>> 
>>> 
>>> 
>>> -- 
>>> Kind Regards
>>> Valeriy Ponomaryov
>>> www.mirantis.com <http://www.mirantis.com/>
>>> vponomar...@mirantis.com <mailto:vponomar...@mirantis.com>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org
>>> <mailto:openstack-dev-requ...@lists.openstack.org>?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
>> --
>> Arne Wiebalck
>> CERN IT
>> 
>> 
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

--
Arne Wiebalck
CERN IT


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [manila] Relation of share types and share protocols

2016-11-02 Thread Arne Wiebalck
Hi Valeriy,

I wasn’t aware, thanks!

So, if each driver exposes the storage_protocols it supports, would it be 
sensible to have
manila-ui check the extra_specs for this key and limit the protocol choice for 
a given
share type to the supported protocols (in order to avoid that the user tries to 
create
incompatible type/protocol combinations)?
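
For completeness, on the admin CLI defining such an extra spec would look
something like this (type name and protocol are just examples):

—>
manila type-create nfs_type false
manila type-key nfs_type set storage_protocol=NFS
<—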

Thanks again!
 Arne


On 02 Nov 2016, at 10:00, Valeriy Ponomaryov 
<vponomar...@mirantis.com<mailto:vponomar...@mirantis.com>> wrote:

Hello, Arne

Each share driver has capability called "storage_protocol". So, for case you 
describe, you should just define such extra spec in your share type that will 
match value reported by desired backend[s].

It is the purpose of extra specs in share types, you (as cloud admin) define 
its connection yourself, either it is strong or not.

Valeriy

On Wed, Nov 2, 2016 at 9:51 AM, Arne Wiebalck 
<arne.wieba...@cern.ch<mailto:arne.wieba...@cern.ch>> wrote:
Hi,

We’re preparing the use of Manila in production and noticed that there seems to 
be no strong connection
between share types and share protocols.

I would think that not all backends will support all protocols. If that’s true, 
wouldn’t it be sensible to establish
a stronger relation and have supported protocols defined per type, for instance 
as extra_specs (which, as one
example, could then be used by the Manila UI to limit the choice to supported 
protocols for a given share
type, rather than maintaining two independent and hard-coded tuples)?

Thanks!
 Arne

--
Arne Wiebalck
CERN IT

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe<http://openstack-dev-requ...@lists.openstack.org/?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Kind Regards
Valeriy Ponomaryov
www.mirantis.com<http://www.mirantis.com/>
vponomar...@mirantis.com<mailto:vponomar...@mirantis.com>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org<mailto:openstack-dev-requ...@lists.openstack.org>?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

--
Arne Wiebalck
CERN IT

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [manila] Relation of share types and share protocols

2016-11-02 Thread Arne Wiebalck
Hi,

We’re preparing the use of Manila in production and noticed that there seems to 
be no strong connection
between share types and share protocols.

I would think that not all backends will support all protocols. If that’s true, 
wouldn’t it be sensible to establish
a stronger relation and have supported protocols defined per type, for instance 
as extra_specs (which, as one
example, could then be used by the Manila UI to limit the choice to supported 
protocols for a given share
type, rather than maintaining two independent and hard-coded tuples)?

Thanks!
 Arne 

--
Arne Wiebalck
CERN IT

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] Database cleanup scripts?

2016-09-02 Thread Arne Wiebalck

On 02 Sep 2016, at 17:08, Matt Fischer 
<m...@mattfischer.com<mailto:m...@mattfischer.com>> wrote:


On Fri, Sep 2, 2016 at 8:57 AM, Abel Lopez 
<alopg...@gmail.com<mailto:alopg...@gmail.com>> wrote:
For cinder, since kilo, we've had 'cinder-manage db purge-deleted'


This is the issue we see with this tool in Liberty, I think this might be fixed 
in M.

# cinder-manage db purge 365
 (some stuff works here)
...

2016-09-02 15:07:02.196 203924 ERROR cinder DBReferenceError: 
(pymysql.err.IntegrityError) (1451, u'Cannot delete or update a parent row: a 
foreign key constraint fails (`cinder`.`volume_glance_metadata`, CONSTRAINT 
`volume_glance_metadata_ibfk_2` FOREIGN KEY (`snapshot_id`) REFERENCES 
`snapshots` (`id`))') [SQL: u'DELETE FROM snapshots WHERE snapshots.deleted_at 
< %s'] [parameters: (datetime.datetime(2015, 9, 3, 15, 7, 2, 189701),)]


I think this is fixed here: https://review.openstack.org/#/c/338228/
(which comes with Newton only).

Cheers,
 Arne


--
Arne Wiebalck
CERN IT

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Delete cinder service

2016-09-02 Thread Arne Wiebalck

On 02 Sep 2016, at 11:50, Nick Jones 
<nick.jo...@datacentred.co.uk<mailto:nick.jo...@datacentred.co.uk>> wrote:

On 2 Sep 2016, at 9:28, William Josefsson wrote:

[..]

Is there any cleanup of volumes entries with deleted=1, or is it
normal these old entries lay around? thx will

There’s a timely blog post from Matt Fischer on exactly that subject:

http://www.mattfischer.com/blog/?p=744

His comment regarding the Cinder DB suggests that this process was broken in 
Liberty, however.  Would be good to have confirmation that it’s been rectified 
in Mitaka.

We actually did try it on Mitaka on a copy of our DB and ran into this one:
https://bugs.launchpad.net/cinder/+bug/1489523 (filed by Matt, btw :-)

We filed a patch, but this will only come with Newton from what I see.

Cheers,
 Arne

--
Arne Wiebalck
CERN IT

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Delete cinder service

2016-09-02 Thread Arne Wiebalck
There is ‘cinder-manage db purge’ to delete entries which are marked as deleted.

I never tried it (and suggest starting with a copy of the db before touching 
the prod
ones). We usually do not purge databases unless we hit an issue.
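
A quick way to get such a copy to play with (database names are examples):

—>
mysqldump --single-transaction cinder > cinder_copy.sql
mysql -e 'CREATE DATABASE cinder_copy'
mysql cinder_copy < cinder_copy.sql
# then point a test cinder-manage/cinder.conf at cinder_copy and purge there
<—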

Cheers,
 Arne


> On 02 Sep 2016, at 10:28, William Josefsson <william.josef...@gmail.com> 
> wrote:
> 
> Thanks everyone for your replies! I did a safe select first to make
> sure there was only one match. than I updated deleted=1 for that
> service which seem to work. Now 'cinder service-list' shows the right
> output.
> 
> I notice in DB 'volumes', there are plenty of old volume entries, long
> ago deleted, and they have 'deleted=1'. The host value, is the old
> host name that no longer exist.
> 
> Is there any cleanup of volumes entries with deleted=1, or is it
> normal these old entries lay around? thx will
> 
> 
> 
> On Fri, Sep 2, 2016 at 1:24 AM, Kris G. Lindgren <klindg...@godaddy.com> 
> wrote:
>> Just be careful with LIMIT x on your servers if you have replicated mysql 
>> databases.  At least under older versions of mysql this can lead to broken 
>> replication as the results of the query performed on the master and on the 
>> slave are not guaranteed to be the same.
>> 
>> https://dev.mysql.com/doc/refman/5.7/en/replication-features-limit.html
>> 
>> ___
>> Kris Lindgren
>> Senior Linux Systems Engineer
>> GoDaddy
>> 
>> On 9/1/16, 9:51 AM, "Nick Jones" <nick.jo...@datacentred.co.uk> wrote:
>> 
>> 
>>On 1 Sep 2016, at 15:36, Jonathan D. Proulx wrote:
>> 
>>> On Thu, Sep 01, 2016 at 04:25:25PM +0300, Vladimir Prokofev wrote:
>>> :I've used direct database update to achive this in Mitaka:
>>> :use cinder;
>>> :update services set deleted = '1' where ;
>>> 
>>> 
>>> I belive the official way is:
>>> 
>>> cinder-manage service remove  
>>> 
>>> Which probably more or less does the same thing...
>> 
>>Yep.  Both options basically require direct interaction with the
>>database as opposed to via a Cinder API call, but at least with
>>cinder-manage the scope for making a mistake is far more limited than
>>missing some qualifying clause off an UPDATE statement (limit 1 is your
>>friend!) ;)
>> 
>>—
>> 
>>-Nick
>> 
>>--
>>DataCentred Limited registered in England and Wales no. 05611763
>> 
>>___
>>OpenStack-operators mailing list
>>OpenStack-operators@lists.openstack.org
>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>> 
>> 
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> 
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

--
Arne Wiebalck
CERN IT

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack] vCPU -> pCPU MAPPING

2016-07-08 Thread Arne Wiebalck
We have use cases in our cloud which require vCPU-to-NUMA_node pinning
to maximise the CPU performance available in the guests. From what we’ve
seen, there was no further improvement when the vCPUs were mapped
one-to-one to pCPUs (we did not study this in detail, though, as with the
NUMA node pinning the performance was sufficiently close to the physical
one).

To implement this, we specify the numa_nodes extra_spec for the corresponding
flavor and rely on nova’s placement policy.
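
To check on the hypervisor what the placement actually did, the standard
libvirt tooling is sufficient, e.g.:

—>
virsh vcpupin <instance>    # host CPUs each vCPU is allowed to run on
virsh numatune <instance>   # host NUMA node(s) the guest memory is bound to
<—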

HTH,
 Arne

—
Arne Wiebalck
CERN IT



> On 08 Jul 2016, at 19:22, Steve Gordon <sgor...@redhat.com> wrote:
> 
> - Original Message -
>> From: "Brent Troge" <brenttroge2...@gmail.com 
>> <mailto:brenttroge2...@gmail.com>>
>> To: openstack@lists.openstack.org <mailto:openstack@lists.openstack.org>
>> Sent: Friday, July 8, 2016 9:59:58 AM
>> Subject: [Openstack] vCPU -> pCPU MAPPING
>> 
>> context - high performance private cloud with cpu pinning
>> 
>> Is it possible to map vCPUs to specific pCPUs ?
>> Currently I see you can only direct which vCPUs are mapped to a specific
>> NUMA node
>> 
>> hw:numa_cpus.0=1,2,3,4
> 
> Just in addition to Jay's comment, the above does not do what I suspect you 
> think it does. The above tells Nova to expose vCPUs 1, 2, 3, and 4 in *guest* 
> NUMA node 0 when building the guest NUMA topology in the Libvirt XML. Nova 
> will endeavor to map these vCPUs to pCPUs on the same NUMA node on the host 
> as *each other* but that will not necessarily be NUMA node *0* on the host 
> depending on resource availability.
> 
> Thanks,
> 
> Steve
> 
>> However, to get even more granular, is it possible to create a flavor which
>> maps vCPU to specific pCPU within a numa node ?
>> 
>> Something like:
>> hw:numa_cpus.-=
>> 
>> hw:numa_cpus.0-1=1
>> hw:numa_cpus.0-2=2
>> 
>> 
>> Thanks!
>> 
>> ___
>> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> Post to : openstack@lists.openstack.org
>> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> 
> 
> ___
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack 
> <http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack>
> Post to : openstack@lists.openstack.org 
> <mailto:openstack@lists.openstack.org>
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack 
> <http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack>
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] CPU pinning question

2015-12-15 Thread Arne Wiebalck
What was configured was pinning to a set and that is reflected, no? Or is that 
not referred to as “pinning”?
Anyway, for performance we didn’t see a difference between 1:1 pinning and 
confining (?) the vCPUs to a set, as long as the instance is aware of the
underlying NUMA topology.

Cheers,
 Arne


> On 15 Dec 2015, at 17:11, Chris Friesen <chris.frie...@windriver.com> wrote:
> 
> Actually no, I don't think that's right.  When pinning is enabled each vCPU 
> will be affined to a single host CPU.  What is showing below is what I would 
> expect if the instance was using non-dedicated CPUs.
> 
> To the original poster, you should be using
> 
> 'hw:cpu_policy': 'dedicated'
> 
> in your flavor extra-specs to enable CPU pinning.  And you should enable the 
> NUMATopologyFilter scheduler filter.
> 
> Chris
> 
> 
> 
> On 12/15/2015 09:23 AM, Arne Wiebalck wrote:
>> The pinning seems to have done what you asked for, but you probably
>> want to confine your vCPUs to NUMA nodes.
>> 
>> Cheers,
>>  Arne
>> 
>> 
>>> On 15 Dec 2015, at 16:12, Satish Patel <satish@gmail.com> wrote:
>>> 
>>> Sorry forgot to reply all :)
>>> 
>>> This is what i am getting
>>> 
>>> [root@compute-1 ~]# virsh vcpupin instance-0043
>>> VCPU: CPU Affinity
>>> --
>>>   0: 2-3,6-7
>>>   1: 2-3,6-7
>>> 
>>> 
>>> Following numa info
>>> 
>>> [root@compute-1 ~]# numactl --hardware
>>> available: 2 nodes (0-1)
>>> node 0 cpus: 0 3 5 6
>>> node 0 size: 2047 MB
>>> node 0 free: 270 MB
>>> node 1 cpus: 1 2 4 7
>>> node 1 size: 2038 MB
>>> node 1 free: 329 MB
>>> node distances:
>>> node   0   1
>>>  0:  10  20
>>>  1:  20  10
>>> 
>>> On Tue, Dec 15, 2015 at 8:36 AM, Arne Wiebalck <arne.wieba...@cern.ch> 
>>> wrote:
>>>> The pinning we set up goes indeed into the <cputune> block:
>>>> 
>>>> —>
>>>> <vcpu placement='static'>32</vcpu>
>>>> <cputune>
>>>>   <shares>32768</shares>
>>>>   <vcpupin vcpu='0' cpuset='…'/>
>>>>   <vcpupin vcpu='1' cpuset='…'/>
>>>>   …
>>>> <—
>>>> 
>>>> What does “virsh vcpupin ” give for your instance?
>>>> 
>>>> Cheers,
>>>> Arne
>>>> 
>>>> 
>>>>> On 15 Dec 2015, at 13:02, Satish Patel <satish@gmail.com> wrote:
>>>>> 
>>>>> I am running JUNO version with qemu-kvm-ev-2.1.2-23.el7_1.9.1.x86_64
>>>>> on CentOS7.1
>>>>> 
>>>>> I am trying to configure CPU pinning because my application is cpu
>>>>> hungry. this is what i did.
>>>>> 
>>>>> in /etc/nova/nova.conf
>>>>> 
>>>>> vcpu_pin_set=2,3,6,7
>>>>> 
>>>>> 
>>>>> I have created aggregated host with pinning=true and created flavor
>>>>> with pinning, after that when i start VM on Host following this i can
>>>>> see in guest
>>>>> 
>>>>> ...
>>>>> ...
>>>>> <vcpu placement='static' cpuset='2-3,6-7'>2</vcpu>
>>>>> ...
>>>>> ...
>>>>> 
>>>>> But i am not seeing any <cputune> info.
>>>>> 
>>>>> Just want to make sure does my pinning working correctly or something
>>>>> is wrong. How do i verify my pinning config is correct?
>>>>> 
>>>>> ___
>>>>> Mailing list: 
>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>>>> Post to : openstack@lists.openstack.org
>>>>> Unsubscribe : 
>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>>> 
>>>> --
>>>> Arne Wiebalck
>>>> CERN IT
>>>> 
>> 
>> --
>> Arne Wiebalck
>> CERN IT
>> 
>> ___
>> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> Post to : openstack@lists.openstack.org
>> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> 
> 
> 
> ___
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

--
Arne Wiebalck
CERN IT

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] CPU pinning question

2015-12-15 Thread Arne Wiebalck
Thanks for clarifying the terminology, Chris, that’s helpful!

My two points about performance were:
- without overcommit, an instance confined in a NUMA node does not profit from 
1-to-1 pinning (at least from what we saw);
- an instance spanning multiple NUMA nodes needs to be aware of the topology 
(and our instances fulfil your condition for this).

Cheers,
 Arne


> On 15 Dec 2015, at 18:19, Chris Friesen <chris.frie...@windriver.com> wrote:
> 
> If you specify "vcpu_pin_set=2,3,6,7" in /etc/nova/nova.conf then nova will 
> limit the VMs to run on that subset of host CPUs.  This involves pinning from 
> libvirt, but isn't really considered a "dedicated" instance in nova.
> 
> By default, instances that can run on all of the allowed host CPUs are 
> considered "non-dedicated", "shared", or "floating" because the vCPUs are not 
> mapped one-to-one with a single host CPU.  For these instances the CPU 
> overcommit value determines how many vCPUs can run on a single host CPU.
> 
> If an instance specifies a cpu_policy of "dedicated", then each vCPU is 
> mapped one-to-one with a single host CPU.  This is known as a "pinned" or 
> "dedicated" instance.
> 
> The performance differences between the two come into play when there is 
> contention.  By default a non-dedicated vCPU can have 16x overcommit, so 
> there can be 16 vCPUs trying to run on a single host CPU.  In contrast, a 
> dedicated vCPU gets a whole host CPU all to itself.
> 
> As for the guest being aware of the underlying NUMA topology, this isn't 
> possible when the number of vCPUs in the instance isn't a multiple of the 
> number of hyperthreads in a host core.  The problem is that qemu doesn't have 
> a way to specify cores with varying numbers of threads.
> 
> Chris
> 
> On 12/15/2015 10:50 AM, Arne Wiebalck wrote:
>> What was configured was pinning to a set and that is reflected, no? Or is 
>> that not referred to as “pinning”?
>> Anyway, for performance we didn’t see a difference between 1:1 pinning and 
>> confining (?) the vCPUs to a
>> a set as long as the instance is aware of the underlying NUMA topology.
>> 
>> Cheers,
>>  Arne
>> 
>> 
>>> On 15 Dec 2015, at 17:11, Chris Friesen <chris.frie...@windriver.com> wrote:
>>> 
>>> Actually no, I don't think that's right.  When pinning is enabled each vCPU 
>>> will be affined to a single host CPU.  What is showing below is what I 
>>> would expect if the instance was using non-dedicated CPUs.
>>> 
>>> To the original poster, you should be using
>>> 
>>> 'hw:cpu_policy': 'dedicated'
>>> 
>>> in your flavor extra-specs to enable CPU pinning.  And you should enable 
>>> the NUMATopologyFilter scheduler filter.
>>> 
>>> Chris
>>> 
>>> 
>>> 
>>> On 12/15/2015 09:23 AM, Arne Wiebalck wrote:
>>>> The pinning seems to have done what you asked for, but you probably
>>>> want to confine your vCPUs to NUMA nodes.
>>>> 
>>>> Cheers,
>>>>  Arne
>>>> 
>>>> 
>>>>> On 15 Dec 2015, at 16:12, Satish Patel <satish@gmail.com> wrote:
>>>>> 
>>>>> Sorry forgot to reply all :)
>>>>> 
>>>>> This is what i am getting
>>>>> 
>>>>> [root@compute-1 ~]# virsh vcpupin instance-0043
>>>>> VCPU: CPU Affinity
>>>>> --
>>>>>   0: 2-3,6-7
>>>>>   1: 2-3,6-7
>>>>> 
>>>>> 
>>>>> Following numa info
>>>>> 
>>>>> [root@compute-1 ~]# numactl --hardware
>>>>> available: 2 nodes (0-1)
>>>>> node 0 cpus: 0 3 5 6
>>>>> node 0 size: 2047 MB
>>>>> node 0 free: 270 MB
>>>>> node 1 cpus: 1 2 4 7
>>>>> node 1 size: 2038 MB
>>>>> node 1 free: 329 MB
>>>>> node distances:
>>>>> node   0   1
>>>>>  0:  10  20
>>>>>  1:  20  10
>>>>> 
>>>>> On Tue, Dec 15, 2015 at 8:36 AM, Arne Wiebalck <arne.wieba...@cern.ch> 
>>>>> wrote:
>>>>>> The pinning we set up goes indeed into the <cputune> block:
>>>>>> 
>>>>>> —>
>>>>>> <vcpu placement='static'>32</vcpu>
>>>>>> <cputune>
>>>>>>   <shares>32768</shares>
>>>>>>   <vcpupin vcpu='0' cpuset='…'/>
>>>>>>   <vcpupin vcpu='1' cpuset='…'/>
>>>>>>   …
>>>>>> <—
>>>>&

Re: [Openstack] CPU pinning question

2015-12-15 Thread Arne Wiebalck
The pinning we set up goes indeed into the <cputune> block:

—>
<vcpu placement='static'>32</vcpu>
<cputune>
  <shares>32768</shares>
  <vcpupin vcpu='0' cpuset='…'/>
  <vcpupin vcpu='1' cpuset='…'/>
  …
<—

What does “virsh vcpupin ” give for your instance?

Cheers,
 Arne


> On 15 Dec 2015, at 13:02, Satish Patel <satish@gmail.com> wrote:
> 
> I am running JUNO version with qemu-kvm-ev-2.1.2-23.el7_1.9.1.x86_64
> on CentOS7.1
> 
> I am trying to configure CPU pinning because my application is cpu
> hungry. this is what i did.
> 
> in /etc/nova/nova.conf
> 
> vcpu_pin_set=2,3,6,7
> 
> 
> I have created aggregated host with pinning=true and created flavor
> with pinning, after that when i start VM on Host following this i can
> see in guest
> 
> ...
> ...
> <vcpu placement='static' cpuset='2-3,6-7'>2</vcpu>
> ...
> ...
> 
> But i am not seeing any <cputune> info.
> 
> Just want to make sure does my pinning working correctly or something
> is wrong. How do i verify my pinning config is correct?
> 
> ___
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

--
Arne Wiebalck
CERN IT

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] CPU pinning question

2015-12-15 Thread Arne Wiebalck
The pinning seems to have done what you asked for, but you probably
want to confine your vCPUs to NUMA nodes.

Cheers,
 Arne


> On 15 Dec 2015, at 16:12, Satish Patel <satish@gmail.com> wrote:
> 
> Sorry forgot to reply all :)
> 
> This is what i am getting
> 
> [root@compute-1 ~]# virsh vcpupin instance-0043
> VCPU: CPU Affinity
> --
>   0: 2-3,6-7
>   1: 2-3,6-7
> 
> 
> Following numa info
> 
> [root@compute-1 ~]# numactl --hardware
> available: 2 nodes (0-1)
> node 0 cpus: 0 3 5 6
> node 0 size: 2047 MB
> node 0 free: 270 MB
> node 1 cpus: 1 2 4 7
> node 1 size: 2038 MB
> node 1 free: 329 MB
> node distances:
> node   0   1
>  0:  10  20
>  1:  20  10
> 
> On Tue, Dec 15, 2015 at 8:36 AM, Arne Wiebalck <arne.wieba...@cern.ch> wrote:
>> The pinning we set up goes indeed into the <cputune> block:
>> 
>> —>
>> <vcpu placement='static'>32</vcpu>
>> <cputune>
>>   <shares>32768</shares>
>>   <vcpupin vcpu='0' cpuset='…'/>
>>   <vcpupin vcpu='1' cpuset='…'/>
>>   …
>> <—
>> 
>> What does “virsh vcpupin ” give for your instance?
>> 
>> Cheers,
>> Arne
>> 
>> 
>>> On 15 Dec 2015, at 13:02, Satish Patel <satish@gmail.com> wrote:
>>> 
>>> I am running JUNO version with qemu-kvm-ev-2.1.2-23.el7_1.9.1.x86_64
>>> on CentOS7.1
>>> 
>>> I am trying to configure CPU pinning because my application is cpu
>>> hungry. this is what i did.
>>> 
>>> in /etc/nova/nova.conf
>>> 
>>> vcpu_pin_set=2,3,6,7
>>> 
>>> 
>>> I have created aggregated host with pinning=true and created flavor
>>> with pinning, after that when i start VM on Host following this i can
>>> see in guest
>>> 
>>> ...
>>> ...
>>> <vcpu placement='static' cpuset='2-3,6-7'>2</vcpu>
>>> ...
>>> ...
>>> 
>>> But i am not seeing any <cputune> info.
>>> 
>>> Just want to make sure does my pinning working correctly or something
>>> is wrong. How do i verify my pinning config is correct?
>>> 
>>> ___
>>> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>> Post to : openstack@lists.openstack.org
>>> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> 
>> --
>> Arne Wiebalck
>> CERN IT
>> 

--
Arne Wiebalck
CERN IT

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack-operators] Running mixed stuff Juno & Kilo , Was: cinder-api with rbd driver ignores ceph.conf

2015-11-18 Thread Arne Wiebalck
We do run Cinder on dedicated controllers, though, so no mix of Kilo and Juno 
services on
the controllers.

Cheers,
 Arne


On 18 Nov 2015, at 13:19, Belmiro Moreira wrote:

Hi Saverio,
we always upgrade one component at a time.
Cinder was one of the first components that we upgraded to kilo,
meaning that other components (glance, nova, ...) were running Juno.

We didn't have any problem with this setup.

Belmiro
CERN

On Tue, Nov 17, 2015 at 6:01 PM, Saverio Proto wrote:
Hello there,

I need to quickly find a workaround to be able to use ceph object map
features for cinder volumes with rbd backend.

However, upgrading everything from Juno to Kilo will require a lot of
time for testing and updating all my puppet modules.

Do you think it is feasible to start updating just cinder to Kilo ?
Will it work with the rest of the Juno components ?

Has someone here experience in running mixed components between Juno and Kilo ?

thanks

Saverio

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] CentOS 7 KVM and QEMU 2.+

2015-11-12 Thread Arne Wiebalck
Hi,

What about the CentOS Virt SIG’s repo at

http://mirror.centos.org/centos-7/7/virt/x86_64/kvm-common/

(and the testing repos at:
http://buildlogs.centos.org/centos/7/virt/x86_64/kvm-common/ )?

These contain newer versions of the qemu-* packages.
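
A minimal repo file pointing at the SIG mirror above would be something like
this (consider enabling GPG checking with the Virt SIG key for production use):

—>
cat > /etc/yum.repos.d/centos-virt-kvm-common.repo <<EOF
[centos-virt-kvm-common]
name=CentOS-7 Virt SIG - kvm-common
baseurl=http://mirror.centos.org/centos-7/7/virt/x86_64/kvm-common/
enabled=1
gpgcheck=0
EOF
yum install qemu-kvm-ev
<—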

Cheers,
 Arne

—
Arne Wiebalck
CERN IT



> On 12 Nov 2015, at 17:54, Leslie-Alexandre DENIS <cont...@ladenis.fr> wrote:
> 
> Hello guys,
> 
> I'm struggling at finding a qemu(-kvm) version up-to-date for CentOS 7 with 
> official repositories
> and additional EPEL.
> 
> Currently the only package named qemu-kvm in these repositories is 
> *qemu-kvm-1.5.3-86.el7_1.8.x86_64*, which is a bit outdated.
> 
> As what I understand QEMU merged the forked qemu-kvm into the base code since 
> 1.3 and the Kernel is shipped with KVM module. Theoretically we can just 
> install qemu 2.+ and load KVM in order to use nova-compute with KVM 
> acceleration, right ?
> 
> The problem is that the packages openstack-nova{-compute} have a dependencies 
> with qemu-kvm. For example Fedora ships qemu-kvm as a subpackage of qemu and 
> it appears to be the same in fact, not the forked project [1].
> 
> 
> 
> In a word, guys how do you manage to have a QEMU v2.+ with latest libvirt on 
> your CentOS computes nodes ?
> Is somebody using the qemu packages from oVirt ? [2]
> 
> 
> Thanks,
> See you
> 
> 
> ---
> 
> [1] https://apps.fedoraproject.org/packages/qemu-kvm
> [2] http://resources.ovirt.org/pub/ovirt-3.5/rpm/el7Server/x86_64/
> 
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [Ceph] Different storage types on different disk types

2015-10-26 Thread Arne Wiebalck
Hi Adam,

We provide various volume types which differ in

- performance (implemented via different IOPS QoS specifications, not via 
different hardware),
- service quality (e.g. volumes on a Ceph pool that is on Diesel-backed 
servers, so via separate hardware),
- a combination of the two,
- geographical location (with a second Ceph instance in another data centre).

I think it is absolutely realistic/manageable to use the same Ceph cluster for 
various use cases.
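
As a rough sketch of how the pieces can be wired up — not necessarily exactly
our setup — two RBD backends on different pools plus matching volume types
(all names invented):

—>
# cinder.conf
[DEFAULT]
enabled_backends = standard,critical

[standard]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes-standard
volume_backend_name = standard

[critical]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes-critical
volume_backend_name = critical

# and per type:
cinder type-create standard
cinder type-key standard set volume_backend_name=standard
<—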

HTH,
 Arne

> On 26 Oct 2015, at 14:02, Adam Lawson  wrote:
> 
> Has anyone deployed Ceph and accommodate different disk/performance 
> requirements? I.e. Saving ephemeral storage and boot volumes on SSD and less 
> important content such as object storage, glance images on SATA or something 
> along those lines?
> 
> Just looking at it's realistic (or discover best practice) on using the same 
> Ceph cluster for both use cases...
> 
> //adam
> 
> Adam Lawson
> 
> AQORN, Inc.
> 427 North Tatnall Street
> Ste. 58461
> Wilmington, Delaware 19801-2230
> Toll-free: (844) 4-AQORN-NOW ext. 101
> International: +1 302-387-4660
> Direct: +1 916-246-2072
> 
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Cinder Juno to Kilo upgrade and DB encodings

2015-09-18 Thread Arne Wiebalck

On 17 Sep 2015, at 18:11, Mathieu Gagné 
<mga...@internap.com<mailto:mga...@internap.com>> wrote:

On 2015-09-17 4:06 AM, Arne Wiebalck wrote:
Hi,

During our Cinder upgrade on CentOS7 from Juno to Kilo, we ran into this bug:
https://bugs.launchpad.net/cinder/+bug/1455726

As there is no fix available from what I see, what we came up with as a 
“solution”
is to explicitly set the character set and the collation in all existing tables in 
the database
before the upgrade:

—>
alter database cinder CHARACTER SET utf8 COLLATE utf8_unicode_ci;
SET foreign_key_checks = 0;
ALTER TABLE `backups` CONVERT TO CHARACTER SET utf8 COLLATE utf8_unicode_ci;
ALTER TABLE `cgsnapshots` CONVERT TO CHARACTER SET utf8 COLLATE utf8_unicode_ci;
ALTER TABLE `consistencygroups` CONVERT TO CHARACTER SET utf8 COLLATE 
utf8_unicode_ci;
ALTER TABLE `encryption` CONVERT TO CHARACTER SET utf8 COLLATE utf8_unicode_ci;
ALTER TABLE `iscsi_targets` CONVERT TO CHARACTER SET utf8 COLLATE 
utf8_unicode_ci;
ALTER TABLE `migrate_version` CONVERT TO CHARACTER SET utf8 COLLATE 
utf8_unicode_ci;
ALTER TABLE `quality_of_service_specs` CONVERT TO CHARACTER SET utf8 COLLATE 
utf8_unicode_ci;
ALTER TABLE `quota_classes` CONVERT TO CHARACTER SET utf8 COLLATE 
utf8_unicode_ci;
ALTER TABLE `quota_usages` CONVERT TO CHARACTER SET utf8 COLLATE 
utf8_unicode_ci;
ALTER TABLE `quotas` CONVERT TO CHARACTER SET utf8 COLLATE utf8_unicode_ci;
ALTER TABLE `reservations` CONVERT TO CHARACTER SET utf8 COLLATE 
utf8_unicode_ci;
ALTER TABLE `services` CONVERT TO CHARACTER SET utf8 COLLATE utf8_unicode_ci;
ALTER TABLE `snapshot_metadata` CONVERT TO CHARACTER SET utf8 COLLATE 
utf8_unicode_ci;
ALTER TABLE `snapshots` CONVERT TO CHARACTER SET utf8 COLLATE utf8_unicode_ci;
ALTER TABLE `transfers` CONVERT TO CHARACTER SET utf8 COLLATE utf8_unicode_ci;
ALTER TABLE `volume_admin_metadata` CONVERT TO CHARACTER SET utf8 COLLATE 
utf8_unicode_ci;
ALTER TABLE `volume_glance_metadata` CONVERT TO CHARACTER SET utf8 COLLATE 
utf8_unicode_ci;
ALTER TABLE `volume_metadata` CONVERT TO CHARACTER SET utf8 COLLATE 
utf8_unicode_ci;
ALTER TABLE `volume_type_extra_specs` CONVERT TO CHARACTER SET utf8 COLLATE 
utf8_unicode_ci;
ALTER TABLE `volume_types` CONVERT TO CHARACTER SET utf8 COLLATE 
utf8_unicode_ci;
ALTER TABLE `volumes` CONVERT TO CHARACTER SET utf8 COLLATE utf8_unicode_ci;
SET foreign_key_checks = 1;
<—

Note that in our case the databases default character set was already utf8 
everywhere, while the collation
was utf8_general_ci. With that conversion the upgrade seems to work fine in our 
tests.

Before we retry the upgrade: do people here think that this is a reasonable
approach, or will it cause other issues? Are there alternative approaches?


In our case, we ran this command before upgrading:

ALTER DATABASE cinder CHARACTER SET utf8 COLLATE utf8_general_ci;

See this thread about the same problem:
http://lists.openstack.org/pipermail/openstack/2015-August/013599.html

And the proposed solution:
http://lists.openstack.org/pipermail/openstack/2015-August/013601.html

I suspect that people using Puppet to deploy OpenStack are encountering
this issue due to this change:
https://review.openstack.org/#/c/175991/

We thought this change wouldn't affect anyone but it looks it does.


Thanks a lot for the pointers, Mathieu. I’ll give that a try.

Cheers,
 Arne

—
Arne Wiebalck
CERN IT






___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Cinder Juno to Kilo upgrade and DB encodings

2015-09-17 Thread Arne Wiebalck
Hi,

During our Cinder upgrade on CentOS7 from Juno to Kilo, we ran into this bug:
https://bugs.launchpad.net/cinder/+bug/1455726

As there is no fix available from what I see, what we came up with as a 
“solution”
is to explicitly set the character set and the collation in all existing tables in 
the database
before the upgrade:

—>
alter database cinder CHARACTER SET utf8 COLLATE utf8_unicode_ci;
SET foreign_key_checks = 0;
ALTER TABLE `backups` CONVERT TO CHARACTER SET utf8 COLLATE utf8_unicode_ci;
ALTER TABLE `cgsnapshots` CONVERT TO CHARACTER SET utf8 COLLATE utf8_unicode_ci;
ALTER TABLE `consistencygroups` CONVERT TO CHARACTER SET utf8 COLLATE utf8_unicode_ci;
ALTER TABLE `encryption` CONVERT TO CHARACTER SET utf8 COLLATE utf8_unicode_ci;
ALTER TABLE `iscsi_targets` CONVERT TO CHARACTER SET utf8 COLLATE utf8_unicode_ci;
ALTER TABLE `migrate_version` CONVERT TO CHARACTER SET utf8 COLLATE utf8_unicode_ci;
ALTER TABLE `quality_of_service_specs` CONVERT TO CHARACTER SET utf8 COLLATE utf8_unicode_ci;
ALTER TABLE `quota_classes` CONVERT TO CHARACTER SET utf8 COLLATE utf8_unicode_ci;
ALTER TABLE `quota_usages` CONVERT TO CHARACTER SET utf8 COLLATE utf8_unicode_ci;
ALTER TABLE `quotas` CONVERT TO CHARACTER SET utf8 COLLATE utf8_unicode_ci;
ALTER TABLE `reservations` CONVERT TO CHARACTER SET utf8 COLLATE utf8_unicode_ci;
ALTER TABLE `services` CONVERT TO CHARACTER SET utf8 COLLATE utf8_unicode_ci;
ALTER TABLE `snapshot_metadata` CONVERT TO CHARACTER SET utf8 COLLATE utf8_unicode_ci;
ALTER TABLE `snapshots` CONVERT TO CHARACTER SET utf8 COLLATE utf8_unicode_ci;
ALTER TABLE `transfers` CONVERT TO CHARACTER SET utf8 COLLATE utf8_unicode_ci;
ALTER TABLE `volume_admin_metadata` CONVERT TO CHARACTER SET utf8 COLLATE utf8_unicode_ci;
ALTER TABLE `volume_glance_metadata` CONVERT TO CHARACTER SET utf8 COLLATE utf8_unicode_ci;
ALTER TABLE `volume_metadata` CONVERT TO CHARACTER SET utf8 COLLATE utf8_unicode_ci;
ALTER TABLE `volume_type_extra_specs` CONVERT TO CHARACTER SET utf8 COLLATE utf8_unicode_ci;
ALTER TABLE `volume_types` CONVERT TO CHARACTER SET utf8 COLLATE utf8_unicode_ci;
ALTER TABLE `volumes` CONVERT TO CHARACTER SET utf8 COLLATE utf8_unicode_ci;
SET foreign_key_checks = 1;
<—

Note that in our case the database's default character set was already utf8 
everywhere, while the collation
was utf8_general_ci. With that conversion, the upgrade seems to work fine in our 
tests.

Before we retry the upgrade: do people here think that this is a reasonable 
approach, or will it cause
other issues? Are there alternative approaches?
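
For completeness, applying the conversion and then running the schema migration
boils down to something like this (a sketch only; the dump/script file names are
placeholders and a MySQL/MariaDB backend is assumed):

```
# stop the cinder services first, keep a safety copy, then convert and migrate
mysqldump cinder > cinder-pre-kilo-backup.sql
mysql cinder < convert_cinder_to_utf8_unicode.sql   # the ALTER statements shown above
cinder-manage db sync                               # run the Juno -> Kilo migrations
```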

Thanks!
 Arne

—
Arne Wiebalck
CERN IT





___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack] [Nova][Cinder] Live-migration omits volume QoS?

2015-06-29 Thread Arne Wiebalck
Hi,

When live-migrating an instance with an attached volume, it seems that the QoS 
associated with
the volume type is not applied on the target host. The correct iotune block is 
in the target XML
and a reboot fixes things, but I guess that’s not the idea. Is this a known 
issue? Is there anything I missed
in the configuration?
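
In case it helps with reproducing this: the effect can be checked (and, as a
stop-gap, corrected) directly via libvirt, roughly like this (a sketch; the domain
name, device and limit values are placeholders):

```
# inspect what the target hypervisor applied after the live-migration
virsh dumpxml instance-0000abcd | grep -A 6 '<iotune>'
virsh blkdeviotune instance-0000abcd vdb            # limits currently in effect
# re-apply the limits from the volume type's qos_specs without rebooting the guest
virsh blkdeviotune instance-0000abcd vdb --total-bytes-sec 104857600 --total-iops-sec 500 --live
```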

TIA,
 Arne

—
Arne Wiebalck
CERN IT
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [openstack-dev] [Cinder] Static Ceph mon connection info prevents VM restart

2015-05-12 Thread Arne Wiebalck
Here’s Dan’s answer for the exact procedure (he replied, but it bounced):


We have two clusters with mons behind two DNS aliases:

 cephmon.cern.ch: production cluster with five mons A, B, C, D, E

 cephmond.cern.ch: testing cluster with five mons X, Y, Z


The procedure was:

 1. Stop mon on host X. Remove from DNS alias cephmond. Remove from mon map.

 2. Stop mon on host A. Remove from DNS alias cephmon. Remove from mon map.

 3. Add mon on host X to cephmon cluster. mkfs the new mon, start the ceph-mon 
process; after quorum add it to the cephmon alias.

 4. Add mon on host A to cephmond cluster. mkfs the new mon, start the ceph-mon 
process; after quorum add it to the cephmond alias.

 5. Repeat for B/Y and C/Z.



In the end, three of the hosts which were previously running cephmon mons were 
then running cephmond mons. Hence, when a client comes with a config pointing 
to an old mon, it gets an authentication denied and stops there: it 
doesn't try the next IP in the list of mons. As a workaround we moved all the 
cephmond mons to port 6790, so that the Cinder clients fail over to one of 
the two cephmon mons which have not changed.
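
For reference, steps 1 and 3 for a single host roughly translate into the commands
below (a sketch only; the mon names, the keyring/monmap paths and the systemd unit
names are assumptions about the setup, not the exact commands that were used):

```
# step 1: retire mon X from the testing cluster (and drop it from the cephmond alias in DNS)
systemctl stop ceph-mon@X
ceph mon remove X

# step 3: re-create X as a mon of the production cluster and start it
ceph-mon --mkfs -i X --monmap /tmp/cephmon.monmap --keyring /tmp/cephmon.mon.keyring
systemctl start ceph-mon@X      # once it has joined quorum, add X to the cephmon alias
```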



Cheers, Dan



On 12 May 2015, at 01:46, Josh Durgin jdur...@redhat.com wrote:

 On 05/08/2015 12:41 AM, Arne Wiebalck wrote:
 Hi Josh,
 
 In our case adding the monitor hostnames (alias) would have made only a
 slight difference:
 as we moved the servers to another cluster, the client received an
 authorisation failure rather
 than a connection failure and did not try to fail over to the next IP in
 the list. So, adding the
 alias to the list would have improved the chances to hit a good monitor, but
 it would not have
 eliminated the problem.
 
 Could you provide more details on the procedure you followed to move
 between clusters? I missed the separate clusters part initially, and
 thought you were simply replacing the monitor nodes.
 
 I’m not sure storing IPs in the nova database is a good idea in general.
 Replacing (not adding)
 these by the hostnames is probably better. Another approach may be to
 generate this part of
 connection_info (and hence the XML) dynamically from the local ceph.conf
 when the connection
 is created. I think a mechanism like this is for instance used to select
 a free port for the vnc
 console when the instance is started.
 
 Yes, with different clusters only using the hostnames is definitely
 the way to go. I agree that keeping the information in nova's db may
 not be the best idea. It is handy to allow nova to use different
 clusters from cinder, so I'd prefer not generating the connection info
 locally. The qos_specs are also part of connection_info, and if changed
 they would have a similar problem of not applying the new value to
 existing instances, even after reboot. Maybe nova should simply refresh
 the connection info each time it uses a volume.
 
 Josh
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Static Ceph mon connection info prevents VM restart

2015-05-08 Thread Arne Wiebalck
Hi Josh,

In our case adding the monitor hostnames (alias) would have made only a slight 
difference:
as we moved the servers to another cluster, the client received an 
authorisation failure rather
than a connection failure and did not try to fail over to the next IP in the 
list. So, adding the
alias to the list would have improved the chances to hit a good monitor, but it
would not have
eliminated the problem.

I’m not sure storing IPs in the nova database is a good idea in general. 
Replacing (not adding)
these by the hostnames is probably better. Another approach may be to generate 
this part of
connection_info (and hence the XML) dynamically from the local ceph.conf when 
the connection
is created. I think a mechanism like this is for instance used to select a free 
port for the vnc
console when the instance is started.
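
To illustrate the difference (the host name is just an example): what the clients
are configured with and what the driver expands it into can be compared directly:

```
grep mon_host /etc/ceph/ceph.conf   # e.g. "mon_host = cephmon.cern.ch", the stable alias
ceph mon dump                       # the monmap, i.e. the individual mon IPs that end up in connection_info
```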

Cheers,
 Arne

—
Arne Wiebalck
CERN IT


On 08 May 2015, at 05:37, David Medberry 
openst...@medberry.net wrote:

Josh,

Certainly in our case the monitor hosts (in addition to IPs) would have 
made a difference.

On Thu, May 7, 2015 at 3:21 PM, Josh Durgin 
jdur...@redhat.com wrote:
Hey folks, thanks for filing a bug for this:

https://bugs.launchpad.net/cinder/+bug/1452641

Nova stores the volume connection info in its db, so updating that
would be a workaround to allow restart/migration of vms to work.
Otherwise running vms shouldn't be affected, since they'll notice any
new or deleted monitors through their existing connection to the
monitor cluster.

Perhaps the most general way to fix this would be for cinder to return
any monitor hosts listed in ceph.conf (as they are listed, so they may
be hostnames or ips) in addition to the ips from the current monmap
(the current behavior).

That way an out of date ceph.conf is less likely to cause problems,
and multiple clusters could still be used with the same nova node.

Josh

On 05/06/2015 12:46 PM, David Medberry wrote:
Hi Arne,

We've had this EXACT same issue.

I don't know of a way to force an update as you are basically pulling
the rug out from under a running instance. I don't know if it is
possible/feasible to update the virsh xml in place and then migrate to
get it to actually use that data. (I think we tried that to no avail.)
(dumpxml => massage ceph mons => import xml)

If you find a way, let me know, and that's part of the reason I'm
replying so that I stay on this thread. NOTE: We did this on icehouse.
Haven't tried since upgrading to Juno but I don't note any change
therein that would mitigate this. So I'm guessing Liberty/post-Liberty
for a real fix.
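
For the record, the "dumpxml => massage => import" attempt mentioned above would
look roughly like this (a sketch; the domain name and addresses are placeholders,
and the re-defined XML only takes effect on the next cold start or migration):

```
virsh dumpxml instance-0000abcd > /tmp/instance-0000abcd.xml
sed -i 's/10\.0\.0\.11/10.0.0.21/g' /tmp/instance-0000abcd.xml   # swap stale mon IPs for current ones
virsh define /tmp/instance-0000abcd.xml                          # re-register the massaged XML
```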



On Wed, May 6, 2015 at 12:57 PM, Arne Wiebalck 
arne.wieba...@cern.ch wrote:

Hi,

As we swapped a fraction of our Ceph mon servers between the
pre-production and production cluster
— something we considered to be transparent as the Ceph config
points to the mon alias—, we ended
up in a situation where VMs with volumes attached were not able to
boot (with a probability that matched
the fraction of the servers moved between the Ceph instances).

We found that the reason for this is the connection_info in
block_device_mapping which contains the
IP addresses of the mon servers as extracted by the rbd driver in
initialize_connection() at the moment
when the connection is established. From what we see, however, this
information is not updated as long
as the connection exists, and will hence be re-applied without
checking even when the XML is recreated.

The idea to extract the mon servers by IP from the mon map was
probably to get all mon servers (rather
than only one from a load-balancer or an alias), but while our
current scenario may be special, we will face
a similar problem the day the Ceph mons need to be replaced. And
that makes it a more general issue.

For our current problem:
Is there a user-transparent way to force an update of that
connection information? (Apart from fiddling
with the database entries, of course.)

For the general issue:
Would it be possible to simply use the information from the
ceph.conf file directly (an alias in our case)
throughout the whole stack to avoid hard-coding IPs that will be
obsolete one day?

Thanks!
  Arne

—
Arne Wiebalck
CERN IT
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

[openstack-dev] [Cinder] Static Ceph mon connection info prevents VM restart

2015-05-06 Thread Arne Wiebalck
Hi,

As we swapped a fraction of our Ceph mon servers between the pre-production and 
production cluster
— something we considered to be transparent as the Ceph config points to the 
mon alias—, we ended
up in a situation where VMs with volumes attached were not able to boot (with a 
probability that matched
the fraction of the servers moved between the Ceph instances).

We found that the reason for this is the connection_info in 
block_device_mapping which contains the
IP addresses of the mon servers as extracted by the rbd driver in 
initialize_connection() at the moment
when the connection is established. From what we see, however, this information 
is not updated as long
as the connection exists, and will hence be re-applied without checking even 
when the XML is recreated. 

The idea to extract the mon servers by IP from the mon map was probably to get 
all mon servers (rather
than only one from a load-balancer or an alias), but while our current scenario 
may be special, we will face
a similar problem the day the Ceph mons need to be replaced. And that makes it 
a more general issue.

For our current problem:
Is there a user-transparent way to force an update of that connection 
information? (Apart from fiddling
with the database entries, of course.)

For the general issue:
Would it be possible to simply use the information from the ceph.conf file 
directly (an alias in our case)
throughout the whole stack to avoid hard-coding IPs that will be obsolete one 
day?
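
For what it's worth, the "fiddling with the database entries" mentioned above amounts
to inspecting (and, very carefully, rewriting) the cached connection_info in Nova,
roughly like this (a sketch; the volume UUID and the corrected JSON are placeholders):

```
# show the cached mon addresses for a given volume attachment
mysql nova -e "SELECT connection_info FROM block_device_mapping \
               WHERE volume_id='<VOLUME_UUID>' AND deleted=0\G"
# after fixing the hosts/ports entries in that JSON, write it back, e.g.:
# mysql nova -e "UPDATE block_device_mapping SET connection_info='<FIXED_JSON>' \
#                WHERE volume_id='<VOLUME_UUID>' AND deleted=0;"
```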

Thanks!
 Arne

—
Arne Wiebalck
CERN IT
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder] volume / host coupling

2015-01-08 Thread Arne Wiebalck
Hi,

The fact that volume requests (in particular deletions) are coupled with 
certain Cinder hosts is not ideal from an operational perspective:
if the node has meanwhile disappeared, e.g. retired, the deletion gets stuck 
and can only be unblocked by changing the database. Some
people apparently use the ‘host’ option in cinder.conf to make the hosts 
indistinguishable, but this creates problems in other places.
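
To make the manual unblocking concrete, this is roughly what is needed today when
the original host is gone (a sketch; the UUID and the host string are placeholders,
and the exact host@backend format depends on the release):

```
VOLUME_UUID='<uuid-of-the-stuck-volume>'              # placeholder
mysql cinder -e "UPDATE volumes SET host='cinder02@rbd' WHERE id='$VOLUME_UUID';"
cinder reset-state --state available $VOLUME_UUID     # clear the stuck 'deleting' state
cinder delete $VOLUME_UUID                            # now picked up by the surviving host
```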

From what I see, even for backends that would support it (such as Ceph), Cinder 
currently does not provide means to ensure that any of
the hosts capable of performing a volume operation would be assigned the 
request in case the original/desired one is no longer available,
right?

If that is correct, how about changing the scheduling of delete operation to 
use the same logic as create operations, that is pick any of the
available hosts, rather than the one which created a volume in the first place 
(for backends where that is possible, of course)?

Thanks!
 Arne 

—
Arne Wiebalck
CERN IT
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] volume / host coupling

2015-01-08 Thread Arne Wiebalck
Hmm. Not sure how widespread installations with multiple Ceph backends are 
where the
Cinder hosts have access to only one of the backends (which is what you assume, 
right?)
But, yes, if the volume type names are also the same (is that also needed for 
this to be a
problem?), this will be an issue ...

So, how about providing the information the scheduler does not have by 
introducing an
additional tag to identify ‘equivalent’ backends, similar to the way some 
people already
use the ‘host’ option?

Thanks!
 Arne


On 08 Jan 2015, at 15:11, Duncan Thomas 
duncan.tho...@gmail.com wrote:

The problem is that the scheduler doesn't currently have enough info to know 
which backends are 'equivalent' and which aren't. e.g. If you have 2 ceph 
clusters as cinder backends, they are indistinguishable from each other.

On 8 January 2015 at 12:14, Arne Wiebalck 
arne.wieba...@cern.ch wrote:
Hi,

The fact that volume requests (in particular deletions) are coupled with 
certain Cinder hosts is not ideal from an operational perspective:
if the node has meanwhile disappeared, e.g. retired, the deletion gets stuck 
and can only be unblocked by changing the database. Some
people apparently use the ‘host’ option in cinder.conf to make the hosts 
indistinguishable, but this creates problems in other places.

From what I see, even for backends that would support it (such as Ceph), Cinder 
currently does not provide means to ensure that any of
the hosts capable of performing a volume operation would be assigned the 
request in case the original/desired one is no longer available,
right?

If that is correct, how about changing the scheduling of delete operation to 
use the same logic as create operations, that is pick any of the
available hosts, rather than the one which created a volume in the first place 
(for backends where that is possible, of course)?

Thanks!
 Arne

—
Arne Wiebalck
CERN IT
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Duncan Thomas
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] volume / host coupling

2015-01-08 Thread Arne Wiebalck
Hi Jordan,

As Duncan pointed out, there may be issues if you have multiple backends
and indistinguishable nodes (which you could probably avoid by separating
the hosts per backend and using different “host” flags for each set).

But also if you have only one backend: the “host” flag will enter the ‘services’
table and render the host column more or less useless. I imagine this has an
impact on things using the services table, such as “cinder-manage” (what
does your “cinder-manage service list” output look like? :-), and it may make it
harder to tell if the individual services are doing OK, or to control them.

I haven’t run Cinder with identical “host” flags in production, but I imagine
there may be other areas which are not happy about indistinguishable hosts.

Arne


On 08 Jan 2015, at 16:50, Jordan Pittier 
jordan.pitt...@scality.com wrote:

Hi,
Some people apparently use the ‘host’ option in cinder.conf to make the hosts 
indistinguishable, but this creates problems in other places.
I use shared storage mounted on several cinder-volume nodes, with the host flag
set the same everywhere. Never ran into problems so far. Could you elaborate on
“this creates problems in other places”, please?

Thanks !
Jordan

On Thu, Jan 8, 2015 at 3:40 PM, Arne Wiebalck 
arne.wieba...@cern.ch wrote:
Hmm. Not sure how widespread installations with multiple Ceph backends are 
where the
Cinder hosts have access to only one of the backends (which is what you assume, 
right?)
But, yes, if the volume type names are also the same (is that also needed for 
this to be a
problem?), this will be an issue ...

So, how about providing the information the scheduler does not have by 
introducing an
additional tag to identify ‘equivalent’ backends, similar to the way some 
people already
use the ‘host’ option?

Thanks!
 Arne


On 08 Jan 2015, at 15:11, Duncan Thomas 
duncan.tho...@gmail.com wrote:

The problem is that the scheduler doesn't currently have enough info to know 
which backends are 'equivalent' and which aren't. e.g. If you have 2 ceph 
clusters as cinder backends, they are indistinguishable from each other.

On 8 January 2015 at 12:14, Arne Wiebalck 
arne.wieba...@cern.ch wrote:
Hi,

The fact that volume requests (in particular deletions) are coupled with 
certain Cinder hosts is not ideal from an operational perspective:
if the node has meanwhile disappeared, e.g. retired, the deletion gets stuck 
and can only be unblocked by changing the database. Some
people apparently use the ‘host’ option in cinder.conf to make the hosts 
indistinguishable, but this creates problems in other places.

From what I see, even for backends that would support it (such as Ceph), Cinder 
currently does not provide means to ensure that any of
the hosts capable of performing a volume operation would be assigned the 
request in case the original/desired one is no longer available,
right?

If that is correct, how about changing the scheduling of delete operation to 
use the same logic as create operations, that is pick any of the
available hosts, rather than the one which created a volume in the first place 
(for backends where that is possible, of course)?

Thanks!
 Arne

—
Arne Wiebalck
CERN IT
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Duncan Thomas
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[Openstack-operators] [Cinder] volume / host relation

2015-01-07 Thread Arne Wiebalck
Hi,

Will a Cinder volume creation request ever time out and be rescheduled in case 
the host with the volume service it has been scheduled to is not consuming the 
corresponding message?

Similarly: if the host the volume has been created on and to which later the 
deletion request is scheduled has disappeared (e.g. meanwhile retired), will 
the scheduler try to schedule to another host?

From what I see, the answer to both of these questions seems to be ’no'. Things 
can get stuck in these scenarios and can only be unblocked by resurrecting the 
down host or by manually changing the Cinder database.

Is my understanding correct?

Is there a way to tag hosts so that any of my Cinder hosts can pick up the 
creation (and in particular deletion) message? I tried with the “host” 
parameter in cinder.conf, which seems to “work”, but is probably not meant for 
this, in particular as it touches the services database and makes the hosts 
indistinguishable
(which in turn breaks cinder-manage).
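
Concretely, the workaround boils down to setting the same host value on every
cinder-volume node, e.g. (a sketch; the value, and the use of crudini and the RDO
service name, are assumptions about the setup):

```
crudini --set /etc/cinder/cinder.conf DEFAULT host rbd-volumes
systemctl restart openstack-cinder-volume
```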

How do people deal with this issue?

Thanks!
 Arne

—
Arne Wiebalck
CERN IT



 
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] cinder service-delete?

2014-12-18 Thread Arne Wiebalck
Hi,

When migrating our Cinder service to new hosts, I noticed that there seems
to be no “cinder service-delete” subcommand (RDO Icehouse and Juno).

I couldn’t find a bug or blueprint entry for this … is that worth opening one or
did I miss something?
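
In the meantime, stale service entries can apparently only be hidden by marking
them deleted directly in the database, e.g. (a sketch; the host name is a placeholder):

```
# cinder soft-deletes rows by setting 'deleted' (to the row id) and 'deleted_at'
mysql cinder -e "UPDATE services SET deleted=id, deleted_at=NOW() \
                 WHERE host='retired-host' AND topic='cinder-volume';"
```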

Thanks,
 Arne

—
Arne Wiebalck
CERN IT



___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators