Re: [Openstack-operators] Please give your opinion about "openstack server migrate" command.

2017-02-16 Thread Marcus Furlong
On 17 February 2017 at 17:05, Marcus Furlong  wrote:
> On 17 February 2017 at 16:47, Rikimaru Honjo
>  wrote:
>> Hi all,
>>
>> I found and reported an unkind behavior of the "openstack server migrate"
>> command while maintaining my environment.[1]
>> But I'm wondering which solution is better.
>> Do you have opinions, from an operator's point of view, about the solutions
>> I proposed?
>> I will submit a patch according to your opinions if I receive them.
>>
>> [1]https://bugs.launchpad.net/python-openstackclient/+bug/1662755
>> ---
>> [Actual]
>> If the user runs "openstack server migrate --block-migration ",
>> the openstack client calls the cold migration API.
>> The "--block-migration" option is ignored if the user doesn't specify "--live".
>>
>> But, IMO, this is unkind.
>> This causes an unexpected operation for the operator.
>
> +1 This has confused/annoyed me before.

And having said that, the nova client itself also has some confusing
verbs which require explanation:

http://www.danplanet.com/blog/2016/03/03/evacuate-in-nova-one-command-to-confuse-us-all/
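
To make the distinction concrete, here is a rough sketch of the invocations in
question (option names as of the client version current at the time of writing;
<server> and <target-host> are placeholders, so check "openstack server migrate
--help" on your own installation):

  # cold migration (the instance is stopped and moved to another host)
  openstack server migrate <server>

  # live block migration; --block-migration only takes effect together with --live
  openstack server migrate --live <target-host> --block-migration <server>

  # the behaviour the bug describes: without --live, --block-migration is
  # silently ignored and a cold migration is performed instead
  openstack server migrate --block-migration <server>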

>>
>> P.S.
>> "--shared-migration" option has same issue.
>
> For the shared migration case, there is also this bug:
>
>https://bugs.launchpad.net/nova/+bug/1459782
>
> which, if fixed/implemented, would negate the need for
> --shared-migration? It would also fix "nova resize" on shared
> storage.
>
> Cheers,
> Marcus.

-- 
Marcus Furlong

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Please give your opinion about "openstack server migrate" command.

2017-02-16 Thread Marcus Furlong
On 17 February 2017 at 16:47, Rikimaru Honjo
 wrote:
> Hi all,
>
> I found and reported an unkind behavior of the "openstack server migrate"
> command while maintaining my environment.[1]
> But I'm wondering which solution is better.
> Do you have opinions, from an operator's point of view, about the solutions
> I proposed?
> I will submit a patch according to your opinions if I receive them.
>
> [1]https://bugs.launchpad.net/python-openstackclient/+bug/1662755
> ---
> [Actual]
> If the user runs "openstack server migrate --block-migration ",
> the openstack client calls the cold migration API.
> The "--block-migration" option is ignored if the user doesn't specify "--live".
>
> But, IMO, this is unkind.
> This causes an unexpected operation for the operator.

+1 This has confused/annoyed me before.

>
> P.S.
> "--shared-migration" option has same issue.

For the shared migration case, there is also this bug:

   https://bugs.launchpad.net/nova/+bug/1459782

which, if fixed/implemented, would negate the need for
--shared-migration? It would also fix "nova resize" on shared
storage.

Cheers,
Marcus.

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Can Not authenticate Neutron node

2017-02-16 Thread khansa A. Mohamed
Hi all,

Can anyone help me with the following error?
"An auth plugin is required to fetch a token"
It occurs when running "neutron net-list" on the controller node.
Note: we have deployed VIO 3 (VMware Integrated OpenStack).
I've checked the Keystone authentication details configured in
/etc/neutron/neutron.conf
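
For what it's worth, one common cause of "An auth plugin is required to fetch a
token" from the CLI is that no credentials are loaded in the shell, independent
of what is in neutron.conf. A minimal sketch of the environment the client
expects to have sourced first (host, user and password are placeholders;
v2.0-style variables shown, use OS_PROJECT_NAME/OS_USER_DOMAIN_NAME etc. if
your Keystone is v3):

  export OS_AUTH_URL=http://<keystone-host>:5000/v2.0
  export OS_USERNAME=admin
  export OS_PASSWORD=<password>
  export OS_TENANT_NAME=admin

  neutron net-list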

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] ceph vs gluster for block

2017-02-16 Thread Alex Hübner
Gluster for block storage is definitely not a good choice, especially for
VMs and OpenStack in general. Also, there are rumors all over the place
that Red Hat will start to "phase out" Gluster in favor of CephFS, the "last
frontier" of the so-called "unicorn storage" (Ceph does everything). But
when it comes to block, there's no better choice than Ceph for every single
scenario I could think of.

[]'s
Hubner

On Thu, Feb 16, 2017 at 4:39 PM, Mike Smith  wrote:

> Same experience here.  Gluster 'failover' time was an issue for us as well
> (rebooting one of the Gluster nodes caused unacceptable locking/timeouts for
> a period of time).  Ceph has worked well for us for nova ephemeral storage,
> Cinder volumes and Glance.  Just make sure you stay well ahead of
> running low on disk space!  You never want to run low on a Ceph cluster,
> because it will block writes until you add more disks/OSDs.
>
> Mike Smith
> Lead Cloud Systems Architect
> Overstock.com 
>
>
>
> On Feb 16, 2017, at 11:30 AM, Jonathan Abdiel Gonzalez Valdebenito <
> jonathan.abd...@gmail.com> wrote:
>
> Hi Vahric!
>
> We tested GlusterFS a few years ago and the latency was high, IOPS were poor,
> and every node showed high CPU usage; that was a few years ago, though.
>
> We ended up, after a lot of fio tests, going with a Ceph cluster, so my advice
> is to use Ceph without any doubts.
>
> Regards,
>
> On Thu, Feb 16, 2017 at 1:32 PM Vahric Muhtaryan 
> wrote:
>
>> Hello All ,
>>
>> For a long time we are testing Ceph and today we also want to test
>> GlusterFS
>>
>> Interestingly, with a single client we cannot get the IOPS we get from the
>> Ceph cluster (from Ceph we get a maximum of 35K IOPS for 100% random
>> write, while Gluster gave us 15-17K).
>> But interestingly, when we add a second client to the test it gets the same
>> IOPS as the first client, meaning overall performance is doubled. We couldn't
>> test with more clients, but another interesting thing is that GlusterFS does
>> not eat CPU like Ceph does; only a few percent of CPU is used.
>>
>> I would like to ask: with OpenStack, does anybody use GlusterFS for instance
>> workloads?
>> Has anybody used both of them in production and can compare, or share
>> their experience?
>>
>> Regards
>> Vahric Muhtaryan
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] ceph vs gluster for block

2017-02-16 Thread Mike Smith
Same experience here.  Gluster 'failover' time was an issue for us as well
(rebooting one of the Gluster nodes caused unacceptable locking/timeouts for a
period of time).  Ceph has worked well for us for nova ephemeral storage, Cinder
volumes and Glance.  Just make sure you stay well ahead of running
low on disk space!  You never want to run low on a Ceph cluster, because it will
block writes until you add more disks/OSDs.
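
For the disk-space point, a minimal sketch of the checks and thresholds involved
(the ratio values below are the usual defaults of this era; treat them as an
assumption and verify against your own cluster):

  # watch overall and per-OSD utilisation, and any nearfull/full warnings
  ceph df
  ceph osd df tree
  ceph health detail

  # ceph.conf thresholds at which the cluster warns and then blocks writes;
  # plan to add OSDs well before the nearfull ratio is reached
  mon_osd_nearfull_ratio = 0.85
  mon_osd_full_ratio = 0.95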

Mike Smith
Lead Cloud Systems Architect
Overstock.com



On Feb 16, 2017, at 11:30 AM, Jonathan Abdiel Gonzalez Valdebenito 
> wrote:

Hi Vahric!

We tested GlusterFS a few years ago and the latency was high, IOPS were poor, and
every node showed high CPU usage; that was a few years ago, though.

We ended up, after a lot of fio tests, going with a Ceph cluster, so my advice is
to use Ceph without any doubts.

Regards,

On Thu, Feb 16, 2017 at 1:32 PM Vahric Muhtaryan 
> wrote:
Hello All ,

For a long time we are testing Ceph and today we also want to test GlusterFS

Interestingly, with a single client we cannot get the IOPS we get from the Ceph
cluster (from Ceph we get a maximum of 35K IOPS for 100% random write, while
Gluster gave us 15-17K).
But interestingly, when we add a second client to the test it gets the same IOPS
as the first client, meaning overall performance is doubled. We couldn't test
with more clients, but another interesting thing is that GlusterFS does not eat
CPU like Ceph does; only a few percent of CPU is used.

I would like to ask: with OpenStack, does anybody use GlusterFS for instance
workloads?
Has anybody used both of them in production and can compare, or share their experience?

Regards
Vahric Muhtaryan

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] ceph vs gluster for block

2017-02-16 Thread Vahric Muhtaryan
Hello All , 

For a long time we are testing Ceph and today we also want to test GlusterFS

Interestingly, with a single client we cannot get the IOPS we get from the
Ceph cluster (from Ceph we get a maximum of 35K IOPS for 100% random
write, while Gluster gave us 15-17K).
But interestingly, when we add a second client to the test it gets the same IOPS
as the first client, meaning overall performance is doubled. We couldn't test
with more clients, but another interesting thing is that GlusterFS does not eat
CPU like Ceph does; only a few percent of CPU is used.

I would like to ask: with OpenStack, does anybody use GlusterFS for instance
workloads?
Has anybody used both of them in production and can compare, or share
their experience?

Regards
Vahric Muhtaryan


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Openstack Ceph Backend and Performance Information Sharing

2017-02-16 Thread Vahric Muhtaryan
Hello All , 

For a long time we have been testing Ceph, from Firefly to Kraken, and have
tried to optimise many of the usual things: tcmalloc versions 2.1 and 2.4,
jemalloc, setting debug levels to 0/0, op_tracker and so on. I believe that
with our hardware we have almost reached the end of the road.

Some vendor tests confused us a lot, like Samsung's
http://www.samsung.com/semiconductor/support/tools-utilities/All-Flash-Array-Reference-Design/downloads/Samsung_NVMe_SSDs_and_Red_Hat_Ceph_Storage_CS_20160712.pdf
and the Dell PowerEdge R730xd Performance and Sizing Guide for Red Hat ...

and from Intel
http://www.flashmemorysummit.com/English/Collaterals/Proceedings/2015/20150813_S303E_Zhang.pdf

In the end we tested with 3 replicas (most vendors actually test with 2, but I
believe that is the wrong way to do it: when a failure happens you have to wait
300 seconds, which is configurable, and from blog posts we understood that
sometimes OSDs can go down and come back up again, so I believe it is very
important to set that number carefully, but we do not want instances to freeze),
using the configuration below, with 4K, fully random, write-only I/O.

I read a lot about the OSD process eating huge amounts of CPU; yes it does, and
we know very well that we cannot get the total IOPS capacity of the raw SSD
drives.

My question is: can you please share test or production results from the same
or a similar configuration? The key is write performance, not 70% read / 30%
write or read-only workloads ...

Hardware:

6 x nodes
Each node has:
2-socket CPUs at 1.8 GHz, 16 cores in total
3 SSDs + 12 HDDs (the SSDs are used as journals, 4 HDDs per SSD)
RAID cards configured as RAID 0
We did not see any performance difference with the RAID card's JBOD mode, so
we continued with RAID 0.
The RAID card's write-back cache is also enabled, because it adds extra IOPS too!

Achieved IOPS: 35K (single client)
We tested up to 10 clients, and Ceph shares the load fairly, at almost 4K IOPS
for each client.

Test command: fio --randrepeat=1 --ioengine=libaio --direct=1
--gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=256 --size=1G
--numjobs=8 --readwrite=randwrite --group_reporting
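
For comparison, a sketch of the same workload pointed directly at an RBD image
through fio's rbd engine, which takes the guest filesystem out of the path
(pool, image and client names are placeholders; the image must exist first and
fio must be built with rbd support):

  rbd create --size 10240 rbd/fio-test
  fio --ioengine=rbd --clientname=admin --pool=rbd --rbdname=fio-test \
      --randrepeat=1 --gtod_reduce=1 --name=rbd-test --bs=4k \
      --iodepth=256 --size=1G --numjobs=8 --readwrite=randwrite --group_reporting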


Regards
Vahric Muhtaryan


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [Large Deployment Team] Meeting today at 14:00 UTC

2017-02-16 Thread Matt Van Winkle
Hey LDTers,

Snuck up on us, but our Feb meeting is later today.  See you all in 
#openstack-operators.

Thanks!
VW

Sent from my iPhone
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Instances are not creating after adding 3 additional nova nodes

2017-02-16 Thread Saverio Proto
The three new compute nodes that you added are empty, so most likely
the new instances are being scheduled to those three (3 attempts) and
something is going wrong there.

With admin rights, run:
openstack server show <uuid>

This should tell you which compute node the instance was scheduled to. Check
the nova-compute.log on that node; you should find some debug info there.
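
A minimal sketch of that workflow, using the instance UUID from the error below
(the field names are the admin-visible extended attributes; the log path is the
common default and may differ on your distribution):

  openstack server show 969e71d4-845a-40da-be2a-d4f2619cbc68 \
      -c OS-EXT-SRV-ATTR:host -c OS-EXT-SRV-ATTR:hypervisor_hostname -c fault

  # then, on the compute node reported above:
  grep 969e71d4-845a-40da-be2a-d4f2619cbc68 /var/log/nova/nova-compute.log | grep -iE 'error|traceback'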

Saverio


2017-02-16 8:40 GMT+01:00 Anwar Durrani :
> Hi team,
>
> I am running Kilo setup with 1 Controller and 7 nova nodes, i have added 3
> additional nova nodes, where i can see, in system information under compute
> nodes that nodes are up and running, but when i am trying to launch instance
> then it is prompting below error :
>
> Error: Failed to perform requested operation on instance "test", the
> instance has an error status: Please try again later [Error: No valid host
> was found. Exceeded max scheduling attempts 3 for instance
> 969e71d4-845a-40da-be2a-d4f2619cbc68. Last exception: [u'Traceback (most
> recent call last):\n', u' File
> "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2248, in
> _do].
>
>
> Thanks in advance.
>
>
> --
> Thanks & regards,
> Anwar M. Durrani
> +91-9923205011
>
>
>

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators