Re: [Openstack-operators] ElasticSearch on OpenStack

2016-09-03 Thread Tim Bell
Thanks. How’s the storage handled?

We’re seeing slow I/O on local storage (which is also limited in space) and 
latencies with Ceph for block storage.

Tim

From:  on behalf of David Medberry 
Date: Friday 2 September 2016 at 22:18
To: Tim Bell 
Cc: openstack-operators 
Subject: Re: [Openstack-operators] ElasticSearch on OpenStack

Nathan: The page at 
https://www.elastic.co/guide/en/elasticsearch/guide/current/heap-sizing.html 
gives you good advice on a maximum size for the Elasticsearch VM's memory.

Nathan: I suggest you pick a flavor with 64GB RAM or less, then base other 
sizing decisions off of that (i.e., choose a flavor with 64GB of RAM and as 
many vCPUs as possible for that RAM allocation, then base disk size on testing 
of your use case).

Nathan: give the Java heap 30GB, and leave the rest of the memory to the OS 
filesystem cache so that Lucene can make the best use of it.

Nathan: that's mostly it for tuning. Elasticsearch publishes many other docs 
with tuning recommendations, but there isn't anything specific to OpenStack 
besides the flavor choice. I personally chose the CPU count (8 vCPUs) such that 
all vCPUs for each VM would fit on a single NUMA node, which is a best practice 
for ESXi, though I'm not sure whether it applies to KVM.
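
For reference, a minimal sketch of what that looks like in practice. The 
flavor name and disk size below are illustrative placeholders, not 
recommendations; size the disk from your own testing:

  # Hypothetical 8 vCPU / 64GB flavor for ES data nodes
  openstack flavor create --vcpus 8 --ram 65536 --disk 500 es-data-node

  # On KVM/Nova you can at least ask for a single guest NUMA node
  # via flavor extra specs:
  openstack flavor set es-data-node --property hw:numa_nodes=1

  # Cap the JVM heap at ~30GB (keeps compressed object pointers enabled);
  # the rest of the RAM is left to the OS filesystem cache for Lucene.
  # For ES 2.x this goes in /etc/default/elasticsearch:
  ES_HEAP_SIZE=30g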

(resending for clarity)

On Fri, Sep 2, 2016 at 6:46 AM, David Medberry <openst...@medberry.net> wrote:
Hey Tim,
We've just started this effort. I'll see if the guy running the service can 
comment today.

On Fri, Sep 2, 2016 at 6:36 AM, Tim Bell <tim.b...@cern.ch> wrote:

Has anyone had experience running ElasticSearch on top of OpenStack VMs?

Are there any tuning recommendations?

Thanks
Tim

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators




Re: [Openstack-operators] Delete cinder service

2016-09-03 Thread William Josefsson
Thanks for sharing the blog post, Nick, it was definitely timely!
Thankfully my cluster is still fairly small, but this does seem to be
something one wants to keep under observation. thx will
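
For anyone else hitting this, the purge tooling looks roughly like the
sketch below. The 90-day retention is illustrative, and given the breakage
Nick mentions below it's worth testing against a copy of the DB first:

  # Read-only sanity check: count soft-deleted volume rows
  mysql cinder -e "SELECT COUNT(*) FROM volumes WHERE deleted = 1;"

  # Purge rows soft-deleted more than 90 days ago
  cinder-manage db purge 90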

On Fri, Sep 2, 2016 at 5:50 PM, Nick Jones  wrote:
> On 2 Sep 2016, at 9:28, William Josefsson wrote:
>
> [..]
>
>> Is there any cleanup of volume entries with deleted=1, or is it
>> normal for these old entries to lie around? thx will
>
>
> There’s a timely blog post from Matt Fischer on exactly that subject:
>
> http://www.mattfischer.com/blog/?p=744
>
> His comment regarding the Cinder DB suggests that this process was broken in
> Liberty, however.  It would be good to have confirmation that it's been
> rectified in Mitaka.
>
> --
>
> -Nick
>
> --
> DataCentred Limited registered in England and Wales no. 05611763

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Converged infrastructure

2016-09-03 Thread William Josefsson
Hi Matt, the hardware depends on your workload requirements,
especially when we are talking latency. I use Dell PE R630s for
controllers, and R730s for Compute/Ceph. The latter have 8x 400GB Intel
S3610 SSDs + 18x Hitachi 1.8TB SAS drives. Depending on workload
sensitivity, if you have DBs and long distances between servers you
probably want to look at fibre NICs and switches; Mellanox is popular
but doesn't come for free. I run pure 6x10G Intel ixgbe in the R730s,
all bonded with LACP to Arista 7050X copper switches; total capacity
per bond is 20Gbit (2x10G active-active). I will test a pure flash
array and see how it works too.

If you have a very large flash array, you may want to consider single
socket. You can watch this video where they run DBs etc. on pure
flash arrays and give some advice:
https://www.youtube.com/watch?v=OqlC7S3cUKs . thx will
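
In case it helps anyone reproduce the bonding setup, a minimal ifupdown
sketch for one 2x10G bond (NIC names are illustrative; the switch side
needs a matching LACP port-channel):

  # /etc/network/interfaces fragment
  auto bond0
  iface bond0 inet manual
      bond-mode 802.3ad               # LACP
      bond-slaves eno1 eno2           # illustrative NIC names
      bond-miimon 100                 # link monitoring interval (ms)
      bond-xmit-hash-policy layer3+4  # spread flows across both links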

On Wed, Aug 31, 2016 at 8:01 PM, Matt Jarvis
 wrote:
> Time once again to dredge this topic up and see what the wider operators
> community thinks this time :) There was a fair number of summit submissions
> for Barcelona talking about converged and hyper-converged infrastructure; it
> seems to be the topic du jour from vendors at the minute, despite feeling
> like we've been round this before with Nebula, Piston Cloud etc.
>
> Like a lot of others we run Ceph, and we absolutely don't converge our
> storage and compute nodes, for a variety of performance- and
> management-related reasons. In our experience, the hardware and tuning
> characteristics of the two types of node are pretty different, Ceph eats
> memory in any kind of recovery scenario, and it feels like creating a SPOF.
>
> Having said that, with pure SSD clusters becoming more common, some of those
> issues may well be mitigated, so is anyone doing this in production now? If
> so, what does your hardware platform look like, and are there issues with
> these kinds of architectures?
>
> Matt
>
> DataCentred Limited registered in England and Wales no. 05611763

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Converged infrastructure

2016-09-03 Thread Matt Jarvis
Thanks, William! I'm really interested in any contention issues between the
compute workload and the storage workload in converged nodes, and in how folks
who are doing this are managing that.

On 3 September 2016 at 13:21, William Josefsson 
wrote:

> Hi Matt, the hardware depends on your workload requirements,
> especially when we are talking latency. I use Dell PE R630s for
> controllers, and R730s for Compute/Ceph. The latter have 8x 400GB Intel
> S3610 SSDs + 18x Hitachi 1.8TB SAS drives. Depending on workload
> sensitivity, if you have DBs and long distances between servers you
> probably want to look at fibre NICs and switches; Mellanox is popular
> but doesn't come for free. I run pure 6x10G Intel ixgbe in the R730s,
> all bonded with LACP to Arista 7050X copper switches; total capacity
> per bond is 20Gbit (2x10G active-active). I will test a pure flash
> array and see how it works too.
>
> If you have a very large flash array, you may want to consider single
> socket. You can watch this video where they run DBs etc. on pure
> flash arrays and give some advice:
> https://www.youtube.com/watch?v=OqlC7S3cUKs . thx will
>
> On Wed, Aug 31, 2016 at 8:01 PM, Matt Jarvis
>  wrote:
> > [..]
>

-- 
DataCentred Limited registered in England and Wales no. 05611763
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] OpenStack Consult Opp (Network/Horizon/VNC)

2016-09-03 Thread Adam Lawson
Is anyone *against* creating a mailing list for this sort of dialog? It's for
short-term, immediate "need/troubleshooting/my hair is on fire" sorts of
requests.

Also, if I wanted to push this forward, what's the next step? I'm not
seeing anyone volunteering other than JJ, who tried, and that's the last I
heard of it.

//adam


*Adam Lawson*

AQORN, Inc.
427 North Tatnall Street
Ste. 58461
Wilmington, Delaware 19801-2230
Toll-free: (844) 4-AQORN-NOW ext. 101
International: +1 302-387-4660
Direct: +1 916-246-2072

On Mon, Feb 8, 2016 at 5:20 PM, Robert Starmer  wrote:

> I thought this got stuck in the "do we need another list" and "well, what
> is our alternative" discussion.  So, no, I don't recall any progress.  I
> still think it'd be useful to have a list for this class of discussion.
>
> Robert
>
> On Wed, Feb 3, 2016 at 6:01 PM, Adam Lawson  wrote:
>
>> Hey all,
>>
>> Just curious how this was progressing. Is there an approval waiting to
>> happen or something in the background?
>>
>> //adam
>>
>>
>> *Adam Lawson*
>>
>> AQORN, Inc.
>> 427 North Tatnall Street
>> Ste. 58461
>> Wilmington, Delaware 19801-2230
>> Toll-free: (844) 4-AQORN-NOW ext. 101
>> International: +1 302-387-4660
>> Direct: +1 916-246-2072
>>
>> On Tue, Nov 24, 2015 at 1:55 PM, JJ Asghar  wrote:
>>
>>> On 11/24/15 3:13 PM, Adam Lawson wrote:
>>> > Yep. I knew I was walking a gray line. If I had time to let folks
>>> > know about an opportunity and wait for folks to visit and reply I
>>> > totally would. Otherwise, I would definitely echo a job-related
>>> > mailing list if that could be setup?
>>> >
>>> >
>>> > On Tue, Nov 24, 2015 at 12:51 PM, JJ Asghar wrote:
>>> >
>>> >
>>> > As another place, other than the job posting site I just created
>>> > this review[1].
>>> >
>>> > If you like this idea please +1 it.
>>> >
>>> >
>>> > [1]: https://review.openstack.org/249415
>>>
>>>
>>> Yep, that's that review above. Go ahead and +1 it also, and we'll see
>>> if we can get the list put together. :D
>>>
>>>
>>> --
>>> Best Regards,
>>> JJ Asghar
>>> c: 512.619.0722 t: @jjasghar irc: j^2
>>>
>>
>>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] OpenStack Consult Opp (Network/Horizon/VNC)

2016-09-03 Thread Adam Lawson
I ask because I have another opportunity, but I don't know where to
evangelize it if not among those who might be interested.

//adam


*Adam Lawson*

AQORN, Inc.
427 North Tatnall Street
Ste. 58461
Wilmington, Delaware 19801-2230
Toll-free: (844) 4-AQORN-NOW ext. 101
International: +1 302-387-4660
Direct: +1 916-246-2072

On Sat, Sep 3, 2016 at 7:34 PM, Adam Lawson  wrote:

> Is anyone *against* creating a mailing list for this sort of dialog? It's for
> short-term, immediate "need/troubleshooting/my hair is on fire" sorts of
> requests.
>
> Also, if I wanted to push this forward, what's the next step? I'm not
> seeing anyone volunteering other than JJ, who tried, and that's the last I
> heard of it.
>
> //adam
>
> [..]
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] OpenStack Consult Opp (Network/Horizon/VNC)

2016-09-03 Thread John van Ommen
Hewlett Packard Enterprise has a professional services team dedicated to
fulfilling projects like this. We've been doing this for years now, with
numerous clients all over the world.

I can connect you with a sales rep if you'd like. (I'm on the integration
side, not sales.)

John

On Sep 3, 2016 7:40 PM, "Adam Lawson"  wrote:

> Is anyone *against* creating a mailing list for this sort of dialog? It's for
> short-term, immediate "need/troubleshooting/my hair is on fire" sorts of
> requests.
>
> Also, if I wanted to push this forward, what's the next step? I'm not
> seeing anyone volunteering other than JJ, who tried, and that's the last I
> heard of it.
>
> //adam
>
> [..]
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators