Re: [Openstack-operators] Tokyo Summit Ops Design Summit Tracks - Agenda Brainstorming

2015-09-08 Thread Tom Fifield

Ping!

This is your chance to provide input on our design summit track for 
Tokyo. Add your ideas on the etherpad below!



https://etherpad.openstack.org/p/TYO-ops-meetup


On 03/09/15 03:27, Tom Fifield wrote:

Hi all,

Thanks to those who made it to the recent meetup in Palo Alto. It was a
fantastic couple of days, and many are excited to get started talking
about our ops track at the Tokyo design summit.


Recall that this is in addition to the presentations in the operations
and other conference tracks. It's aimed at giving us a design-summit-style
place to congregate, swap best practices and ideas, and give feedback.


As usual, we're working to act on the feedback from all past events to
make this one better than ever. One piece of feedback we continue to work
on is the need to see action happen as a result of this event, so please -
when you are suggesting sessions in the etherpad below, try to phrase
them in a way that is likely to result in things happening afterward.


**

Please propose session ideas on:

https://etherpad.openstack.org/p/TYO-ops-meetup

ensuring each session suggestion will have a concrete result.

**


The room allocations are still being worked out, but the current
thinking is that we will interleave general sessions and working groups
across Tuesday and Wednesday, to allow ops to attend the
cross-project sessions.


More as it comes, and as always, further information about ops meetups
and notes from the past can be found on the wiki @:

https://wiki.openstack.org/wiki/Operations/Meetups

Finally, don't forget to register ASAP!
http://www.eventbrite.com/e/openstack-summit-october-2015-tokyo-tickets-17356780598


Regards,


Tom

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators





[Openstack-operators] [Tools/Monitoring] Finding a new Tools and Monitoring Working group Time

2015-09-08 Thread Joseph Bajin
Hi Everyone,

One of the items taken from the Operators Meet-up was that the time (10am
EST) was too early for west coast people. I've been looking at the
meeting calendar and have a few suggested times that I'd like to see if
they would work for others, to get more participation.

Every other Wednesday - (9/9, 9/23, etc.)

Suggested times (all in EST):

- 1pm
- 2pm
- 3pm

If Wednesdays do not work, please let me know as well, and I can find
additional times.  It seemed that Wednesdays worked for most when we first
stood up the group, but it could be moved.

--Joe
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [Tools/Monitoring] Finding a new Tools and Monitoring Working group Time

2015-09-08 Thread Behzad Dastur
I have been meaning to join this meeting. Thanks for moving the meeting by
a couple of hours.

Behzad

On Tue, Sep 8, 2015 at 4:42 AM, Joseph Bajin  wrote:

> Hi Everyone,
>
> One of the items taken from the Operators Meet-up was that the time (10am
> EST) was too early for the west coast people.   I've been looking at the
> meeting calendar and have a few suggested times that I'd like to see if
> they would work for others to get more participation.
>
> Every other Wednesday - (9/9, 9/23, etc.)
>
> Suggested times - (All these are in EST)
>
> - 1pm
> - 2pm
> - 3pm
>
> If Wednesdays do not work, please let me know as well, and I can find
> additional times.  It seemed that Wednesdays worked for most when we first
> stood up the group, but it could be moved.
>
> --Joe
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [aodh][ceilometer] (re)introducing Aodh - OpenStack Alarming

2015-09-08 Thread gord chung

hi all,

as you may have heard, in an effort to simplify OpenStack Telemetry 
(Ceilometer) and streamline its code, the alarming functionality 
provided by OpenStack Telemetry has been moved to its own 
repository[1]. The new project is called Aodh[2]. the idea is that Aodh 
will grow as its own entity, with its own distinct core team, under 
the Telemetry umbrella. this way, we will have a focused team 
specifically for the alarming aspects of Telemetry. as always, feedback 
and contributions are welcomed[3].


in the coming days, we will release a migration/changes document to 
explain the differences between the original alarming code and Aodh. every 
effort was made to maintain configuration compatibility, such that it 
should be possible to take an existing configuration and reuse it for an 
Aodh deployment.
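
as a rough illustration of that configuration reuse, a hypothetical aodh.conf might look like the sketch below. this is not from the migration document (which hasn't landed yet) - the connection string and credentials are placeholders, and exact option names should be checked against the official docs:

```ini
# Hypothetical aodh.conf sketch: the database/auth sections mirror what the
# ceilometer alarm services already carry; all values here are placeholders.
[database]
# point aodh at its own database (or reuse the ceilometer one while migrating)
connection = mysql+pymysql://aodh:secret@dbhost/aodh

[keystone_authtoken]
# same keystonemiddleware options the ceilometer API already uses
auth_uri = http://keystone:5000
admin_user = aodh
admin_password = secret
admin_tenant_name = service
```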


some quick notes:
- the existing alarming code will remain consumable for the Liberty release 
(but in a deprecated state)
- all new functionality (i.e. inline/streaming alarm evaluation) will be 
added only to Aodh
- client and API support has been added to the common Ceilometer interfaces 
such that, if Aodh is enabled, the client can still be used and will 
redirect to Aodh

- mailing list items can be tagged with [aodh]
- irc discussions will remain under #openstack-ceilometer

many thanks to all those who worked on the code split and integration 
testing.


[1] https://github.com/openstack/aodh
[2] http://www.behindthename.com/name/aodh
[3] https://launchpad.net/aodh

cheers,

--
gord


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Multi-site Keystone & Galera

2015-09-08 Thread Jonathan Proulx
Hi All,

I'm pretty close to opening a second region in my cloud at a second
physical location.

The plan so far had been to share only Keystone between the regions
(nova, glance, cinder, etc. would be distinct) and to implement this by
using MariaDB with Galera replication between sites, with each site
having its own gmcast.segment to minimize the long-distance chatter,
plus a 3rd site with a Galera arbitrator for the obvious reason.

Today I was warned against using this in a multi-writer setup. I'd planned
on one writer per physical location.

I had been under the impression this was the 'done thing' for
geographically separate regions - was I wrong? Should I replicate just
for DR and always pick a single possible remote write site?

The site-to-site link is 2x10G (different physical paths); the short link is
2.2ms average latency (2.1ms low, 2.5ms high over 250 packets), and the long
link shouldn't be much longer, but isn't yet complete to test.
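
For the record, the segment-per-site layout described above would look
roughly like the fragment below in my.cnf - a hedged sketch, with host
names, paths, and cluster name all illustrative rather than from our
actual deployment:

```ini
# Hypothetical my.cnf fragment for a site-A Galera node.
[mysqld]
wsrep_provider = /usr/lib/galera/libgalera_smm.so
wsrep_cluster_name = keystone_cluster
wsrep_cluster_address = gcomm://siteA-db1,siteA-db2,siteB-db1,siteB-db2
# Nodes sharing a segment number replicate locally; only one node per
# segment relays writesets over the WAN, which is what keeps the
# cross-site chatter down.
wsrep_provider_options = "gmcast.segment=0"
# Site-B nodes would set gmcast.segment=1. The third site runs garbd
# (the Galera Arbitrator), which votes in quorum but stores no data.
```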

-Jon

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Multi-site Keystone & Galera

2015-09-08 Thread Jay Pipes

On 09/08/2015 04:44 PM, Jonathan Proulx wrote:

Hi All,

I'm pretty close to opening a second region in my cloud at a second
physical location.

The plan so far had been to only share keystone between the regions
(nova, glance, cinder etc would be distinct) and implement this by
using MariaDB with galera replication between sites with each site
having its own gmcast.segment to minimize the long distance chatter
plus a 3rd site with a galera arbitrator for the obvious reason.


I would also strongly consider adding the Glance registry database to 
the same cross-WAN Galera cluster. At AT&T, we had such a setup for 
Keystone and Glance registry databases at 10+ deployment zones across 6+ 
datacenters across the nation. Besides adjusting the latency timeout for 
the Galera settings, we made no other modifications to our 
internal-to-an-availability-zone Nova database Galera cluster settings.
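
The latency-timeout adjustment mentioned above is typically done through
the evs.* provider options. A sketch of what that tuning can look like -
the values are illustrative guesses to show the knobs, not the settings
we actually ran:

```ini
# Illustrative WAN-tuning fragment for a Galera cluster; tune the values
# against measured inter-site latency rather than copying these.
[mysqld]
wsrep_provider_options = "evs.keepalive_period=PT3S; evs.suspect_timeout=PT15S; evs.inactive_timeout=PT30S; evs.install_timeout=PT30S"
# suspect_timeout / inactive_timeout control how long a silent node is
# tolerated before being suspected and then evicted - raise them so WAN
# latency spikes don't needlessly partition the cluster.
# keepalive_period sets how often idle keepalives are sent.
```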


The Keystone and Glance registry databases have a virtually identical 
read and write data access pattern: small record/row size, small number 
of INSERTs, virtually no UPDATE and DELETE calls, and heavy SELECT 
operations on a small data set. This data access pattern is an ideal fit 
for a WAN-replicated Galera cluster.



Today I was warned against using this in a multi writer setup. I'd planned
on one writer per physical location.


I don't know who warned you about this, but it's not an issue in the 
real world. We ran in full multi-writer mode, with each deployment zone 
writing to and reading from its nearest Galera cluster nodes. No issues.


Best,
-jay


I had been under the impression this was the 'done thing' for
geographically separate regions, was I wrong? Should I replicate just
for DR and always pick a single possible remote write site?

site to site link is 2x10G (different physical paths), short link is
2.2ms average latency (2.1ms low, 2.5ms high over 250 packets) long
link shouldn't be much longer but isn't yet complete to test.

-Jon




___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Multi-site Keystone & Galera

2015-09-08 Thread Jonathan Proulx
Thanks Jay & Matt,

That's basically what I thought, so I'll keep thinking it :)

We're not replicating the glance DB because images will be stored in
different local Ceph storage on each side, so the images won't be
directly available.  We thought about moving back to a file backend
and rsync'ing, but RBD gets us lots of fun things we want to keep
(quick start, copy-on-write thin-cloned ephemeral storage, etc...) so we
decided to live with making our users copy images around.

-Jon



On Tue, Sep 8, 2015 at 5:00 PM, Jay Pipes  wrote:
> On 09/08/2015 04:44 PM, Jonathan Proulx wrote:
>>
>> Hi All,
>>
>> I'm pretty close to opening a second region in my cloud at a second
>> physical location.
>>
>> The plan so far had been to only share keystone between the regions
>> (nova, glance, cinder etc would be distinct) and implement this by
>> using MariaDB with galera replication between sites with each site
>> having its own gmcast.segment to minimize the long distance chatter
>> plus a 3rd site with a galera arbitrator for the obvious reason.
>
>
> I would also strongly consider adding the Glance registry database to the
> same cross-WAN Galera cluster. At AT&T, we had such a setup for Keystone and
> Glance registry databases at 10+ deployment zones across 6+ datacenters
> across the nation. Besides adjusting the latency timeout for the Galera
> settings, we made no other modifications to our
> internal-to-an-availability-zone Nova database Galera cluster settings.
>
> The Keystone and Glance registry databases have a virtually identical read
> and write data access pattern: small record/row size, small number of
> INSERTs, virtually no UPDATE and DELETE calls, and heavy SELECT operations
> on a small data set. This data access pattern is an ideal fit for a
> WAN-replicated Galera cluster.
>
>> Today I was warned against using this in a multi writer setup. I'd planned
>> on one writer per physical location.
>
>
> I don't know who warned you about this, but it's not an issue in the real
> world. We ran in full multi-writer mode, with each deployment zone writing
> to and reading from its nearest Galera cluster nodes. No issues.
>
> Best,
> -jay
>
>> I had been under the impression this was the 'done thing' for
>> geographically separate regions, was I wrong? Should I replicate just
>> for DR and always pick a single possible remote write site?
>>
>> site to site link is 2x10G (different physical paths), short link is
>> 2.2ms average latency (2.1ms low, 2.5ms high over 250 packets) long
>> link shouldn't be much longer but isn't yet complete to test.
>>
>> -Jon

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators