Re: [Openstack-operators] nova-neutron with vsphere

2015-09-28 Thread Miko Bello

   Thanks, all, for your contributions to my request.
   I already knew VIO (previous version 1.0), but I just wanted to see what I
could (or could not) do without NSX.

   Miko
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [ceilometer] OpenStack Telemetry user survey

2015-09-28 Thread gord chung

Hello,

The OpenStack Telemetry (aka Ceilometer) team would like to collect 
feedback and information from its user base in order to drive future 
improvements to the project.  To do so, we have developed a survey; it 
should take about 15 minutes to complete.
The questions are fairly technical, so please ensure that you ask someone 
within your organization who is hands-on with Ceilometer.


https://goo.gl/rKNhM1

On behalf of the Ceilometer community, we thank you for the time you 
will spend in helping us understand your needs.


--
Gordon Chung
Ceilometer PTL

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [ceilometer] OpenStack Telemetry user survey

2015-09-28 Thread Tim Bell
There seems to be a lot of overlap with the user survey which has just 
finished.  

Feel free to get in touch on the user-commit...@lists.openstack.org if you
have questions to suggest to the survey or would like specific queries to be
run on the anonymised data.

There is a significant risk of over-surveying the operator community, and
then we would lose all the valuable feedback.

Tim

> -Original Message-
> From: gord chung [mailto:g...@live.ca]
> Sent: 28 September 2015 17:18
> To: openstack-operators@lists.openstack.org
> Subject: [Openstack-operators] [ceilometer] OpenStack Telemetry user
> survey
> 
> Hello,
> 
> The OpenStack Telemetry (aka Ceilometer) team would like to collect
> feedback and information from its user base in order to drive future
> improvements to the project.  To do so, we have developed a survey. It
> should take about 15min to complete.
> Questions are fairly technical, so please ensure that you ask someone
within
> your organization that is hands on using Ceilometer.
> 
>  https://goo.gl/rKNhM1
> 
> On behalf of the Ceilometer community, we thank you for the time you will
> spend in helping us understand your needs.
> 
> --
> Gordon Chung
> Ceilometer PTL
> 
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


smime.p7s
Description: S/MIME cryptographic signature
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Multi-site Keystone & Galera

2015-09-28 Thread Curtis
Hi,

For organizations with the keystone database shared across regions via
galera, do you just have keystone (and perhaps glance, as was
suggested) in its own cluster that is multi-region, and the other
databases in a cluster that exists only in one region (i.e. local
to their region)? Or are you giving other services their own
database in the single multi-region cluster and thus replicating all
the databases? Or is there another solution?

Thanks,
Curtis.

On Tue, Sep 8, 2015 at 3:22 PM, Jonathan Proulx  wrote:
> Thanks Jay & Matt,
>
> That's basically what I thought, so I'll keep thinking it :)
>
> We're not replicating glance DB because images will be stored in
> different local Ceph storage on each side so the images won't be
> directly available.  We thought about moving back to a file back end
> and rsync'ing but RBD gets us lots of fun things we want to keep
> (quick start, copy on write thin cloned ephemeral storage etc...) so
> decided to live with making our users copy images around.
>
> -Jon
>
>
>
> On Tue, Sep 8, 2015 at 5:00 PM, Jay Pipes  wrote:
>> On 09/08/2015 04:44 PM, Jonathan Proulx wrote:
>>>
>>> Hi All,
>>>
>>> I'm pretty close to opening a second region in my cloud at a second
>>> physical location.
>>>
>>> The plan so far had been to only share keystone between the regions
>>> (nova, glance, cinder etc would be distinct) and implement this by
>>> using MariaDB with galera replication between sites with each site
>>> having it's own gmcast_segment to minimize the long distance catter
>>> plus a 3rd site with a galera arbitrator for the obvious reason.
>>
>>
>> I would also strongly consider adding the Glance registry database to the
>> same cross-WAN Galera cluster. At AT&T, we had such a setup for Keystone and
>> Glance registry databases at 10+ deployment zones across 6+ datacenters
>> across the nation. Besides adjusting the latency timeout for the Galera
>> settings, we made no other modifications to our
>> internal-to-an-availability-zone Nova database Galera cluster settings.
>>
>> The Keystone and Glance registry databases have a virtually identical read
>> and write data access pattern: small record/row size, small number of
>> INSERTs, virtually no UPDATE and DELETE calls, and heavy SELECT operations
>> on a small data set. This data access pattern is an ideal fit for a
>> WAN-replicated Galera cluster.
>>
>>> Today I was warned against using this in a multi writer setup. I'd planned
>>>   on one writer per physical location.
>>
>>
>> I don't know who warned you about this, but it's not an issue in the real
>> world. We ran in full multi-writer mode, with each deployment zone writing
>> to and reading from its nearest Galera cluster nodes. No issues.
>>
>> Best,
>> -jay
>>
>>> I had been under the impression this was the 'done thing' for
>>> geographically sepperate regions, was I wrong? Should I replicate just
>>> for DR and always pick a single possible remote write site?
>>>
>>> site to site link is 2x10G (different physical paths), short link is
>>> 2.2ms average latency (2.1ms low, 2.5ms high over 250 packets) long
>>> link shouldn't be much longer but isn't yet complete to test.
>>>
>>> -Jon
>>>
>>> ___
>>> OpenStack-operators mailing list
>>> OpenStack-operators@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>>
>>
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators



-- 
Twitter: @serverascode
Blog: serverascode.com

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
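
For reference, the gmcast_segment and latency-timeout tuning discussed above
boils down to a handful of wsrep provider options. A minimal sketch, assuming
MariaDB Galera with database nodes at two sites and a garbd arbitrator at a
third; the hostnames, segment numbers and timeout values are illustrative only:

    # /etc/mysql/my.cnf on the site-A keystone DB nodes
    # (site-B nodes would use gmcast.segment=2)
    [mysqld]
    wsrep_cluster_name    = keystone_global
    wsrep_cluster_address = gcomm://ksdb1.site-a,ksdb2.site-a,ksdb1.site-b,ksdb2.site-b
    # one gmcast.segment value per site keeps replication traffic local where
    # possible; the evs timeouts are relaxed so WAN latency blips don't evict nodes
    wsrep_provider_options = "gmcast.segment=1; evs.suspect_timeout=PT30S; evs.inactive_timeout=PT1M; evs.keepalive_period=PT3S"

    # arbitrator at the third site (quorum only, no data)
    garbd --group keystone_global \
          --address "gcomm://ksdb1.site-a,ksdb1.site-b" \
          --options "gmcast.segment=3"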


Re: [Openstack-operators] Multi-site Keystone & Galera

2015-09-28 Thread Matt Fischer
Yes. We have a separate DB cluster for global stuff like Keystone &
Designate, and a regional cluster for things like nova/neutron etc.

On Mon, Sep 28, 2015 at 10:43 AM, Curtis  wrote:

> Hi,
>
> For organizations with the keystone database shared across regions via
> galera, do you just have keystone (and perhaps glance as was
> suggested) in its own cluster that is multi-region, and the other
> databases in a cluster that is only in one region (ie. just local
> their their region)? Or are you giving other services their own
> database in the single multi-region cluster and thus replicating all
> the databases? Or is there another solution?
>
> Thanks,
> Curtis.
>
> On Tue, Sep 8, 2015 at 3:22 PM, Jonathan Proulx  wrote:
> > Thanks Jay & Matt,
> >
> > That's basically what I thought, so I'll keep thinking it :)
> >
> > We're not replicating glance DB because images will be stored in
> > different local Ceph storage on each side so the images won't be
> > directly available.  We thought about moving back to a file back end
> > and rsync'ing but RBD gets us lots of fun things we want to keep
> > (quick start, copy on write thin cloned ephemeral storage etc...) so
> > decided to live with making our users copy images around.
> >
> > -Jon
> >
> >
> >
> > On Tue, Sep 8, 2015 at 5:00 PM, Jay Pipes  wrote:
> >> On 09/08/2015 04:44 PM, Jonathan Proulx wrote:
> >>>
> >>> Hi All,
> >>>
> >>> I'm pretty close to opening a second region in my cloud at a second
> >>> physical location.
> >>>
> >>> The plan so far had been to only share keystone between the regions
> >>> (nova, glance, cinder etc would be distinct) and implement this by
> >>> using MariaDB with galera replication between sites with each site
> >>> having it's own gmcast_segment to minimize the long distance catter
> >>> plus a 3rd site with a galera arbitrator for the obvious reason.
> >>
> >>
> >> I would also strongly consider adding the Glance registry database to
> the
> >> same cross-WAN Galera cluster. At AT&T, we had such a setup for
> Keystone and
> >> Glance registry databases at 10+ deployment zones across 6+ datacenters
> >> across the nation. Besides adjusting the latency timeout for the Galera
> >> settings, we made no other modifications to our
> >> internal-to-an-availability-zone Nova database Galera cluster settings.
> >>
> >> The Keystone and Glance registry databases have a virtually identical
> read
> >> and write data access pattern: small record/row size, small number of
> >> INSERTs, virtually no UPDATE and DELETE calls, and heavy SELECT
> operations
> >> on a small data set. This data access pattern is an ideal fit for a
> >> WAN-replicated Galera cluster.
> >>
> >>> Today I was warned against using this in a multi writer setup. I'd
> planned
> >>>   on one writer per physical location.
> >>
> >>
> >> I don't know who warned you about this, but it's not an issue in the
> real
> >> world. We ran in full multi-writer mode, with each deployment zone
> writing
> >> to and reading from its nearest Galera cluster nodes. No issues.
> >>
> >> Best,
> >> -jay
> >>
> >>> I had been under the impression this was the 'done thing' for
> >>> geographically sepperate regions, was I wrong? Should I replicate just
> >>> for DR and always pick a single possible remote write site?
> >>>
> >>> site to site link is 2x10G (different physical paths), short link is
> >>> 2.2ms average latency (2.1ms low, 2.5ms high over 250 packets) long
> >>> link shouldn't be much longer but isn't yet complete to test.
> >>>
> >>> -Jon
> >>>
> >>> ___
> >>> OpenStack-operators mailing list
> >>> OpenStack-operators@lists.openstack.org
> >>>
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> >>>
> >>
> >> ___
> >> OpenStack-operators mailing list
> >> OpenStack-operators@lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> >
> > ___
> > OpenStack-operators mailing list
> > OpenStack-operators@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
>
> --
> Twitter: @serverascode
> Blog: serverascode.com
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
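
In configuration terms the split is just a matter of which cluster each
service's connection string points at. A rough sketch, assuming oslo.db-style
connection URLs; the hostnames, VIPs and passwords are placeholders:

    # keystone.conf (any region) -- global, WAN-replicated cluster
    [database]
    connection = mysql://keystone:KEYSTONE_DBPASS@global-db-vip.example.com/keystone

    # nova.conf (region A only) -- that region's local cluster
    [database]
    connection = mysql://nova:NOVA_DBPASS@regiona-db-vip.example.com/nova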


Re: [Openstack-operators] Multi-site Keystone & Galera

2015-09-28 Thread Tim Bell
CERN does the same. The memcache functions in Keystone are very useful for 
scaling it up.

 

Tim

 

From: Matt Fischer [mailto:m...@mattfischer.com] 
Sent: 28 September 2015 18:51
To: Curtis 
Cc: openstack-operators@lists.openstack.org; Jonathan Proulx 

Subject: Re: [Openstack-operators] Multi-site Keystone & Galera

 

Yes. We have a separate DB cluster for global stuff like Keystone & Designate, 
and a regional cluster for things like nova/neutron etc.

 

On Mon, Sep 28, 2015 at 10:43 AM, Curtis <serverasc...@gmail.com> wrote:

Hi,

For organizations with the keystone database shared across regions via
galera, do you just have keystone (and perhaps glance as was
suggested) in its own cluster that is multi-region, and the other
databases in a cluster that is only in one region (ie. just local
their their region)? Or are you giving other services their own
database in the single multi-region cluster and thus replicating all
the databases? Or is there another solution?

Thanks,
Curtis.


On Tue, Sep 8, 2015 at 3:22 PM, Jonathan Proulx <j...@jonproulx.com> wrote:
> Thanks Jay & Matt,
>
> That's basically what I thought, so I'll keep thinking it :)
>
> We're not replicating glance DB because images will be stored in
> different local Ceph storage on each side so the images won't be
> directly available.  We thought about moving back to a file back end
> and rsync'ing but RBD gets us lots of fun things we want to keep
> (quick start, copy on write thin cloned ephemeral storage etc...) so
> decided to live with making our users copy images around.
>
> -Jon
>
>
>
> On Tue, Sep 8, 2015 at 5:00 PM, Jay Pipes  wrote:
>> On 09/08/2015 04:44 PM, Jonathan Proulx wrote:
>>>
>>> Hi All,
>>>
>>> I'm pretty close to opening a second region in my cloud at a second
>>> physical location.
>>>
>>> The plan so far had been to only share keystone between the regions
>>> (nova, glance, cinder etc would be distinct) and implement this by
>>> using MariaDB with galera replication between sites with each site
>>> having it's own gmcast_segment to minimize the long distance catter
>>> plus a 3rd site with a galera arbitrator for the obvious reason.
>>
>>
>> I would also strongly consider adding the Glance registry database to the
>> same cross-WAN Galera cluster. At AT&T, we had such a setup for Keystone and
>> Glance registry databases at 10+ deployment zones across 6+ datacenters
>> across the nation. Besides adjusting the latency timeout for the Galera
>> settings, we made no other modifications to our
>> internal-to-an-availability-zone Nova database Galera cluster settings.
>>
>> The Keystone and Glance registry databases have a virtually identical read
>> and write data access pattern: small record/row size, small number of
>> INSERTs, virtually no UPDATE and DELETE calls, and heavy SELECT operations
>> on a small data set. This data access pattern is an ideal fit for a
>> WAN-replicated Galera cluster.
>>
>>> Today I was warned against using this in a multi writer setup. I'd planned
>>>   on one writer per physical location.
>>
>>
>> I don't know who warned you about this, but it's not an issue in the real
>> world. We ran in full multi-writer mode, with each deployment zone writing
>> to and reading from its nearest Galera cluster nodes. No issues.
>>
>> Best,
>> -jay
>>
>>> I had been under the impression this was the 'done thing' for
>>> geographically sepperate regions, was I wrong? Should I replicate just
>>> for DR and always pick a single possible remote write site?
>>>
>>> site to site link is 2x10G (different physical paths), short link is
>>> 2.2ms average latency (2.1ms low, 2.5ms high over 250 packets) long
>>> link shouldn't be much longer but isn't yet complete to test.
>>>
>>> -Jon
>>>
>>> ___
>>> OpenStack-operators mailing list
>>> OpenStack-operators@lists.openstack.org 
>>>  
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>>
>>
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org 
>>  
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org 
>  
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators




--
Twitter: @serverascode
Blog: serverascode.com  


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org 
 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

 



smime.p7s
Description: S/MIME cryptographic signature
___

Re: [Openstack-operators] Multi-site Keystone & Galera

2015-09-28 Thread Federico Michele Facca
Considering that latency increases across DCs, and that latency can easily
cause split-brain under heavy read/write load, data for services that are not
global should never be replicated across sites using a synchronous approach
(an asynchronous one, for disaster recovery, may be good enough!).

Indeed, as Tim said, one of the most important things regarding keystone
distribution is leveraging memcached.

You deal with two totally different data types within keystone:
- users/projects/domains (quite static, i.e. you don't change tons of these
every second - so mostly reads) -> perfect for DB persistence
- tokens (quite dynamic - lots of writes and reads) -> better managed by
memcached.

An OPNFV group did an interesting analysis of multisite IdM:

https://etherpad.opnfv.org/p/multisite_identity_management

I think most of the possible architectures are discussed there, with pros and cons.

Br,
Federico

--
Future Internet is closer than you think!
http://www.fiware.org

Official Mirantis partner for OpenStack Training
https://www.create-net.org/community/openstack-training

-- 
Dr. Federico M. Facca

CREATE-NET
Via alla Cascata 56/D
38123 Povo Trento (Italy)

P  +39 0461 312471
M +39 334 6049758
E  federico.fa...@create-net.org
T @chicco785
W  www.create-net.org

On Mon, Sep 28, 2015 at 7:17 PM, Tim Bell  wrote:

> CERN do the same…. The memcache functions on keystone are very useful for
> scaling it up.
>
>
>
> Tim
>
>
>
> *From:* Matt Fischer [mailto:m...@mattfischer.com]
> *Sent:* 28 September 2015 18:51
> *To:* Curtis 
> *Cc:* openstack-operators@lists.openstack.org; Jonathan Proulx <
> j...@jonproulx.com>
> *Subject:* Re: [Openstack-operators] Multi-site Keystone & Galera
>
>
>
> Yes. We have a separate DB cluster for global stuff like Keystone &
> Designate, and a regional cluster for things like nova/neutron etc.
>
>
>
> On Mon, Sep 28, 2015 at 10:43 AM, Curtis  wrote:
>
> Hi,
>
> For organizations with the keystone database shared across regions via
> galera, do you just have keystone (and perhaps glance as was
> suggested) in its own cluster that is multi-region, and the other
> databases in a cluster that is only in one region (ie. just local
> their their region)? Or are you giving other services their own
> database in the single multi-region cluster and thus replicating all
> the databases? Or is there another solution?
>
> Thanks,
> Curtis.
>
>
> On Tue, Sep 8, 2015 at 3:22 PM, Jonathan Proulx  wrote:
> > Thanks Jay & Matt,
> >
> > That's basically what I thought, so I'll keep thinking it :)
> >
> > We're not replicating glance DB because images will be stored in
> > different local Ceph storage on each side so the images won't be
> > directly available.  We thought about moving back to a file back end
> > and rsync'ing but RBD gets us lots of fun things we want to keep
> > (quick start, copy on write thin cloned ephemeral storage etc...) so
> > decided to live with making our users copy images around.
> >
> > -Jon
> >
> >
> >
> > On Tue, Sep 8, 2015 at 5:00 PM, Jay Pipes  wrote:
> >> On 09/08/2015 04:44 PM, Jonathan Proulx wrote:
> >>>
> >>> Hi All,
> >>>
> >>> I'm pretty close to opening a second region in my cloud at a second
> >>> physical location.
> >>>
> >>> The plan so far had been to only share keystone between the regions
> >>> (nova, glance, cinder etc would be distinct) and implement this by
> >>> using MariaDB with galera replication between sites with each site
> >>> having it's own gmcast_segment to minimize the long distance catter
> >>> plus a 3rd site with a galera arbitrator for the obvious reason.
> >>
> >>
> >> I would also strongly consider adding the Glance registry database to
> the
> >> same cross-WAN Galera cluster. At AT&T, we had such a setup for
> Keystone and
> >> Glance registry databases at 10+ deployment zones across 6+ datacenters
> >> across the nation. Besides adjusting the latency timeout for the Galera
> >> settings, we made no other modifications to our
> >> internal-to-an-availability-zone Nova database Galera cluster settings.
> >>
> >> The Keystone and Glance registry databases have a virtually identical
> read
> >> and write data access pattern: small record/row size, small number of
> >> INSERTs, virtually no UPDATE and DELETE calls, and heavy SELECT
> operations
> >> on a small data set. This data access pattern is an ideal fit for a
> >> WAN-replicated Galera cluster.
> >>
> >>> Today I was warned against using this in a multi writer setup. I'd
> planned
> >>>   on one writer per physical location.
> >>
> >>
> >> I don't know who warned you about this, but it's not an issue in the
> real
> >> world. We ran in full multi-writer mode, with each deployment zone
> writing
> >> to and reading from its nearest Galera cluster nodes. No issues.
> >>
> >> Best,
> >> -jay
> >>
> >>> I had been under the impression this was the 'done thing' for
> >>> geographically sepperate regions, was I wrong? Should I replicate just
> >>> for DR and always pick a single possible re
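
A minimal keystone.conf sketch of the memcached approach Tim and Federico
describe -- cache validation results in memcached and keep token persistence
out of the WAN-replicated Galera cluster. The memcached hosts are placeholders,
and the driver path shown is the Kilo-era class name (newer releases accept
short names, and Fernet tokens need no persistence backend at all):

    [cache]
    enabled = true
    backend = dogpile.cache.memcached
    backend_argument = url:memcache1.example.com:11211

    [memcache]
    servers = memcache1.example.com:11211,memcache2.example.com:11211

    [token]
    # tokens live in memcached, not in the Galera cluster
    driver = keystone.token.persistence.backends.memcache.Token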

Re: [Openstack-operators] Multi-site Keystone & Galera

2015-09-28 Thread Jay Pipes

On 09/28/2015 12:51 PM, Matt Fischer wrote:

Yes. We have a separate DB cluster for global stuff like Keystone &
Designate, and a regional cluster for things like nova/neutron etc.


Yep, this ^

-jay

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [ceilometer] OpenStack Telemetry user survey

2015-09-28 Thread gord chung

hi Tim,

it's not our intention to over-survey the community -- i apologise if 
this is your takeaway. the user survey that just finished aimed to 
gather information regarding general OpenStack practices (one question 
being assigned to each project).


the idea here is to have a dialogue between users and developers 
specifically regarding Ceilometer. in creating this survey, the goal is to 
gather specific information regarding each component of 
Ceilometer, so we know to 'work on component xyz of Ceilometer' rather 
than 'work on Ceilometer'.


please have a look at the survey at your own convenience and interest -- 
feedback is welcome at any time. this will help the community 
continuously understand what the use cases/gaps are.



On 28/09/2015 12:24 PM, Tim Bell wrote:

There seems to be a lot of overlap with the user survey which has just
finished.

Feel free to get in touch on the user-commit...@lists.openstack.org if you
have questions to suggest to the survey or would like specific queries to be
run on the anonymised data.

There is a significant risk of over surveying the operator community and
then we would lose all the valuable feedback.

Tim


-Original Message-
From: gord chung [mailto:g...@live.ca]
Sent: 28 September 2015 17:18
To: openstack-operators@lists.openstack.org
Subject: [Openstack-operators] [ceilometer] OpenStack Telemetry user
survey

Hello,

The OpenStack Telemetry (aka Ceilometer) team would like to collect
feedback and information from its user base in order to drive future
improvements to the project.  To do so, we have developed a survey. It
should take about 15min to complete.
Questions are fairly technical, so please ensure that you ask someone

within

your organization that is hands on using Ceilometer.

  https://goo.gl/rKNhM1

On behalf of the Ceilometer community, we thank you for the time you will
spend in helping us understand your needs.

--
Gordon Chung
Ceilometer PTL

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


--
gord


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [openstack-operators][osops] First contribution

2015-09-28 Thread JJ Asghar

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512

Hey everyone!

I wanted to point out that we have had our first official contribution
to OSOps today. [1][2] comes from David Wahlstrom, and I want to
personally thank him for this.

For the time being we are just looking for contributions, so if you have
something that can fit please don't hesitate to commit!

Everyone, please take a moment to thank David for getting the
ball rolling!

[1]: https://review.openstack.org/228545
[2]: https://review.openstack.org/228534

- -JJ

- -- 
Best Regards,
JJ Asghar
c: 512.619.0722 t: @jjasghar irc: j^2
-BEGIN PGP SIGNATURE-
Version: GnuPG/MacGPG2 v2
Comment: GPGTools - https://gpgtools.org

iQIcBAEBCgAGBQJWCYe2AAoJEDZbxzMH0+jTKwsP/1W33yZ9vzn7mzdH6029Es8U
6jcPlikyboQBAedVeU9k9vzO+Mh5fPXvuk88dkX40+07Rok2VN6gS1/q9Lqr85h9
/xH3DzHoMF3bIoo3d54ULUzjfq08b6BrjBSGKd3rd13G4Dr81YAKwqCMHI/PPTS/
bJHXUsjFvZOanicPt1ndw5tdQcd2Rm+kZeiWl4xY/zFDreAXTYtUqAlVPXy+ueLk
LHg7DLMzA5Sztx0bW7NvmqN3ZftVuyXfSz53xtsXqh7CBgVrTXd7vT1luCI3Ytea
ABymELKbW+fYlKZ4WURTFvMNOqtGg6YLS8vA3UuMA1MdUAXUVzO+xx46D/XBEdcY
WPjmRDjYinJpJmeNepkj2rUJ39joNILdv70rt2vkf1uvzr0pGaZwmVydozy9EZ9T
isGTJXWBwakrEn7tkt3VUBTofnOREtdeRA6aXdK5Y40suKzrexfxPRu5hHF/Bsei
Khzrn1SX6b1yVzpnCJA/V3LU341P1CbDCKS/ljhHS4PJpS/pNCfQcEJ/00pywe8L
Y9t+w9GFmDC/2zHD1N3ak2eVmZ6nfrzlW/P0oi/l3itsuo4A90o/Yh52TsW6gn2q
Xo5d7SUILhl9JxfdyY63hFvCbPqlHGTelVhcZNpwUwbo2ri0xE1w19C/nODtlntA
Gpu6jIZPtsbi7bAYdB/+
=J+RD
-END PGP SIGNATURE-


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [ceilometer] OpenStack Telemetry user survey

2015-09-28 Thread Tim Bell
I have started a thread on user-commit...@lists.openstack.org so we can try
to find the right balance between ensuring the surveys are completed and
meeting the needs for more detailed information for the projects.

Tim

> -Original Message-
> From: gord chung [mailto:g...@live.ca]
> Sent: 28 September 2015 20:31
> To: Tim Bell ; openstack-operators@lists.openstack.org
> Subject: Re: [Openstack-operators] [ceilometer] OpenStack Telemetry user
> survey
> 
> hi Tim,
> 
> it's not our intention to over-survey the community -- i apologise if this
is
> your takeaway. the user survey that just finished was aimed to gather
> information regarding general OpenStack practices (one question being
> assigned to each project).
> 
> the idea here is to have a dialogue between user and developers
specifically
> regarding Ceilometer. creating this survey, the goal is to gather specific
> information regarding each of the components of Ceilometer so we know to
> 'work on component xyz of Ceilometer' rather than 'work on Ceilometer'.
> 
> please have a look at the survey at your own convenience and interest --
> feedback is welcomed at any time. this will help the community
continuously
> know what the use cases/gaps are.
> 
> 
> On 28/09/2015 12:24 PM, Tim Bell wrote:
> > There seems to be a lot of overlap with the user survey which has just
> > finished.
> >
> > Feel free to get in touch on the user-commit...@lists.openstack.org if
> > you have questions to suggest to the survey or would like specific
> > queries to be run on the anonymised data.
> >
> > There is a significant risk of over surveying the operator community
> > and then we would lose all the valuable feedback.
> >
> > Tim
> >
> >> -Original Message-
> >> From: gord chung [mailto:g...@live.ca]
> >> Sent: 28 September 2015 17:18
> >> To: openstack-operators@lists.openstack.org
> >> Subject: [Openstack-operators] [ceilometer] OpenStack Telemetry user
> >> survey
> >>
> >> Hello,
> >>
> >> The OpenStack Telemetry (aka Ceilometer) team would like to collect
> >> feedback and information from its user base in order to drive future
> >> improvements to the project.  To do so, we have developed a survey.
> >> It should take about 15min to complete.
> >> Questions are fairly technical, so please ensure that you ask someone
> > within
> >> your organization that is hands on using Ceilometer.
> >>
> >>   https://goo.gl/rKNhM1
> >>
> >> On behalf of the Ceilometer community, we thank you for the time you
> >> will spend in helping us understand your needs.
> >>
> >> --
> >> Gordon Chung
> >> Ceilometer PTL
> >>
> >> ___
> >> OpenStack-operators mailing list
> >> OpenStack-operators@lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operato
> >> rs
> 
> --
> gord



smime.p7s
Description: S/MIME cryptographic signature
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] "Master" keystone and "sub" keystone

2015-09-28 Thread Adam Young

On 09/26/2015 11:19 PM, RunnerCheng wrote:

Hi All,
I'm a newbie of keystone, and I'm doing some research about it 
recently. I have a question about how to deploy it. The scenario is on 
below:


One comany has one headquarter dc and 5 sub dc locate in different 
cities. We want to deploy separate OpenStack with "sub" keystone at 
the sub dc, and want to deploy one "master" keystone at headquarter 
dc. We want to manage all users, roles and tenants etc on the "master" 
keystone, however we want the end-user can authenticate with the "sub" 
keystone where he or she is locate.



Use LDAP for the users, don't keep them in Keystone.

Replicate roles, projects etc from master to sub.

Use Fernet tokens.  Replicate revocation events both ways.




Is anyone understant this scenario? How to realize it without 
additionaly development?


Thanks in advance!

Sam Cheng


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
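
A rough keystone.conf sketch of the combination Adam describes -- LDAP-backed
identity, SQL-backed assignments replicated from master to sub, and Fernet
tokens. The hostnames and DNs are placeholders, and the short driver names are
the Liberty-style spellings (Kilo expects the full class paths):

    [identity]
    driver = ldap                  # users/groups come from the corporate directory

    [ldap]
    url = ldap://ldap.example.com
    user_tree_dn = ou=Users,dc=example,dc=com
    user_objectclass = inetOrgPerson

    [assignment]
    driver = sql                   # roles/projects stay in SQL, replicated master -> sub

    [token]
    provider = fernet              # no token persistence to replicate at all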


Re: [Openstack-operators] "Master" keystone and "sub" keystone

2015-09-28 Thread Jonathan Proulx
On Mon, Sep 28, 2015 at 03:31:54PM -0400, Adam Young wrote:
:On 09/26/2015 11:19 PM, RunnerCheng wrote:
:>Hi All,
:>I'm a newbie of keystone, and I'm doing some research about it
:>recently. I have a question about how to deploy it. The scenario is
:>on below:
:>
:>One comany has one headquarter dc and 5 sub dc locate in different
:>cities. We want to deploy separate OpenStack with "sub" keystone at
:>the sub dc, and want to deploy one "master" keystone at headquarter
:>dc. We want to manage all users, roles and tenants etc on the
:>"master" keystone, however we want the end-user can authenticate
:>with the "sub" keystone where he or she is locate.
:
:
:Use LDAP for the users, don't keep them in Keystone.
:
:Replicate roles, projects etc from master to sub.
:
:Use Fernet tokens.  Replicate revocation events both ways.

I'm hearing conflicting advice about the suitability of Fernet tokens
for production use.

I like the idea. I did get them to work in Kilo trivially for the CLI, but
Horizon was unhappy for reasons I didn't fully investigate; as I heard
they 'weren't quite ready in Kilo', I deferred further investigation
to the next cycle.

Though honestly, if you're building something new right now, starting
with Liberty is probably the right thing anyway; by the time you're
done with the PoC it will be released.

-Jon

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
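
For what it's worth, the operational piece of Fernet that tends to matter in a
multi-site setup is the key repository rather than the tokens themselves. A
sketch of the usual flow, with placeholder hostnames; the rotated keys have to
reach every keystone node (and every site) before a new key starts signing
tokens:

    # once, on one node
    keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone

    # periodic rotation, again on a single node (e.g. from cron)
    keystone-manage fernet_rotate --keystone-user keystone --keystone-group keystone

    # then push the whole key repository to the other nodes/sites
    rsync -a --delete /etc/keystone/fernet-keys/ sub-keystone1:/etc/keystone/fernet-keys/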


Re: [Openstack-operators] "Master" keystone and "sub" keystone

2015-09-28 Thread Matt Fischer
On Mon, Sep 28, 2015 at 1:46 PM, Jonathan Proulx  wrote:

> On Mon, Sep 28, 2015 at 03:31:54PM -0400, Adam Young wrote:
> :On 09/26/2015 11:19 PM, RunnerCheng wrote:
> :>Hi All,
> :>I'm a newbie of keystone, and I'm doing some research about it
> :>recently. I have a question about how to deploy it. The scenario is
> :>on below:
> :>
> :>One comany has one headquarter dc and 5 sub dc locate in different
> :>cities. We want to deploy separate OpenStack with "sub" keystone at
> :>the sub dc, and want to deploy one "master" keystone at headquarter
> :>dc. We want to manage all users, roles and tenants etc on the
> :>"master" keystone, however we want the end-user can authenticate
> :>with the "sub" keystone where he or she is locate.
> :
> :
> :Use LDAP for the users, don't keep them in Keystone.
> :
> :Replicate roles, projects etc from master to sub.
> :
> :Use Fernet tokens.  Replicate revocation events both ways.
>
> I'm hearing conflicting advice about the suitibility of Fernet tokens
> for production use.
>
> I like the idea. I did get them to work in kilo trivially for CLI, but
> Horizon was unhappy for reasons I didn't fully investigate as I heard
> they 'weren't quite ready in kilo' so I defered further investigation
> to next cycle.
>
> Though honestly if you're building somthing new right now starting
> with Liberty is probably the right thing anyway by the time you're
> done PoC it will be released.
>
> -Jon


We're using them in prod and are generally happy, except with validation
performance. For Horizon we run off of master anyway, but you have to pull
in some code from Liberty or it won't work.
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] "Master" keystone and "sub" keystone

2015-09-28 Thread Jonathan Proulx
On Mon, Sep 28, 2015 at 02:06:00PM -0600, Matt Fischer wrote:
:On Mon, Sep 28, 2015 at 1:46 PM, Jonathan Proulx  wrote:

:> I'm hearing conflicting advice about the suitibility of Fernet tokens
:> for production use.
:>
:> I like the idea. I did get them to work in kilo trivially for CLI, but
:> Horizon was unhappy for reasons I didn't fully investigate as I heard
:> they 'weren't quite ready in kilo' so I defered further investigation
:> to next cycle.
:>
:> Though honestly if you're building somthing new right now starting
:> with Liberty is probably the right thing anyway by the time you're
:> done PoC it will be released.
:>
:> -Jon
:
:
:We're using them in prod, generally happy except with Validation
:performance. For Horizon we run off of master anyway but you have to pull
:in some code from Liberty or it won't work.

Thanks, that's actually very good to know.  I have a recent master
version of Horizon that I hope to move to production soon, so if the
problems are all on the Horizon side I may make the move to Fernet in
production sooner rather than later.

-Jon

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Kilo NFV Performance Optimization advices on compute nodes

2015-09-28 Thread Pedro Sousa
Hi all,

I'm trying to deploy some NFV apps on my OpenStack deployment; however, I'm
having performance issues in my VMs, which start to lose UDP packets above
a specific packet transmission rate.

 Here's what I've tried and found so far:

- VMs: CentOS 7.1 with 10GbE Neutron SR-IOV NICs
- Configured memory hugepages:
http://redhatstackblog.redhat.com/tag/huge-pages/
- Configured CPU pinning and NUMA topology
- Increased network buffer memory in the kernel
- Running "egrep ens6 /proc/interrupts" I see that network interrupts are not
balanced evenly across CPU cores inside my guest; they always hit the same
CPU.

Concerning this last issue, does anybody have good advice on how to
tackle it? How can I spread the network load across the vCPUs inside the
guest, or am I looking in the wrong direction?

Some pointers would be appreciated.

Regards,
Pedro Sousa
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
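
On the interrupt question specifically: each VF queue has an IRQ, and whichever
CPU its affinity mask points at does all of that queue's receive work, so
inside the guest you can either move the IRQs or let RPS fan the receive
processing out in software. A rough sketch; the IRQ number and CPU masks below
are examples only and should be taken from your own /proc/interrupts output:

    # inside the guest: find the VF's IRQ line(s)
    egrep ens6 /proc/interrupts

    # pin IRQ 35 (example) to vCPU 2 -- smp_affinity takes a hex CPU bitmask
    echo 4 > /proc/irq/35/smp_affinity

    # and/or spread receive processing across vCPUs 1-3 with RPS
    echo e > /sys/class/net/ens6/queues/rx-0/rps_cpus
    echo 4096 > /sys/class/net/ens6/queues/rx-0/rps_flow_cnt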


Re: [Openstack-operators] Multi-site Keystone & Galera

2015-09-28 Thread Curtis
OK thanks for the input everyone, much appreciated.

Thanks,
Curtis.

On Mon, Sep 28, 2015 at 12:06 PM, Jay Pipes  wrote:
> On 09/28/2015 12:51 PM, Matt Fischer wrote:
>>
>> Yes. We have a separate DB cluster for global stuff like Keystone &
>> Designate, and a regional cluster for things like nova/neutron etc.
>
>
> Yep, this ^
>
> -jay
>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators



-- 
Twitter: @serverascode
Blog: serverascode.com

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-operators][osops] First contribution

2015-09-28 Thread David Wahlstrom
Thanks for the warm invite.  Hopefully my experience at
DreamHost/DreamCompute will prove useful to this team!

On Mon, Sep 28, 2015 at 11:32 AM, JJ Asghar  wrote:

>
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA512
>
> Hey everyone!
>
> I wanted to point out that we have had our first official contribution
> to OSOps today. [1][2] comes from David Wahlstrom, and I want to
> personally thank him for this.
>
> For the time being we are just looking for contributions, so if you have
> something that can fit please don't hesitate to commit!
>
> Everyone please take a moment for David to thank him for getting the
> ball rolling!
>
> [1]: https://review.openstack.org/228545
> [2]: https://review.openstack.org/228534
>
> - -JJ
>
> - --
> Best Regards,
> JJ Asghar
> c: 512.619.0722 t: @jjasghar irc: j^2
> -BEGIN PGP SIGNATURE-
> Version: GnuPG/MacGPG2 v2
> Comment: GPGTools - https://gpgtools.org
>
> iQIcBAEBCgAGBQJWCYe2AAoJEDZbxzMH0+jTKwsP/1W33yZ9vzn7mzdH6029Es8U
> 6jcPlikyboQBAedVeU9k9vzO+Mh5fPXvuk88dkX40+07Rok2VN6gS1/q9Lqr85h9
> /xH3DzHoMF3bIoo3d54ULUzjfq08b6BrjBSGKd3rd13G4Dr81YAKwqCMHI/PPTS/
> bJHXUsjFvZOanicPt1ndw5tdQcd2Rm+kZeiWl4xY/zFDreAXTYtUqAlVPXy+ueLk
> LHg7DLMzA5Sztx0bW7NvmqN3ZftVuyXfSz53xtsXqh7CBgVrTXd7vT1luCI3Ytea
> ABymELKbW+fYlKZ4WURTFvMNOqtGg6YLS8vA3UuMA1MdUAXUVzO+xx46D/XBEdcY
> WPjmRDjYinJpJmeNepkj2rUJ39joNILdv70rt2vkf1uvzr0pGaZwmVydozy9EZ9T
> isGTJXWBwakrEn7tkt3VUBTofnOREtdeRA6aXdK5Y40suKzrexfxPRu5hHF/Bsei
> Khzrn1SX6b1yVzpnCJA/V3LU341P1CbDCKS/ljhHS4PJpS/pNCfQcEJ/00pywe8L
> Y9t+w9GFmDC/2zHD1N3ak2eVmZ6nfrzlW/P0oi/l3itsuo4A90o/Yh52TsW6gn2q
> Xo5d7SUILhl9JxfdyY63hFvCbPqlHGTelVhcZNpwUwbo2ri0xE1w19C/nODtlntA
> Gpu6jIZPtsbi7bAYdB/+
> =J+RD
> -END PGP SIGNATURE-
>
>


-- 
David W.
Unix, because every barista in Seattle has an MCSE.
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Live snapshots on the raw disks never ends

2015-09-28 Thread David Wahlstrom
George,

What storage backend are you using (Gluster/Ceph/local disk/etc.)?  Some of
the distributed backend drivers have bugs in them or mask the real issue
(such as watchers on objects).

On Thu, Sep 24, 2015 at 8:11 AM, Kris G. Lindgren 
wrote:

> I believe I was talking to Josh Harlow (he's harlowja in
> #openstack-operators on freenode) from Yahoo, about something like this the
> other day.  He was saying that recently on a few hypervisors they would
> randomly run into HV disks that were completely full due to snapshots.  I
> have not personally ran into this, so I can't be of more help.
>
> ___
> Kris Lindgren
> Senior Linux Systems Engineer
> GoDaddy
>
>
>
>
>
> On 9/24/15, 7:02 AM, "George Shuklin"  wrote:
>
> >Hello everyone.
> >
> >Is someone ever saw 'endless snapshot' problem? Some instances (with raw
> >disks and live snapshoting enabled) are stuck at image_uploading forever.
> >
> >It looks like this:
> >
>
> >+--+--+
> >| Property | Value
> |
>
> >+--+--+
> >| status   | ACTIVE
>  |
> >| updated  | 2015-07-16T08:07:00Z
>  |
> >| OS-EXT-STS:task_state| image_uploading
> |
> >| OS-EXT-SRV-ATTR:host | compute
> |
> >| key_name | ses
> |
> >| image| Ubuntu 14.04
> (3736af94-b25e-4b8d-96fd-fd5949bbd81e)  |
> >| OS-EXT-STS:vm_state  | active
>  |
> >| OS-EXT-SRV-ATTR:instance_name| instance-000d
> |
> >| OS-SRV-USG:launched_at   | 2015-05-09T17:28:09.00
>  |
> >| OS-EXT-SRV-ATTR:hypervisor_hostname  | compute.lab.internal
>  |
> >| flavor   | flavor2 (2)
> |
> >| id   |
> f2365fe4-9b30-4c24-b7b9-f7fcb4165160 |
> >| security_groups  | [{u'name': u'default'}]
> |
> >| OS-SRV-USG:terminated_at | None
>  |
> >| user_id  |
> 61096c639d674e4cb8bf487cec01432a |
> >| name | non-test
>  |
> >| created  | 2015-05-09T17:27:48Z
>  |
> >...etc
> >
> >Any ideas why this happens? All logs are clear, no errors or anything.
> >And it happens at random so no 'debug' log available...
> >
> >___
> >OpenStack-operators mailing list
> >OpenStack-operators@lists.openstack.org
> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>



-- 
David W.
Unix, because every barista in Seattle has an MCSE.
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
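
A rough triage sequence (not a fix) for an instance stuck in image_uploading is
to check whether the libvirt block job and the glance upload are actually still
running before touching the instance. The domain name and UUID below come from
George's output; the disk name is a guess:

    # on the compute node: is the live block job still making progress?
    virsh blockjob instance-000d vda --info

    # is there an image stuck in 'queued' or 'saving' on the glance side?
    glance image-list | grep -Ei 'queued|saving'

    # only if both are genuinely dead: clear the stuck task_state
    nova reset-state --active f2365fe4-9b30-4c24-b7b9-f7fcb4165160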


[Openstack-operators] [openstack-dev] [cinder] [all] The future of Cinder API v1

2015-09-28 Thread Ivan Kolodyazhny
Hi all,

As you may know, we've got two APIs in Cinder: v1 and v2. The Cinder v2 API was
introduced in Grizzly, and the v1 API has been deprecated since Juno.

After [1] is merged, the Cinder v1 API is disabled in the gates by default. We've
also filed a bug [2] to remove the Cinder v1 API entirely.


According to the deprecation policy [3], it looks like we are OK to remove it. But
I would like to ask Cinder API users whether any of them still use API v1.
Should we remove it entirely in the Mitaka release, or just disable it by default in
cinder.conf?

AFAIR, only Rally doesn't support API v2 now and I'm going to implement it
asap.

[1] https://review.openstack.org/194726
[2] https://bugs.launchpad.net/cinder/+bug/1467589
[3]
http://lists.openstack.org/pipermail/openstack-dev/2015-September/073576.html

Regards,
Ivan Kolodyazhny
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
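
For operators who want to stage this ahead of any removal, the toggles already
exist in cinder.conf (the v1 "volume" endpoint can be dropped from the keystone
catalog separately). A minimal sketch:

    [DEFAULT]
    enable_v1_api = false
    enable_v2_api = true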


Re: [Openstack-operators] [ceilometer] OpenStack Telemetry user survey

2015-09-28 Thread Kruithof, Piet
Hi Tim,

Thanks for addressing some of the potential challenges around conducting
surveys on behalf of the community.

The research conducted by the OpenStack UX project, including surveys,
tends to focus on specific project needs rather than the overall direction
of the industry, which is captured by the bi-annual survey. The project
surveys generally require a fair amount of detail in order to enable the
project teams to make decisions around product direction. As a result,
adding a few questions to the foundation's survey doesn't generate the
level of detail typically needed by each project. Also, any questions
added to the bi-annual survey are at the discretion of the user committee,
which is a concern for me.

In addition, waiting for a survey every six months would not allow us to
be responsive to the project research needs.

I agree with your concern about population fatigue, and we would prefer to
conduct studies with a more focused sample rather than the overall
OpenStack community. For example, we may specifically focus on network
admins during one survey while focusing on other roles during another
study. In those cases, users/operators
have generally been willing to participate because the results should have
a tangible impact on their daily activities.

One recommendation would be to distribute a screener to the overall
community to identify the specific skills and focus of its members. The
user committee and project teams could use the database created from the
screener to identify potential participants for the various research
activities. The goal would be to be more focused on how we recruit
participants rather than rolling out to the entire community. It would
also allow us to track and limit how often members are being invited to
participate in studies, to avoid population fatigue. The screener would
also allow respondents to opt out of being recruited for research
activities.

We've asked in the past, but the user committee has not been able to
provide anonymized data because of a policy within the foundation that
limits access to data to a handful of users. I don't dispute the need for
the policy, but summary statistics aren't helpful for conducting
statistical analysis. More recently, we've asked for the raw data from the
operator job analysis survey because of its value in helping to drive
persona development, but have yet to hear back from the foundation.

Piet




Piet Kruithof
Sr UX Architect, HP Helion Cloud
PTL, OpenStack UX project


"For every complex problem, there is a solution that is simple, neat and
wrong.²

H L Menken





On 9/28/15, 12:42 PM, "Tim Bell"  wrote:

>I have started a thread on user-commit...@lists.openstack.org so we can
>try
>to find the right balance between ensuring the surveys are completed and
>meeting the needs for more detailed information for the projects.
>
>Tim
>
>> -Original Message-
>> From: gord chung [mailto:g...@live.ca]
>> Sent: 28 September 2015 20:31
>> To: Tim Bell ; openstack-operators@lists.openstack.org
>> Subject: Re: [Openstack-operators] [ceilometer] OpenStack Telemetry user
>> survey
>> 
>> hi Tim,
>> 
>> it's not our intention to over-survey the community -- i apologise if
>>this
>is
>> your takeaway. the user survey that just finished was aimed to gather
>> information regarding general OpenStack practices (one question being
>> assigned to each project).
>> 
>> the idea here is to have a dialogue between user and developers
>specifically
>> regarding Ceilometer. creating this survey, the goal is to gather
>>specific
>> information regarding each of the components of Ceilometer so we know to
>> 'work on component xyz of Ceilometer' rather than 'work on Ceilometer'.
>> 
>> please have a look at the survey at your own convenience and interest --
>> feedback is welcomed at any time. this will help the community
>continuously
>> know what the use cases/gaps are.
>> 
>> 
>> On 28/09/2015 12:24 PM, Tim Bell wrote:
>> > There seems to be a lot of overlap with the user survey which has just
>> > finished.
>> >
>> > Feel free to get in touch on the user-commit...@lists.openstack.org if
>> > you have questions to suggest to the survey or would like specific
>> > queries to be run on the anonymised data.
>> >
>> > There is a significant risk of over surveying the operator community
>> > and then we would lose all the valuable feedback.
>> >
>> > Tim
>> >
>> >> -Original Message-
>> >> From: gord chung [mailto:g...@live.ca]
>> >> Sent: 28 September 2015 17:18
>> >> To: openstack-operators@lists.openstack.org
>> >> Subject: [Openstack-operators] [ceilometer] OpenStack Telemetry user
>> >> survey
>> >>
>> >> Hello,
>> >>
>> >> The OpenStack Telemetry (aka Ceilometer) team would like to collect
>> >> feedback and information from its user base in order to drive future
>> >> improvements to the project.  To do so, we have developed a survey.
>> >> It should take about 15min to complete.
>> >> Questions are fairly technical, so please ensure t

Re: [Openstack-operators] [openstack-dev] [cinder] [all] The future of Cinder API v1

2015-09-28 Thread Matt Fischer
Yes, people are probably still using it. Last time I tried to use V2 it
didn't work because the clients were broken, and then it went back on the
bottom of my to do list. Is this mess fixed?

http://lists.openstack.org/pipermail/openstack-operators/2015-February/006366.html

On Mon, Sep 28, 2015 at 4:25 PM, Ivan Kolodyazhny  wrote:

> Hi all,
>
> As you may know, we've got 2 APIs in Cinder: v1 and v2. Cinder v2 API was
> introduced in Grizzly and v1 API is deprecated since Juno.
>
> After [1] is merged, Cinder API v1 is disabled in gates by default. We've
> got a filed bug [2] to remove Cinder v1 API at all.
>
>
> According to Deprecation Policy [3] looks like we are OK to remote it. But
> I would like to ask Cinder API users if any still use API v1.
> Should we remove it at all Mitaka release or just disable by default in
> the cinder.conf?
>
> AFAIR, only Rally doesn't support API v2 now and I'm going to implement it
> asap.
>
> [1] https://review.openstack.org/194726
> [2] https://bugs.launchpad.net/cinder/+bug/1467589
> [3]
> http://lists.openstack.org/pipermail/openstack-dev/2015-September/073576.html
>
> Regards,
> Ivan Kolodyazhny
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] [cinder] [all] The future of Cinder API v1

2015-09-28 Thread Sam Morrison
Yeah, we're still using v1, as the clients that are packaged with most distros 
don't support v2 easily.

E.g. Ubuntu Trusty ships version 1.1.1; I just updated our “volume” 
endpoint to point to v2 (we have a volumev2 endpoint too) and the client breaks.

$ cinder list
ERROR: OpenStack Block Storage API version is set to 1 but you are accessing a 
2 endpoint. Change its value through --os-volume-api-version or 
env[OS_VOLUME_API_VERSION].

Sam


> On 29 Sep 2015, at 8:34 am, Matt Fischer  wrote:
> 
> Yes, people are probably still using it. Last time I tried to use V2 it 
> didn't work because the clients were broken, and then it went back on the 
> bottom of my to do list. Is this mess fixed?
> 
> http://lists.openstack.org/pipermail/openstack-operators/2015-February/006366.html
>  
> 
> 
On Mon, Sep 28, 2015 at 4:25 PM, Ivan Kolodyazhny  wrote:
> Hi all,
> 
> As you may know, we've got 2 APIs in Cinder: v1 and v2. Cinder v2 API was 
> introduced in Grizzly and v1 API is deprecated since Juno.
> 
> After [1] is merged, Cinder API v1 is disabled in gates by default. We've got 
> a filed bug [2] to remove Cinder v1 API at all.
> 
> 
> According to Deprecation Policy [3] looks like we are OK to remote it. But I 
> would like to ask Cinder API users if any still use API v1.
> Should we remove it at all Mitaka release or just disable by default in the 
> cinder.conf?
> 
> AFAIR, only Rally doesn't support API v2 now and I'm going to implement it 
> asap.
> 
> [1] https://review.openstack.org/194726  
> [2] https://bugs.launchpad.net/cinder/+bug/1467589 
> 
> [3] 
> http://lists.openstack.org/pipermail/openstack-dev/2015-September/073576.html 
> 
> Regards,
> Ivan Kolodyazhny
> 
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org 
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators 
> 
> 
> 
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org 
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators 
> 
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
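
With a new enough python-cinderclient, the usual workaround is to leave the
"volume" endpoint pointing at v1 and steer the client at the volumev2 endpoint
explicitly. Whether the 1.1.1 client shipped in Trusty honours this is exactly
the problem being described, so treat it as the intended behaviour rather than
a guaranteed fix:

    # per shell
    export OS_VOLUME_API_VERSION=2
    cinder --service-type volumev2 list

    # or per invocation
    cinder --os-volume-api-version 2 --service-type volumev2 list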


Re: [Openstack-operators] Live snapshots on the raw disks never ends

2015-09-28 Thread Joshua Harlow

Ours was local disk.

I believe https://review.openstack.org/#/c/208078/ (and/or its followup 
bug/fix) will hopefully help address this. It might not be the same 
issue though (but maybe it is).


-Josh

Kris G. Lindgren wrote:

I believe I was talking to Josh Harlow (he's harlowja in #openstack-operators 
on freenode) from Yahoo, about something like this the other day.  He was 
saying that recently on a few hypervisors they would randomly run into HV disks 
that were completely full due to snapshots.  I have not personally ran into 
this, so I can't be of more help.

___
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy





On 9/24/15, 7:02 AM, "George Shuklin"  wrote:


Hello everyone.

Is someone ever saw 'endless snapshot' problem? Some instances (with raw
disks and live snapshoting enabled) are stuck at image_uploading forever.

It looks like this:

+--+--+
| Property | Value  
  |
+--+--+
| status   | ACTIVE 
  |
| updated  | 2015-07-16T08:07:00Z   
  |
| OS-EXT-STS:task_state| image_uploading
  |
| OS-EXT-SRV-ATTR:host | compute
  |
| key_name | ses
  |
| image| Ubuntu 14.04 
(3736af94-b25e-4b8d-96fd-fd5949bbd81e)  |
| OS-EXT-STS:vm_state  | active 
  |
| OS-EXT-SRV-ATTR:instance_name| instance-000d  
  |
| OS-SRV-USG:launched_at   | 2015-05-09T17:28:09.00 
  |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | compute.lab.internal   
  |
| flavor   | flavor2 (2)
  |
| id   | f2365fe4-9b30-4c24-b7b9-f7fcb4165160   
  |
| security_groups  | [{u'name': u'default'}]
  |
| OS-SRV-USG:terminated_at | None   
  |
| user_id  | 61096c639d674e4cb8bf487cec01432a   
  |
| name | non-test   
  |
| created  | 2015-05-09T17:27:48Z   
  |
...etc

Any ideas why this happens? All logs are clear, no errors or anything.
And it happens at random so no 'debug' log available...

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] [cinder] [all] The future of Cinder API v1

2015-09-28 Thread Mark Voelker
FWIW, the most popular client libraries in the last user survey[1] other than 
OpenStack’s own clients were: libcloud (48 respondents), jClouds (36 
respondents), Fog (34 respondents), php-opencloud (21 respondents), DeltaCloud 
(which has been retired by Apache and hasn’t seen a commit in two years, but 17 
respondents are still using it), pkgcloud (15 respondents), and OpenStack.NET 
(14 respondents).  Of those:

* libcloud appears to support the nova-volume API but not the cinder API: 
https://github.com/apache/libcloud/blob/trunk/libcloud/compute/drivers/openstack.py#L251

* jClouds appears to support only the v1 API: 
https://github.com/jclouds/jclouds/tree/jclouds-1.9.1/apis/openstack-cinder/src/main/java/org/jclouds

* Fog also appears to only support the v1 API: 
https://github.com/fog/fog/blob/master/lib/fog/openstack/volume.rb#L99

* php-opencloud appears to only support the v1 API: 
https://php-opencloud.readthedocs.org/en/latest/services/volume/index.html

* DeltaCloud I honestly haven’t looked at since it’s thoroughly dead, but I 
can’t imagine it supports v2.

* pkgcloud has beta-level support for Cinder but I think it’s v1 (may be 
mistaken): https://github.com/pkgcloud/pkgcloud/#block-storagebeta and 
https://github.com/pkgcloud/pkgcloud/tree/master/lib/pkgcloud/openstack/blockstorage

* OpenStack.NET does appear to support v2: 
http://www.openstacknetsdk.org/docs/html/T_net_openstack_Core_Providers_IBlockStorageProvider.htm

Now, it’s anyone’s guess as to whether or not users of those client libraries 
actually try to use them for volume operations or not (anecdotally I know a few 
clouds I help support are using client libraries that only support v1), and 
some users might well be using more than one library or mixing in code they 
wrote themselves.  But most of the above that support cinder do seem to rely on 
v1.  Some management tools also appear to still rely on the v1 API (such as 
RightScale: 
http://docs.rightscale.com/clouds/openstack/openstack_config_prereqs.html ).  
From that perspective it might be useful to keep it around a while longer and 
disable it by default.  Personally I’d probably lean that way, especially given 
that folks here on the ops list are still reporting problems too.

That said, v1 has been deprecated since Juno, and the Juno release notes said 
it was going to be removed [2], so there’s a case to be made that there’s been 
plenty of fair warning too I suppose.

[1] 
http://superuser.openstack.org/articles/openstack-application-developers-share-insights
[2] https://wiki.openstack.org/wiki/ReleaseNotes/Juno#Upgrade_Notes_7

At Your Service,

Mark T. Voelker



> On Sep 28, 2015, at 7:17 PM, Sam Morrison  wrote:
> 
> Yeah we’re still using v1 as the clients that are packaged with most distros 
> don’t support v2 easily.
> 
> Eg. with Ubuntu Trusty they have version 1.1.1, I just updated our “volume” 
> endpoint to point to v2 (we have a volumev2 endpoint too) and the client 
> breaks.
> 
> $ cinder list
> ERROR: OpenStack Block Storage API version is set to 1 but you are accessing 
> a 2 endpoint. Change its value through --os-volume-api-version or 
> env[OS_VOLUME_API_VERSION].
> 
> Sam
> 
> 
>> On 29 Sep 2015, at 8:34 am, Matt Fischer  wrote:
>> 
>> Yes, people are probably still using it. Last time I tried to use V2 it 
>> didn't work because the clients were broken, and then it went back on the 
>> bottom of my to do list. Is this mess fixed?
>> 
>> http://lists.openstack.org/pipermail/openstack-operators/2015-February/006366.html
>> 
>> On Mon, Sep 28, 2015 at 4:25 PM, Ivan Kolodyazhny  wrote:
>> Hi all,
>> 
>> As you may know, we've got 2 APIs in Cinder: v1 and v2. The Cinder v2 API was 
>> introduced in Grizzly, and the v1 API has been deprecated since Juno.
>> 
>> After [1] merged, Cinder API v1 is disabled in the gates by default. We've 
>> also got a bug filed [2] to remove the Cinder v1 API entirely.
>> 
>> 
>> According to the deprecation policy [3], it looks like we are OK to remove it. But I 
>> would like to ask Cinder API users whether anyone still uses API v1.
>> Should we remove it entirely in the Mitaka release, or just disable it by default in 
>> cinder.conf?
>> 
>> AFAIR, only Rally doesn't support API v2 now and I'm going to implement it 
>> asap.
>> 
>> [1] https://review.openstack.org/194726 
>> [2] https://bugs.launchpad.net/cinder/+bug/1467589
>> [3] 
>> http://lists.openstack.org/pipermail/openstack-dev/2015-September/073576.html
>> 
>> Regards,
>> Ivan Kolodyazhny
>> 
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>> 
>> 
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)

[Openstack-operators] [neutron] [nova] Nova Network/Neutron Migration Survey - need response from folks currently using Nova Networks in their deployments

2015-09-28 Thread Kruithof, Piet
There has been a significant response to the Nova Network/Neutron migration 
survey.  However, the responses are leaning heavily on the side of deployments 
currently using Neutron.  As a result, we would like to have more 
representation from folks currently using Nova Networks.

If you are currently using Nova Networks, please respond to the survey!

You will also be entered in a raffle for one of two $100 US Amazon gift cards 
at the end of the survey.   As always, the results from the survey will be 
shared with the OpenStack community.

Please click on the following link to begin the survey.

https://www.surveymonkey.com/r/osnetworking


Piet Kruithof
PTL, OpenStack UX project

"For every complex problem, there is a solution that is simple, neat and wrong.”

H. L. Mencken


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] "Master" keystone and "sub"

2015-09-28 Thread RunnerCheng
Hi All,
 
Many thanks to all of you for discussing this topic; I believe I picked up several 
important clues from the discussion.
--

Message: 11
Date: Mon, 28 Sep 2015 15:31:54 -0400
From: Adam Young 
To: openstack-operators@lists.openstack.org
Subject: Re: [Openstack-operators] "Master" keystone and "sub"
keystone
Message-ID: <560995aa.5040...@redhat.com>
Content-Type: text/plain; charset="utf-8"; Format="flowed"

On 09/26/2015 11:19 PM, RunnerCheng wrote:
> Hi All,
> I'm a newbie to Keystone, and I've been doing some research on it 
> recently. I have a question about how to deploy it. The scenario is 
> below:
>
> One company has one headquarters DC and 5 sub DCs located in different 
> cities. We want to deploy a separate OpenStack with a "sub" keystone at 
> each sub DC, and we want to deploy one "master" keystone at the headquarters 
> DC. We want to manage all users, roles, tenants, etc. on the "master" 
> keystone; however, we want the end-user to be able to authenticate with the "sub" 
> keystone where he or she is located.


Use LDAP for the users, don't keep them in Keystone.

Replicate roles, projects etc from master to sub.

Use Fernet tokens.  Replicate revocation events both ways.
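
As a rough illustration of what that buys you, here is a sketch (assuming 
keystoneauth1 is installed, the Fernet key repositories are kept in sync, and 
the master's users/projects are replicated to the sub; hostnames and 
credentials below are placeholders) of a token issued by the headquarters 
Keystone being reused against a sub-DC Keystone:

    # Sketch: a Fernet token issued at HQ should validate at a sub-DC Keystone
    # as long as both share Fernet keys and the replicated identity data.
    from keystoneauth1 import session
    from keystoneauth1.identity import v3

    HQ_AUTH_URL = 'https://keystone-hq.example.com:5000/v3'    # placeholder
    SUB_AUTH_URL = 'https://keystone-dc1.example.com:5000/v3'  # placeholder

    hq_auth = v3.Password(auth_url=HQ_AUTH_URL,
                          username='demo', password='secret',
                          user_domain_name='Default',
                          project_name='demo', project_domain_name='Default')
    token = session.Session(auth=hq_auth).get_token()

    # Reuse/rescope the HQ-issued token against the sub-DC endpoint.
    sub_auth = v3.Token(auth_url=SUB_AUTH_URL, token=token,
                        project_name='demo', project_domain_name='Default')
    print(session.Session(auth=sub_auth).get_token())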


>
> Does anyone understand this scenario? How can we realize it without 
> additional development?
>
> Thanks in advance!
>
> Sam Cheng
>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


--

Message: 12
Date: Mon, 28 Sep 2015 15:46:50 -0400
From: Jonathan Proulx 
To: Adam Young 
Cc: openstack-operators@lists.openstack.org
Subject: Re: [Openstack-operators] "Master" keystone and "sub"
keystone
Message-ID: <20150928194650.go24...@csail.mit.edu>
Content-Type: text/plain; charset=us-ascii

On Mon, Sep 28, 2015 at 03:31:54PM -0400, Adam Young wrote:
:On 09/26/2015 11:19 PM, RunnerCheng wrote:
:>Hi All,
:>I'm a newbie to Keystone, and I've been doing some research on it
:>recently. I have a question about how to deploy it. The scenario is
:>below:
:>
:>One company has one headquarters DC and 5 sub DCs located in different
:>cities. We want to deploy a separate OpenStack with a "sub" keystone at
:>each sub DC, and we want to deploy one "master" keystone at the
:>headquarters DC. We want to manage all users, roles, tenants, etc. on the
:>"master" keystone; however, we want the end-user to be able to authenticate
:>with the "sub" keystone where he or she is located.
:
:
:Use LDAP for the users, don't keep them in Keystone.
:
:Replicate roles, projects etc from master to sub.
:
:Use Fernet tokens.  Replicate revocation events both ways.

I'm hearing conflicting advice about the suitability of Fernet tokens
for production use.

I like the idea. I did get them to work in Kilo trivially for the CLI, but
Horizon was unhappy for reasons I didn't fully investigate; as I heard
they 'weren't quite ready in Kilo', I deferred further investigation
to the next cycle.

Though honestly, if you're building something new right now, starting
with Liberty is probably the right thing anyway; by the time your
PoC is done it will be released.

-Jon



--

Message: 13
Date: Mon, 28 Sep 2015 14:06:00 -0600
From: Matt Fischer 
To: Jonathan Proulx 
Cc: openstack-operators@lists.openstack.org
Subject: Re: [Openstack-operators] "Master" keystone and "sub"
keystone
Message-ID:

Content-Type: text/plain; charset="utf-8"

On Mon, Sep 28, 2015 at 1:46 PM, Jonathan Proulx  wrote:

> On Mon, Sep 28, 2015 at 03:31:54PM -0400, Adam Young wrote:
> :On 09/26/2015 11:19 PM, RunnerCheng wrote:
> :>Hi All,
> :>I'm a newbie to Keystone, and I've been doing some research on it
> :>recently. I have a question about how to deploy it. The scenario is
> :>below:
> :>
> :>One company has one headquarters DC and 5 sub DCs located in different
> :>cities. We want to deploy a separate OpenStack with a "sub" keystone at
> :>each sub DC, and we want to deploy one "master" keystone at the
> :>headquarters DC. We want to manage all users, roles, tenants, etc. on the
> :>"master" keystone; however, we want the end-user to be able to authenticate
> :>with the "sub" keystone where he or she is located.
> :
> :
> :Use LDAP for the users, don't keep them in Keystone.

Re: [Openstack-operators] [Large Deployments Team][Tags] Ops Tag for "Scale"

2015-09-28 Thread Shamail Tahir
Thanks for the additional data points on the granularity we might need to
address scale (component level rather than project level), and for the
additional clarification on drivers (whether 'scale' means a resource simply
being available versus needing additional resources).  As mentioned earlier,
deployment options (the types of infrastructure and services, and their
configuration) also play a vital part in the 'scale' of a service.  This is
a great starting point for the conversation.
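
For concreteness, here is a minimal sketch of the kind of deployment choice
Kris describes below (requesting a config drive at boot instead of relying on
DHCP).  It assumes python-novaclient and a guest image that actually applies
the metadata, e.g. via cloud-init; all names, IDs, and credentials are
placeholders:

    # Sketch: boot an instance with a config drive so the guest can read its
    # network configuration from metadata rather than DHCP.  Placeholders only.
    from novaclient import client

    nova = client.Client('2', 'demo', 'secret', 'demo',
                         'http://keystone.example.com:5000/v2.0')

    image = nova.images.find(name='ubuntu-14.04')    # placeholder image name
    flavor = nova.flavors.find(name='m1.small')
    nova.servers.create('config-drive-test', image, flavor,
                        nics=[{'net-id': '11111111-2222-3333-4444-555555555555'}],
                        config_drive=True)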

On Mon, Sep 28, 2015 at 10:03 AM, Jonathan D. Proulx 
wrote:

> On Sun, Sep 27, 2015 at 06:15:58PM +, Kris G. Lindgren wrote:
>
> >For example, we (GoDaddy) use Neutron for networking; however,
> >we do not use tunneling of any type, we do not create virtual
> >routers or private networks, and we do not rely on DHCP (we use
> >config-drive to set instance IP configuration).
>
> This is an important thing to consider.  Even at my <100 node scale I
> don't use neutron L3, so even if a site reports using a project it's
> hard to say what parts they are using or how they modified it to make
> it 'scale', as this could include leaving large pieces that don't
> scale on the floor...
>
> -Jon
>
>
>
>


-- 
Thanks,
Shamail Tahir
t: @ShamailXD
tz: Eastern Time
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators