Re: [Openstack-operators] [octavia][rocky] Octavia and VxLAN without DVR

2018-10-24 Thread Erik McCormick
On Wed, Oct 24, 2018, 3:33 PM Engelmann Florian <
florian.engelm...@everyware.ch> wrote:

> On the network nodes we've got a dedicated interface to deploy VLANs (like
> the provider network for internet access). What about creating another VLAN
> on the network nodes, give that bridge an IP which is part of the subnet of
> lb-mgmt-net and start the octavia worker, healthmanager and controller on
> the network nodes binding to that IP?
>
The problem with that is you can't put an IP on the VLAN interface and
also use it as an OVS bridge, so the Octavia processes would have nothing
to bind to.

>
> --
> *From:* Erik McCormick 
> *Sent:* Wednesday, October 24, 2018 6:18 PM
> *To:* Engelmann Florian
> *Cc:* openstack-operators
> *Subject:* Re: [Openstack-operators] [octavia][rocky] Octavia and VxLAN
> without DVR
>
>
>
> On Wed, Oct 24, 2018, 12:02 PM Florian Engelmann <
> florian.engelm...@everyware.ch> wrote:
>
>> Am 10/24/18 um 2:08 PM schrieb Erik McCormick:
>> >
>> >
>> > On Wed, Oct 24, 2018, 3:14 AM Florian Engelmann
>> > mailto:florian.engelm...@everyware.ch>>
>>
>> > wrote:
>> >
>> > Ohoh - thank you for your empathy :)
>> > And those great details about how to setup this mgmt network.
>> > I will try to do so this afternoon but solving that routing "puzzle"
>> > (virtual network to control nodes) I will need our network guys to
>> help
>> > me out...
>> >
>> > But I will need to tell all Amphorae a static route to the gateway
>> that
>> > is routing to the control nodes?
>> >
>> >
>> > Just set the default gateway when you create the neutron subnet. No
>> need
>> > for excess static routes. The route on the other connection won't
>> > interfere with it as it lives in a namespace.
>>
>>
>> My compute nodes have no br-ex and there is no L2 domain spread over all
>> compute nodes. As far as I understood lb-mgmt-net is a provider network
>> and has to be flat or VLAN and will need a "physical" gateway (as there
>> is no virtual router).
>> So the question - is it possible to get octavia up and running without a
>> br-ex (L2 domain spread over all compute nodes) on the compute nodes?
>>
>
> Sorry, I only meant it was *like* br-ex on your network nodes. You don't
> need that on your computes.
>
> The router here would be whatever does routing in your physical network.
> Setting the gateway in the neutron subnet simply adds that to the DHCP
> information sent to the amphorae.
>
> This does bring up another thing I forgot, though. You'll probably want to
> add the management network / bridge to your network nodes or wherever you
> run the DHCP agents. When you create the subnet, be sure to leave some
> space in the address scope for the physical devices with static IPs.
>
> As for multiple L2 domains, I can't think of a way to go about that for
> the lb-mgmt network. It's a single network with a single subnet. Perhaps
> you could limit load balancers to an AZ in a single rack? Seems not very HA
> friendly.
>
>>
>>
> >
>> >
>> >
>> > Am 10/23/18 um 6:57 PM schrieb Erik McCormick:
>> >  > So in your other email you asked if there was a guide for
>> >  > deploying it with Kolla ansible...
>> >  >
>> >  > Oh boy. No there's not. I don't know if you've seen my recent
>> > mails on
>> >  > Octavia, but I am going through this deployment process with
>> >  > kolla-ansible right now and it is lacking in a few areas.
>> >  >
>> >  > If you plan to use different CA certificates for client and
>> server in
>> >  > Octavia, you'll need to add that into the playbook. Presently it
>> only
>> >  > copies over ca_01.pem, cacert.key, and client.pem and uses them
>> for
>> >  > everything. I was completely unable to make it work with only
>> one CA
>> >  > as I got some SSL errors. It passes gate though, so I assume it
>> must
>> >  > work? I dunno.
>> >  >
>> >  > Networking comments and a really messy kolla-ansible / octavia
>> > how-to below...
>> >  >
>> >  > On Tue, Oct 23, 2018 at 10:09 AM Florian Engelmann
>> >  > > > <mailto:florian.engelm...@everyware.ch>> wrote:
>> >  >>

Re: [Openstack-operators] [octavia][rocky] Octavia and VxLAN without DVR

2018-10-24 Thread Erik McCormick
On Wed, Oct 24, 2018, 12:02 PM Florian Engelmann <
florian.engelm...@everyware.ch> wrote:

> Am 10/24/18 um 2:08 PM schrieb Erik McCormick:
> >
> >
> > On Wed, Oct 24, 2018, 3:14 AM Florian Engelmann
> > mailto:florian.engelm...@everyware.ch>>
>
> > wrote:
> >
> > Ohoh - thank you for your empathy :)
> > And those great details about how to setup this mgmt network.
> > I will try to do so this afternoon but solving that routing "puzzle"
> > (virtual network to control nodes) I will need our network guys to
> help
> > me out...
> >
> > But I will need to tell all Amphorae a static route to the gateway
> that
> > is routing to the control nodes?
> >
> >
> > Just set the default gateway when you create the neutron subnet. No need
> > for excess static routes. The route on the other connection won't
> > interfere with it as it lives in a namespace.
>
>
> My compute nodes have no br-ex and there is no L2 domain spread over all
> compute nodes. As far as I understood lb-mgmt-net is a provider network
> and has to be flat or VLAN and will need a "physical" gateway (as there
> is no virtual router).
> So the question - is it possible to get octavia up and running without a
> br-ex (L2 domain spread over all compute nodes) on the compute nodes?
>

Sorry, I only meant it was *like* br-ex on your network nodes. You don't
need that on your computes.

The router here would be whatever does routing in your physical network.
Setting the gateway in the neutron subnet simply adds that to the DHCP
information sent to the amphorae.

This does bring up another thing I forgot, though. You'll probably want to
add the management network / bridge to your network nodes or wherever you
run the DHCP agents. When you create the subnet, be sure to leave some
space in the address scope for the physical devices with static IPs.

As for multiple L2 domains, I can't think of a way to go about that for the
lb-mgmt network. It's a single network with a single subnet. Perhaps you
could limit load balancers to an AZ in a single rack? Seems not very HA
friendly.

>
>
>
> >
> >
> > Am 10/23/18 um 6:57 PM schrieb Erik McCormick:
> >  > So in your other email you asked if there was a guide for
> >  > deploying it with Kolla ansible...
> >  >
> >  > Oh boy. No there's not. I don't know if you've seen my recent
> > mails on
> >  > Octavia, but I am going through this deployment process with
> >  > kolla-ansible right now and it is lacking in a few areas.
> >  >
> >  > If you plan to use different CA certificates for client and
> server in
> >  > Octavia, you'll need to add that into the playbook. Presently it
> only
> >  > copies over ca_01.pem, cacert.key, and client.pem and uses them
> for
> >  > everything. I was completely unable to make it work with only one
> CA
> >  > as I got some SSL errors. It passes gate though, so I assume it
> must
> >  > work? I dunno.
> >  >
> >  > Networking comments and a really messy kolla-ansible / octavia
> > how-to below...
> >  >
> >  > On Tue, Oct 23, 2018 at 10:09 AM Florian Engelmann
> >  >  > <mailto:florian.engelm...@everyware.ch>> wrote:
> >  >>
> >  >> Am 10/23/18 um 3:20 PM schrieb Erik McCormick:
> >  >>> On Tue, Oct 23, 2018 at 7:53 AM Florian Engelmann
> >  >>>  > <mailto:florian.engelm...@everyware.ch>> wrote:
> >  >>>>
> >  >>>> Hi,
> >  >>>>
> >  >>>> We did test Octavia with Pike (DVR deployment) and everything
> was
> >  >>>> working right out of the box. We changed our underlay network
> to a
> >  >>>> Layer3 spine-leaf network now and did not deploy DVR as we
> > didn't want
> >  >>>> to have that many cables in a rack.
> >  >>>>
> >  >>>> Octavia is not working right now as the lb-mgmt-net does not
> > exist on
> >  >>>> the compute nodes nor does a br-ex.
> >  >>>>
> >  >>>> The control nodes running
> >  >>>>
> >  >>>> octavia_worker
> >  >>>> octavia_housekeeping
> >  >>>> octavia_health_manager
> >  >>>> octavia_api
> >  >>>>
> Amphorae VMs, e.g.

Re: [Openstack-operators] [octavia][rocky] Octavia and VxLAN without DVR

2018-10-24 Thread Erik McCormick
On Wed, Oct 24, 2018, 3:14 AM Florian Engelmann <
florian.engelm...@everyware.ch> wrote:

> Ohoh - thank you for your empathy :)
> And those great details about how to setup this mgmt network.
> I will try to do so this afternoon but solving that routing "puzzle"
> (virtual network to control nodes) I will need our network guys to help
> me out...
>
> But I will need to tell all Amphorae a static route to the gateway that
> is routing to the control nodes?
>

Just set the default gateway when you create the neutron subnet. No need
for excess static routes. The route on the other connection won't interfere
with it as it lives in a namespace.
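
For example (a sketch only; the CIDR and gateway address are placeholders for whatever your physical router actually answers on):

openstack subnet create --network lb-mgmt-net \
  --subnet-range 172.31.0.0/16 \
  --gateway 172.31.0.1 \
  lb-mgmt-subnet

The gateway is then handed to the amphorae over DHCP along with their addresses, so no host routes are needed.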


>
> Am 10/23/18 um 6:57 PM schrieb Erik McCormick:
> >  > So in your other email you asked if there was a guide for
> > deploying it with Kolla ansible...
> >
> > Oh boy. No there's not. I don't know if you've seen my recent mails on
> > Octavia, but I am going through this deployment process with
> > kolla-ansible right now and it is lacking in a few areas.
> >
> > If you plan to use different CA certificates for client and server in
> > Octavia, you'll need to add that into the playbook. Presently it only
> > copies over ca_01.pem, cacert.key, and client.pem and uses them for
> > everything. I was completely unable to make it work with only one CA
> >  > as I got some SSL errors. It passes gate though, so I assume it must
> > work? I dunno.
> >
> > Networking comments and a really messy kolla-ansible / octavia how-to
> below...
> >
> > On Tue, Oct 23, 2018 at 10:09 AM Florian Engelmann
> >  wrote:
> >>
> >> Am 10/23/18 um 3:20 PM schrieb Erik McCormick:
> >>> On Tue, Oct 23, 2018 at 7:53 AM Florian Engelmann
> >>>  wrote:
> >>>>
> >>>> Hi,
> >>>>
> >>>> We did test Octavia with Pike (DVR deployment) and everything was
> >>>> working right out of the box. We changed our underlay network to a
> >>>> Layer3 spine-leaf network now and did not deploy DVR as we didn't
> want
> >>>> to have that many cables in a rack.
> >>>>
> >>>> Octavia is not working right now as the lb-mgmt-net does not exist on
> >>>> the compute nodes nor does a br-ex.
> >>>>
> >>>> The control nodes running
> >>>>
> >>>> octavia_worker
> >>>> octavia_housekeeping
> >>>> octavia_health_manager
> >>>> octavia_api
> >>>>
> Amphorae VMs, e.g.
>
> lb-mgmt-net 172.16.0.0/16 default GW
> >>>> and as far as I understood octavia_worker, octavia_housekeeping and
> >>>> octavia_health_manager have to talk to the amphora instances. But the
> >>>> control nodes are spread over three different leafs. So each control
> >>>> node in a different L2 domain.
> >>>>
> >>>> So the question is how to deploy a lb-mgmt-net network in our setup?
> >>>>
> >>>> - Compute nodes have no "stretched" L2 domain
> >>>> - Control nodes, compute nodes and network nodes are in L3 networks
> like
> >>>> api, storage, ...
> >>>> - Only network nodes are connected to a L2 domain (with a separated
> NIC)
> >>>> providing the "public" network
> >>>>
> >>> You'll need to add a new bridge to your compute nodes and create a
> >>> provider network associated with that bridge. In my setup this is
> >>> simply a flat network tied to a tagged interface. In your case it
> >>> probably makes more sense to make a new VNI and create a vxlan
> >>> provider network. The routing in your switches should handle the rest.
> >>
> >> Ok that's what I try right now. But I don't get how to setup something
> >> like a VxLAN provider network. I thought only vlan and flat are supported
> >> as provider network? I guess it is not possible to use the tunnel
> >> interface that is used for tenant networks?
> >> So I have to create a separated VxLAN on the control and compute nodes
> like:
> >>
> >> # ip link add vxoctavia type vxlan id 42 dstport 4790 group 239.1.1.1
> >> dev vlan3535 ttl 5
> >> # ip addr add 172.16.1.11/20 dev vxoctavia
> >> # ip link set vxoctavia up
> >>
> >> and use it like a flat provider network, true?
> >>
> > This is a fine way of doing things, but it's only half the battle.
> > You'll need to add a bridge on the compute nodes and bind it to th

Re: [Openstack-operators] [octavia][rocky] Octavia and VxLAN without DVR

2018-10-23 Thread Erik McCormick
So in your other email you asked if there was a guide for
deploying it with Kolla ansible...

Oh boy. No there's not. I don't know if you've seen my recent mails on
Octavia, but I am going through this deployment process with
kolla-ansible right now and it is lacking in a few areas.

If you plan to use different CA certificates for client and server in
Octavia, you'll need to add that into the playbook. Presently it only
copies over ca_01.pem, cacert.key, and client.pem and uses them for
everything. I was completely unable to make it work with only one CA
as I got some SSL errors. It passes gate though, so I assume it must
work? I dunno.

Networking comments and a really messy kolla-ansible / octavia how-to below...

On Tue, Oct 23, 2018 at 10:09 AM Florian Engelmann
 wrote:
>
> Am 10/23/18 um 3:20 PM schrieb Erik McCormick:
> > On Tue, Oct 23, 2018 at 7:53 AM Florian Engelmann
> >  wrote:
> >>
> >> Hi,
> >>
> >> We did test Octavia with Pike (DVR deployment) and everything was
> >> working right out of the box. We changed our underlay network to a
> >> Layer3 spine-leaf network now and did not deploy DVR as we didn't want
> >> to have that many cables in a rack.
> >>
> >> Octavia is not working right now as the lb-mgmt-net does not exist on
> >> the compute nodes nor does a br-ex.
> >>
> >> The control nodes running
> >>
> >> octavia_worker
> >> octavia_housekeeping
> >> octavia_health_manager
> >> octavia_api
> >>
> >> and as far as I understood octavia_worker, octavia_housekeeping and
> >> octavia_health_manager have to talk to the amphora instances. But the
> >> control nodes are spread over three different leafs. So each control
> >> node in a different L2 domain.
> >>
> >> So the question is how to deploy a lb-mgmt-net network in our setup?
> >>
> >> - Compute nodes have no "stretched" L2 domain
> >> - Control nodes, compute nodes and network nodes are in L3 networks like
> >> api, storage, ...
> >> - Only network nodes are connected to a L2 domain (with a separated NIC)
> >> providing the "public" network
> >>
> > You'll need to add a new bridge to your compute nodes and create a
> > provider network associated with that bridge. In my setup this is
> > simply a flat network tied to a tagged interface. In your case it
> > probably makes more sense to make a new VNI and create a vxlan
> > provider network. The routing in your switches should handle the rest.
>
> Ok that's what I try right now. But I don't get how to setup something
> like a VxLAN provider network. I thought only vlan and flat are supported
> as provider network? I guess it is not possible to use the tunnel
> interface that is used for tenant networks?
> So I have to create a separated VxLAN on the control and compute nodes like:
>
> # ip link add vxoctavia type vxlan id 42 dstport 4790 group 239.1.1.1
> dev vlan3535 ttl 5
> # ip addr add 172.16.1.11/20 dev vxoctavia
> # ip link set vxoctavia up
>
> and use it like a flat provider network, true?
>
This is a fine way of doing things, but it's only half the battle.
You'll need to add a bridge on the compute nodes and bind it to that
new interface. Something like this if you're using openvswitch:

docker exec openvswitch_db
/usr/local/bin/kolla_ensure_openvswitch_configured br-mgmt vxoctavia

Also you'll want to remove the IP address from that interface as it's
going to be a bridge. Think of it like your public (br-ex) interface
on your network nodes.
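
Something like this, reusing the address from the earlier example (adjust to whatever you actually put on vxoctavia):

ip addr del 172.16.1.11/20 dev vxoctavia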

From there you'll need to update the bridge mappings via kolla
overrides. This would usually be in /etc/kolla/config/neutron. Create
a subdirectory for your compute inventory group and create an
ml2_conf.ini there. So you'd end up with something like:

[root@kolla-deploy ~]# cat /etc/kolla/config/neutron/compute/ml2_conf.ini
[ml2_type_flat]
flat_networks = mgmt-net

[ovs]
bridge_mappings = mgmt-net:br-mgmt

Run kolla-ansible --tags neutron reconfigure to push out the new
configs. Note that there is a bug where the neutron containers may not
restart after the change, so you'll probably need to do a 'docker
container restart neutron_openvswitch_agent' on each compute node.

At this point, you'll need to create the provider network in the admin
project like:

openstack network create --provider-network-type flat
--provider-physical-network mgmt-net lb-mgmt-net

And then create a normal subnet attached to this network with some
largeish address scope. I wouldn't use 172.16.0.0/16 because docker
uses that by default. I'm not sure if it matters since the network
traffic will be isolated on a bridge, but it makes me paranoid so I
avoided it.
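
As a rough sketch (addresses are placeholders only), with an allocation pool that leaves the bottom of the range free for the statically addressed physical devices mentioned elsewhere in the thread:

openstack subnet create --network lb-mgmt-net \
  --subnet-range 172.31.0.0/16 \
  --gateway 172.31.0.1 \
  --allocation-pool start=172.31.1.0,end=172.31.255.254 \
  lb-mgmt-subnet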

For your control

Re: [Openstack-operators] [octavia][rocky] Octavia and VxLAN without DVR

2018-10-23 Thread Erik McCormick
On Tue, Oct 23, 2018 at 7:53 AM Florian Engelmann
 wrote:
>
> Hi,
>
> We did test Octavia with Pike (DVR deployment) and everything was
> working right out of the box. We changed our underlay network to a
> Layer3 spine-leaf network now and did not deploy DVR as we didn't want
> to have that many cables in a rack.
>
> Octavia is not working right now as the lb-mgmt-net does not exist on
> the compute nodes nor does a br-ex.
>
> The control nodes running
>
> octavia_worker
> octavia_housekeeping
> octavia_health_manager
> octavia_api
>
> and as far as I understood octavia_worker, octavia_housekeeping and
> octavia_health_manager have to talk to the amphora instances. But the
> control nodes are spread over three different leafs. So each control
> node in a different L2 domain.
>
> So the question is how to deploy a lb-mgmt-net network in our setup?
>
> - Compute nodes have no "stretched" L2 domain
> - Control nodes, compute nodes and network nodes are in L3 networks like
> api, storage, ...
> - Only network nodes are connected to a L2 domain (with a separated NIC)
> providing the "public" network
>
You'll need to add a new bridge to your compute nodes and create a
provider network associated with that bridge. In my setup this is
simply a flat network tied to a tagged interface. In your case it
probably makes more sense to make a new VNI and create a vxlan
provider network. The routing in your switches should handle the rest.

-Erik
>
> All the best,
> Florian
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] [Octavia] [Kolla] SSL errors polling amphorae and missing tenant network interface

2018-10-22 Thread Erik McCormick
Oops, dropped Operators. Can't wait until it's all one list...
On Mon, Oct 22, 2018 at 10:44 AM Erik McCormick
 wrote:
>
> On Mon, Oct 22, 2018 at 4:23 AM Tobias Urdin  wrote:
> >
> > Hello,
> >
> > I've been having a lot of issues with SSL certificates myself, on my
> > second trip now trying to get it working.
> >
> > Before that I spent a lot of time walking through every line in the DevStack
> > plugin and fixing my config options, and used the generate
> > script [1], and still it didn't work.
> >
> > When I got the "invalid padding" issue it was because of the DN I used
> > for the CA and the certificate IIRC.
> >
> >  > 19:34 < tobias-urdin> 2018-09-10 19:43:15.312 15032 WARNING
> > octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect
> > to instance. Retrying.: SSLError: ("bad handshake: Error([('rsa
> > routines', 'RSA_padding_check_PKCS1_type_1', 'block type is not 01'),
> > ('rsa routines', 'RSA_EAY_PUBLIC_DECRYPT', 'padding check failed'),
> > ('SSL routines', 'ssl3_get_key_exchange', 'bad signature')],)",)
> >  > 19:47 < tobias-urdin> after a quick google "The problem was that my
> > CA DN was the same as the certificate DN."
> >
> > IIRC I think that solved it, but then again I wouldn't remember fully
> > since I've been at so many different angles by now.
> >
> > Here is my IRC logs history from the #openstack-lbaas channel, perhaps
> > it can help you out
> > http://paste.openstack.org/show/732575/
> >
>
> Tobias, I owe you a beer. This was precisely the issue. I'm deploying
> Octavia with kolla-ansible. It only deploys a single CA. After hacking
> the templates and playbook to incorporate a separate server CA, the
> amphorae now load and provision the required namespace. I'm adding a
> kolla tag to the subject of this in hopes that someone might want to
> take on changing this behavior in the project. Hopefully after I get
> through Upstream Institute in Berlin I'll be able to do it myself if
> nobody else wants to do it.
>
> For certificate generation, I extracted the contents of
> octavia_certs_install.yml (which sets up the directory structure,
> openssl.cnf, and the client CA), and octavia_certs.yml (which creates
> the server CA and the client certificate) and mashed them into a
> separate playbook just for this purpose. At the end I get:
>
> ca_01.pem - Client CA Certificate
> ca_01.key - Client CA Key
> ca_server_01.pem - Server CA Certificate
> cakey.pem - Server CA Key
> client.pem - Concatenated Client Key and Certificate
>
> If it would help to have the playbook, I can stick it up on github
> with a huge "This is a hack" disclaimer on it.
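> 
> For anyone who just wants the shape of it, the same five files can be
> produced by hand with plain openssl along these lines (filenames mirror
> the list above, the DNs are made up, and the real playbook also handles
> things like key passphrases -- treat this as an outline only; the one hard
> requirement, per Tobias's note above, is that the CA DNs differ from the
> certificate DN):
> 
> # client CA
> openssl req -x509 -newkey rsa:4096 -nodes -days 3650 \
>   -subj "/CN=octavia-client-ca" -keyout ca_01.key -out ca_01.pem
> # separate server CA
> openssl req -x509 -newkey rsa:4096 -nodes -days 3650 \
>   -subj "/CN=octavia-server-ca" -keyout cakey.pem -out ca_server_01.pem
> # client key + certificate signed by the client CA, then concatenated
> openssl req -newkey rsa:4096 -nodes -subj "/CN=octavia-client" \
>   -keyout client.key -out client.csr
> openssl x509 -req -in client.csr -CA ca_01.pem -CAkey ca_01.key \
>   -CAcreateserial -days 3650 -out client.crt
> cat client.key client.crt > client.pem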
>
> > -
> >
> > Sorry for hijacking the thread but I'm stuck as well.
> >
> > I've in the past tried to generate the certificates with [1] but now
> > moved on to using the openstack-ansible way of generating them [2]
> > with some modifications.
> >
> > Right now I'm just getting: Could not connect to instance. Retrying.:
> > SSLError: [SSL: BAD_SIGNATURE] bad signature (_ssl.c:579)
> > from the amphoras; haven't gotten any further, but I've eliminated a lot of
> > stuff in the middle.
> >
> > Tried deploying Octavia on Ubuntu with python3 to just make sure there
> > wasn't an issue with CentOS and OpenSSL versions since it tends to lag
> > behind.
> > Checking the amphora with openssl s_client [3] it gives the same error,
> > but the verification is successful; I just don't understand what the
> > bad signature
> > part is about. From browsing some OpenSSL code it seems to be related to
> > RSA signatures somehow.
> >
> > 140038729774992:error:1408D07B:SSL routines:ssl3_get_key_exchange:bad
> > signature:s3_clnt.c:2032:
> >
> > So I've basically ruled out Ubuntu (openssl-1.1.0g) and CentOS
> > (openssl-1.0.2k) being the problem, ruled out signing_digest, so I'm
> > back to something related
> > to the certificates or the communication between the endpoints, or what
> > actually responds inside the amphora (gunicorn IIUC?). Based on the
> > "verify" functions actually causing that bad signature error I would
> > assume it's the generated certificate that the amphora presents that is
> > causing it.
> >
> > I'll have to continue the troubleshooting inside the amphora;
> > I've used the test-only amphora image before but have now built my own
> > one that is
> > using the amphora-agent from the actual stable branch, but same issue
> > (bad signature).
> >

[Openstack-operators] [Octavia] SSL errors polling amphorae and missing tenant network interface

2018-10-19 Thread Erik McCormick
I've been wrestling with getting Octavia up and running and have
become stuck on two issues. I'm hoping someone has run into these
before. My google foo has come up empty.

Issue 1:
When the Octavia controller tries to poll the amphora instance, it
tries repeatedly and eventually fails. The error on the controller
side is:

2018-10-19 14:17:39.181 26 ERROR
octavia.amphorae.drivers.haproxy.rest_api_driver [-] Connection
retries (currently set to 300) exhausted.  The amphora is unavailable.
Reason: HTTPSConnectionPool(host='10.7.0.112', port=9443): Max retries
exceeded with url: /0.5/plug/vip/10.250.20.15 (Caused by
SSLError(SSLError("bad handshake: Error([('rsa routines',
'RSA_padding_check_PKCS1_type_1', 'invalid padding'), ('rsa routines',
'rsa_ossl_public_decrypt', 'padding check failed'), ('asn1 encoding
routines', 'ASN1_item_verify', 'EVP lib'), ('SSL routines',
'tls_process_server_certificate', 'certificate verify
failed')],)",),)): SSLError: HTTPSConnectionPool(host='10.7.0.112',
port=9443): Max retries exceeded with url: /0.5/plug/vip/10.250.20.15
(Caused by SSLError(SSLError("bad handshake: Error([('rsa routines',
'RSA_padding_check_PKCS1_type_1', 'invalid padding'), ('rsa routines',
'rsa_ossl_public_decrypt', 'padding check failed'), ('asn1 encoding
routines', 'ASN1_item_verify', 'EVP lib'), ('SSL routines',
'tls_process_server_certificate', 'certificate verify
failed')],)",),))

On the amphora side I see:
[2018-10-19 17:52:54 +] [1331] [DEBUG] Error processing SSL request.
[2018-10-19 17:52:54 +] [1331] [DEBUG] Invalid request from
ip=:::10.7.0.40: [SSL: SSL_HANDSHAKE_FAILURE] ssl handshake
failure (_ssl.c:1754)

I've generated certificates both with the script in the Octavia git
repo, and with the Openstack Ansible playbook. I can see that they are
present in /etc/octavia/certs.

I'm using the Kolla (Queens) containers for the control plane so I'm
sure I've satisfied all the python library constraints.
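
One quick way to narrow down which side is unhappy is to repeat the handshake by hand from a controller, using the same client certificate and CA the health manager is configured with (paths are the /etc/octavia/certs location mentioned above inside the Octavia containers, so adjust as needed):

openssl s_client -connect 10.7.0.112:9443 \
  -cert /etc/octavia/certs/client.pem \
  -CAfile /etc/octavia/certs/ca_01.pem

If that verifies cleanly but the driver still fails, the suspects shift to which CA signed what (client vs. server certs) and to the DNs used when the CAs were generated.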

Issue 2:
I"m not sure how it gets configured, but the tenant network interface
(ens6) never comes up. I can spawn other instances on that network
with no issue, and I can see that Neutron has the port attached to the
instance. However, in the instance this is all I get:

ubuntu@amphora-33e0aab3-8bc4-4fcb-bc42-b9b36afb16d4:~$ ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN
group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
   valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
   valid_lft forever preferred_lft forever
2: ens3:  mtu 9000 qdisc pfifo_fast
state UP group default qlen 1000
link/ether fa:16:3e:30:c4:60 brd ff:ff:ff:ff:ff:ff
inet 10.7.0.112/16 brd 10.7.255.255 scope global ens3
   valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:fe30:c460/64 scope link
   valid_lft forever preferred_lft forever
3: ens6:  mtu 1500 qdisc noop state DOWN group
default qlen 1000
link/ether fa:16:3e:89:a2:7f brd ff:ff:ff:ff:ff:ff

There's no evidence of the interface anywhere else including udev rules.
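
For what it's worth, the amphora agent is what is supposed to configure that second interface once the controller's plug calls succeed, so if Issue 1 never gets past the handshake it may well be that ens6 is simply never told to come up. A crude way to prove the port itself works (purely a debugging step, not a fix) is to bring it up by hand inside the amphora:

sudo ip link set ens6 up
sudo dhclient -v ens6   # assuming the tenant network has DHCP; otherwise assign the port's fixed IP statically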

Any help with either or both issues would be greatly appreciated.

Cheers,
Erik

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Ops Meetups - Call for Hosts

2018-10-16 Thread Erik McCormick
Hello all,

The Ops Meetup team has embarked on a mission to revive the
traditional Operators Meetup that have historically been held between
Summits. With the upcoming merger of the PTG into the Summit week, and
the merger of most Ops discussion sessions at Summits into the Forum,
we felt that we needed to get back to our original format.

With that in mind, we are beginning the process of selecting venues
for both 2019 Meetups. Some guidelines for what is needed to host can
be found here:
https://wiki.openstack.org/wiki/Operations/Meetups#Venue_Selection

Each of the etherpads below contains a template to collect information
about the potential host and venue. If you are interested in hosting a
meetup, simply copy and paste the template into a blank etherpad, fill
it out, and place a link above the template on the original etherpad.

Ops Meetup 2019 #1 - Late February / Early March - Somewhere in Europe
https://etherpad.openstack.org/p/ops-meetup-venue-discuss-1st-2019

Ops Meetup 2019 #2 - Late July / Early August - Somewhere in North America
https://etherpad.openstack.org/p/ops-meetup-venue-discuss-2nd-2019

Reply back to this thread with any questions or comments. If you are
coming to the Berlin Summit, we will be having an Ops Meetup Team
catch-up Forum session. We encourage all of you to join in making
these events a success.

Cheers,
Erik

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] [SIGS] Ops Tools SIG

2018-10-16 Thread Erik McCormick
On Tue, Oct 16, 2018 at 5:19 AM Miguel Angel Ajo Pelayo
 wrote:
>
> Hi,
>
> Matthias and I talked this morning about this topic, and we came to
> realize
> that there's room for (and it would be beneficial to have) a common place for:
>
> a) Documentation about second day operator tools which can be
> useful with OpenStack, links to repositories or availability for every 
> distribution.
>
Sounds like a Natural extension to the Ops Guide [1] which we've been
working to return to relevance. I suppose this could also be a wiki
like [2], but we should at least reference it in the guide. In any
event, massive cleanup of old, outdated content really needs to be
undertaken. That should be the other part of the mission I think.

> b) Deployment documentation/config snippets/deployment scripts for those tools
> in integration with OpenStack.
>
> c) Operator tools and bits which are developed or maintained on OpenStack
> repos,
> especially the OpenStack-related bits of those tools (plugins, etc.),
>
We should probably try and revive [3] and make use of that more
effectively to address b and c. We've been trying to encourage
contribution to it for years, but it needs more contributors and some
TLC

> d) Home the organisation of ops-related rooms during OpenStack events, general
>  ones related to OpenStack, and also the distro-specific ones for the 
> distros interested
>  in participation.
>
I'm not exactly sure what you mean by this item. We currently have a
team responsible for meetups and pushing Ops-related content into the
Forum at Summits. Do you propose merging the Ops Meetup Team into this
SIG?
>
> Does this scope for the SIG make sense to everyone willing to participate?
>
>
> Best regards,
> Miguel Ángel.
>
[1] https://docs.openstack.org/operations-guide/
[2] https://wiki.openstack.org/wiki/Operations
[3] https://wiki.openstack.org/wiki/Osops#Code

-Erik

>
> On Mon, Oct 15, 2018 at 11:12 AM Matthias Runge  wrote:
>>
>> On 12/10/2018 14:21, Sean McGinnis wrote:
>> > On Fri, Oct 12, 2018 at 11:25:20AM +0200, Martin Magr wrote:
>> >> Greetings guys,
>> >>
>> >> On Thu, Oct 11, 2018 at 4:19 PM, Miguel Angel Ajo Pelayo <
>> >> majop...@redhat.com> wrote:
>> >>
>> >>> Adding the mailing lists back to your reply, thank you :)
>> >>>
>> >>> I guess that +melvin.hills...@huawei.com  can
>> >>> help us a little bit organizing the SIG,
>> >>> but I guess the first thing would be collecting a list of tools which
>> >>> could be published
>> >>> under the umbrella of the SIG, starting by the ones already in Osops.
>> >>>
>> >>> Publishing documentation for those tools, and the catalog under
>> >>> docs.openstack.org
>> >>> is possibly the next step (or a parallel step).
>> >>>
>> >>>
>> >>> On Wed, Oct 10, 2018 at 4:43 PM Rob McAllister 
>> >>> wrote:
>> >>>
>>  Hi Miguel,
>> 
>>  I would love to join this. What do I need to do?
>> 
>>  Sent from my iPhone
>> 
>>  On Oct 9, 2018, at 03:17, Miguel Angel Ajo Pelayo 
>>  wrote:
>> 
>>  Hello
>> 
>>   Yesterday, during the Oslo meeting we discussed [6] the possibility
>>  of creating a new Special Interest Group [1][2] to provide home and 
>>  release
>>  means for operator related tools [3] [4] [5]
>> 
>> 
>> >>all of those tools have python dependencies related to openstack such 
>> >> as
>> >> python-openstackclient or python-pbr. Which is exactly the reason why we
>> >> moved osops-tools-monitoring-oschecks packaging away from OpsTools SIG to
>> >> Cloud SIG. AFAIR we had some issues of having opstools SIG being dependent
>> >> on openstack SIG. I believe that Cloud SIG is proper home for tools like
>> >> [3][4][5] as they are related to OpenStack anyway. OpsTools SIG contains
>> >> general tools like fluentd, sensu, collectd.
>> >>
>> >>
>> >> Hope this helps,
>> >> Martin
>> >>
>> >
>> > Hey Martin,
>> >
>> > I'm not sure I understand the issue with these tools having dependencies on 
>> > other
>> > packages and the relationship to SIG ownership. Is your concern (or the 
>> > history
>> > of a concern you are pointing out) that the tools would have a more 
>> > difficult
>> > time if they required updates to dependencies if they are owned by a 
>> > different
>> > group?
>> >
>> > Thanks!
>> > Sean
>> >
>>
>> Hello,
>>
>> the mentioned sigs (opstools/cloud) are in CentOS scope and mention
>> repository dependencies. That shouldn't bother us here now.
>>
>>
>> There is already a SIG under the CentOS project, providing tools for
>> operators[7], but also documentation and integrational bits.
>>
>> Also, there is some overlap with other groups and SIGs, such as
>> Barometer[8].
>>
>> Since there is already some duplication, I don't know where it makes
>> sense to have a single group for this purpose?
>>
>> If that hasn't been clear yet, I'd be absolutely interested in
>> joining/helping this effort.
>>
>>
>> Matthias
>>
>>
>>
>> [7] 

Re: [Openstack-operators] Best kernel options for openvswitch on network nodes on a large setup

2018-09-25 Thread Erik McCormick
Are you getting any particular log messages that lead you to conclude your
issue lies with OVS? I've hit lots of kernel limits under those conditions
before OVS itself ever noticed. Anything in dmesg, journal or neutron logs
of interest?
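
If it does turn out to be generic kernel limits rather than OVS itself, these are the knobs I'd look at first on a network node (starting-point values only, assuming a Linux node doing L3/conntrack work; tune them to your actual traffic):

# /etc/sysctl.d/90-network-node.conf
net.netfilter.nf_conntrack_max = 1048576
net.ipv4.neigh.default.gc_thresh1 = 4096
net.ipv4.neigh.default.gc_thresh2 = 8192
net.ipv4.neigh.default.gc_thresh3 = 16384
net.core.netdev_max_backlog = 16384

dmesg is also worth grepping for the classic symptom:

dmesg | grep -i conntrack    # "table full, dropping packet"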

On Tue, Sep 25, 2018, 7:27 PM Jean-Philippe Méthot <
jp.met...@planethoster.info> wrote:

> Hi,
>
> Are there some recommendations regarding kernel settings configuration for
> openvswitch? We’ve just been hit by what we believe may be an attack of
> some kind we have never seen before and we’re wondering if there’s a way to
> optimize our network nodes kernel for openvswitch operation and thus
> minimize the impact of such an attack, or whatever it was.
>
> Best regards,
>
> Jean-Philippe Méthot
> Openstack system administrator
> Administrateur système Openstack
> PlanetHoster inc.
>
>
>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Ops Forum Session Brainstorming

2018-09-18 Thread Erik McCormick
This is a friendly reminder for anyone wishing to see Ops-focused sessions
in Berlin to get your submissions in soon. We have a couple things there
that came out of the PTG, but that's it so far. See below for details.

Cheers,
Erik



On Wed, Sep 12, 2018, 5:07 PM Erik McCormick 
wrote:

> Hello everyone,
>
> I have set up an etherpad to collect Ops related session ideas for the
> Forum at the Berlin Summit. Please suggest any topics that you would
> like to see covered, and +1 existing topics you like.
>
> https://etherpad.openstack.org/p/ops-forum-stein
>
> Cheers,
> Erik
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Ops Forum Session Brainstorming

2018-09-12 Thread Erik McCormick
Hello everyone,

I have set up an etherpad to collect Ops related session ideas for the
Forum at the Berlin Summit. Please suggest any topics that you would
like to see covered, and +1 existing topics you like.

https://etherpad.openstack.org/p/ops-forum-stein

Cheers,
Erik

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] [nova][placement][upgrade][qa] Some upgrade-specific news on extraction

2018-09-06 Thread Erik McCormick
On Thu, Sep 6, 2018, 8:40 PM Rochelle Grober 
wrote:

> Sounds like an important discussion to have with the operators in Denver.
> Should put this on the schedule for the Ops meetup.
>
> --Rocky
>

We are planning to attend the upgrade sessions on Monday as a group. How
about we put it there?

-Erik


>
> > -Original Message-
> > From: Matt Riedemann [mailto:mriede...@gmail.com]
> > Sent: Thursday, September 06, 2018 1:59 PM
> > To: OpenStack Development Mailing List (not for usage questions)
> > ; openstack-
> > operat...@lists.openstack.org
> > Subject: [openstack-dev] [nova][placement][upgrade][qa] Some upgrade-
> > specific news on extraction
> >
> > I wanted to recap some upgrade-specific stuff from today outside of the
> > other [1] technical extraction thread.
> >
> > Chris has a change up for review [2] which prompted the discussion.
> >
> > That change makes placement only work with placement.conf, not
> > nova.conf, but does get a passing tempest run in the devstack patch [3].
> >
> > The main issue here is upgrades. If you think of this like deprecating
> config
> > options, the old config options continue to work for a release and then
> are
> > dropped after a full release (or 3 months across boundaries for CDers)
> [4].
> > Given that, Chris's patch would break the standard deprecation policy.
> Clearly
> > one simple way outside of code to make that work is just copy and rename
> > nova.conf to placement.conf and voila. But that depends on *all*
> > deployment/config tooling to get that right out of the gate.
> >
> > The other obvious thing is the database. The placement repo code as-is
> > today still has the check for whether or not it should use the placement
> > database but falls back to using the nova_api database [5]. So
> technically you
> > could point the extracted placement at the same nova_api database and it
> > should work. However, at some point deployers will clearly need to copy
> the
> > placement-related tables out of the nova_api DB to a new placement DB and
> > make sure the 'migrate_version' table is dropped so that placement DB
> > schema versions can reset to 1.
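> >
> > As a back-of-the-envelope sketch of that cut-over (assuming MySQL; the
> > table list below is from memory, so check it against your own nova_api
> > schema and add your usual credential flags):
> >
> > mysqldump nova_api resource_providers inventories allocations consumers \
> >   projects users traits resource_classes placement_aggregates \
> >   resource_provider_aggregates resource_provider_traits > placement.sql
> > mysql -e "CREATE DATABASE placement"
> > mysql placement < placement.sql
> > mysql placement -e "DROP TABLE IF EXISTS migrate_version"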
> >
> > With respect to grenade and making this work in our own upgrade CI
> testing,
> > we have I think two options (which might not be mutually
> > exclusive):
> >
> > 1. Make placement support using nova.conf if placement.conf isn't found
> for
> > Stein with lots of big warnings that it's going away in T. Then Rocky
> nova.conf
> > with the nova_api database configuration just continues to work for
> > placement in Stein. I don't think we then have any grenade changes to
> make,
> > at least in Stein for upgrading *from* Rocky. Assuming fresh devstack
> installs
> > in Stein use placement.conf and a placement-specific database, then
> > upgrades from Stein to T should also be OK with respect to grenade, but
> > likely punts the cut-over issue for all other deployment projects
> (because we
> > don't CI with grenade doing
> > Rocky->Stein->T, or FFU in other words).
> >
> > 2. If placement doesn't support nova.conf in Stein, then grenade will
> require
> > an (exceptional) [6] from-rocky upgrade script which will (a) write out
> > placement.conf fresh and (b) run a DB migration script, likely housed in
> the
> > placement repo, to create the placement database and copy the placement-
> > specific tables out of the nova_api database. Any script like this is
> likely
> > needed regardless of what we do in grenade because deployers will need to
> > eventually do this once placement would drop support for using nova.conf
> (if
> > we went with option 1).
> >
> > That's my attempt at a summary. It's going to be very important that
> > operators and deployment project contributors weigh in here if they have
> > strong preferences either way, and note that we can likely do both
> options
> > above - grenade could do the fresh cutover from rocky to stein but we
> allow
> > running with nova.conf and nova_api DB in placement in stein with plans
> to
> > drop that support in T.
> >
> > [1]
> > http://lists.openstack.org/pipermail/openstack-dev/2018-
> > September/subject.html#134184
> > [2] https://review.openstack.org/#/c/600157/
> > [3] https://review.openstack.org/#/c/600162/
> > [4]
> > https://governance.openstack.org/tc/reference/tags/assert_follows-
> > standard-deprecation.html#requirements
> > [5]
> > https://github.com/openstack/placement/blob/fb7c1909/placement/db_api
> > .py#L27
> > [6] https://docs.openstack.org/grenade/latest/readme.html#theory-of-
> > upgrade
> >
> > --
> >
> > Thanks,
> >
> > Matt
> >
> > __
> > 
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-
> > requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> __
> OpenStack 

Re: [Openstack-operators] Ops Meetup Agenda Planning - Denver Edition

2018-08-22 Thread Erik McCormick
Good catch Shintaro! Yes, September 10 and 11. I must have flipped my
arrival date with the start date. Thanks!

-Erik

On Wed, Aug 22, 2018, 3:06 AM Shintaro Mizuno 
wrote:

> Erik, all,
>
>  > The Ops Meetup will take place September 9th - 10th (Monday and
>  > Tuesday) in a dedicated space at the PTG. You are welcome and
>
> It's 10th - 11th (Monday and Tuesday)  in case someone is planning their
> travel :)
>
> Cheers.
> Shintaro
>
> On 2018/08/22 0:40, Erik McCormick wrote:
> > Hello Ops,
> >
> > As you are hopefully aware, the Ops meetup, now integrated as part of
> > the Project Team Gathering (PTG) is rapidly approaching. We are a bit
> > behind on session planning, and we need your help to create an agenda.
> >
> > Please insert your session ideas into this etherpad, add subtopics to
> > already proposed sessions,  and +1 those that you are interested in.
> > Also please put your name, and maybe some contact info, at the bottom.
> > If you'd be willing to moderate a session, please add yourself to the
> > moderators list.
> >
> > https://etherpad.openstack.org/p/ops-meetup-ptg-denver-2018
> >
> > The Ops Meetup will take place September 9th - 10th (Monday and
> > Tuesday) in a dedicated space at the PTG. You are welcome and
> > encouraged to participate in other PTG sessions throughout the rest of
> > the week as well.
> >
> > Also as a reminder, EARLY BIRD PRICING ENDS TOMORROW 8/22 at 11:59pm
> > PDT (06:59 UTC). The price will go from $399 to $599
> >
> > While the price tag may seem a little high to some past Ops Meetup
> > attendees, remember that registration for the PTG includes passes to
> > the next two summits. For you regular summit-goers, that's a good
> > discount. Don't pass it up!
> >
> > Looking forward to seeing lots of new and familiar faces in Denver!
> >
> > Cheers,
> > Erik
> >
> > ___
> > OpenStack-operators mailing list
> > OpenStack-operators@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> >
>
>
> --
> Shintaro MIZUNO (水野伸太郎)
> NTT Software Innovation Center
> TEL: 0422-59-4977
> E-mail: mizuno.shint...@lab.ntt.co.jp
>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Ops Meetup Agenda Planning - Denver Edition

2018-08-21 Thread Erik McCormick
Hello Ops,

As you are hopefully aware, the Ops meetup, now integrated as part of
the Project Team Gathering (PTG) is rapidly approaching. We are a bit
behind on session planning, and we need your help to create an agenda.

Please insert your session ideas into this etherpad, add subtopics to
already proposed sessions,  and +1 those that you are interested in.
Also please put your name, and maybe some contact info, at the bottom.
If you'd be willing to moderate a session, please add yourself to the
moderators list.

https://etherpad.openstack.org/p/ops-meetup-ptg-denver-2018

The Ops Meetup will take place September 9th - 10th (Monday and
Tuesday) in a dedicated space at the PTG. You are welcome and
encouraged to participate in other PTG sessions throughout the rest of
the week as well.

Also as a reminder, EARLY BIRD PRICING ENDS TOMORROW 8/22 at 11:59pm
PDT (06:59 UTC). The price will go from $399 to $599

While the price tag may seem a little high to some past Ops Meetup
attendees, remember that registration for the PTG includes passes to
the next two summits. For you regular summit-goers, that's a good
discount. Don't pass it up!

Looking forward to seeing lots of new and familiar faces in Denver!

Cheers,
Erik

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Ops Survey Results!

2018-07-03 Thread Erik McCormick
On Tue, Jul 3, 2018, 8:59 AM Doug Hellmann  wrote:

> Excerpts from Chris Morgan's message of 2018-07-03 07:20:42 -0400:
> > Question 1. "Are you considering attending the OpenStack Project
> Technical
> > Gathering (PTG) in Denver in September?"
> >
> > 83.33% yes
> > 16.67% no
> >
> > (24 respondents)
>
> How does the response rate to the survey compare to attendance at
> recent Ops Meetups?
>
> Doug
>

We've been around 100ish so pretty light

-Erik

>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Proposing no Ops Meetups team meeting this week

2018-05-29 Thread Erik McCormick
On Tue, May 29, 2018, 7:15 AM Chris Morgan  wrote:

> Some of us will be only just returning to work today after being away all
> week last week for the (successful) OpenStack Summit, therefore I propose
> we skip having a meeting today but regroup next week?
>

+1


> Chris
>
> --
> Chris Morgan 
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


-Erik

>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Multiple Ceph pools for Nova?

2018-05-21 Thread Erik McCormick
Do you have enough hypervisors you can dedicate some to each purpose? You
could make two availability zones each with a different backend.
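
The shape of that (names and pools are placeholders; the pool is set per compute node, so each aggregate's hosts carry their own nova.conf):

openstack aggregate create --zone az-ssd agg-ssd
openstack aggregate create --zone az-hdd agg-hdd
openstack aggregate add host agg-ssd compute-ssd-01   # repeat per host
openstack aggregate add host agg-hdd compute-hdd-01

# nova.conf [libvirt] on the SSD hosts vs. the spinner hosts:
images_rbd_pool = vms-ssd
# ...
images_rbd_pool = vms-hdd

Instances for the SSD project are then booted with --availability-zone az-ssd.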

On Mon, May 21, 2018, 11:52 AM Smith, Eric  wrote:

> I have 2 Ceph pools, one backed by SSDs and one backed by spinning disks
> (Separate roots within the CRUSH hierarchy). I’d like to run all instances
> in a single project / tenant on SSDs and the rest on spinning disks. How
> would I go about setting this up?
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Fast Forward Upgrades (FFU) Forum Sessions

2018-05-18 Thread Erik McCormick
Hello all,

There are two forum sessions in Vancouver covering Fast Forward Upgrades.

Session 1 (Current State): Wednesday May 23rd, 09:00 - 09:40, Room 220
Session 2 (Future Work): Wednesday May 23rd, 09:50 - 10:30, Room 220

The combined etherpad for both sessions can be found at:
https://etherpad.openstack.org/p/YVR-forum-fast-forward-upgrades

Please take some time to add in topics you would like to see discussed
or add any other pertinent information. There are several reference
links at the top which are worth reviewing prior to the sessions if
you have the time.

See you all in Vancover!

Cheers,
Erik

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Help finding old (Mitaka) RDO RPMs

2018-04-12 Thread Erik McCormick
Thanks! You're my heroes :)

On Thu, Apr 12, 2018 at 3:20 PM, Amy Marrich <a...@demarco.com> wrote:
> Erik,
>
> Here's the Mitaka archive:)
>
> http://vault.centos.org/7.3.1611/cloud/x86_64/openstack-mitaka/
>
> Amy (spotz)
>
> On Thu, Apr 12, 2018 at 2:13 PM, Erik McCormick <emccorm...@cirrusseven.com>
> wrote:
>>
>> Hi All,
>>
> >> Does anyone happen to have an archive of the Mitaka RDO repo lying
>> around they'd be willing to share with a poor unfortunate soul? My
>> clone of it has gone AWOL and I have moderately desperate need of it.
>>
>> Thanks!
>>
>> Cheers,
>> Erik
>>
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Help finding old (Mitaka) RDO RPMs

2018-04-12 Thread Erik McCormick
Hi All,

Does anyone happen to have an archive of the Mitaka RDO repo lying
around they'd be willing to share with a poor unfortunate soul? My
clone of it has gone AWOL and I have moderately desperate need of it.

Thanks!

Cheers,
Erik

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Ops Session Proposals for Vancouver Forum

2018-04-10 Thread Erik McCormick
On Tue, Apr 10, 2018 at 11:19 AM, Jonathan Proulx <j...@csail.mit.edu> wrote:
>
> Thanks for getting this kicked off Erik.  The two things you have up
> to start (fast forward upgrades, and extended maintenance) are the
> exact two things I want out of my trip to YVR.  At least understanding
> current 'state of the art' and helping advance that in the right
> directions best I can.
>

I see Tony Breeds has posted the Extended Maintenance session already
which is awesome. I'm actually considering a Part I and Part II for
FFU as we had a packed house and not nearly enough time in SYD. I'm
just not sure how to structure it or break it down.

> Thanks,
> -Jon
>
> On Tue, Apr 10, 2018 at 11:07:34AM -0400, Erik McCormick wrote:
> :Greetings Ops,
> :
> :We are rapidly approaching the deadline for Forum session proposals
> :(This coming Sunday, 4/15) and we have been rather lax in getting the
> :process started from our side. I've created an etherpad here for
> :everyone to put up session ideas.
> :
> :https://etherpad.openstack.org/p/YYZ-forum-ops-brainstorming
> :
> :Given the late date, please post your session ideas ASAP, and +1 those
> :that you have interest in. Also if you are willing to moderate the
> :session, put your name on it as you'll see on the examples already
> :there.
> :
> :Moderating is easy, gets you a pretty little speaker sticker on your
> :badge, and lets you go get your badge at the Speaker pickup line. It's
> :also a good way to get AUC status and get more involved in the
> :community. It's fairly painless, and only a few of us bite :).
> :
> :I'd like to wrap this up by Friday so we can weed out duplicates from
> :others proposals and get them into the topic submission system with a
> :little time to spare. It would be helpful to have moderators submit
> :their own so everything credits properly. The submission system is at
> :http://forumtopics.openstack.org/. If you don't already have an
> :account and would like to moderate, go set one up.
> :
> :I'm looking forward to seeing lots of you in Vancouver!
> :
> :Cheers,
> :Erik
> :
> :___
> :OpenStack-operators mailing list
> :OpenStack-operators@lists.openstack.org
> :http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
> --

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Ops Session Proposals for Vancouver Forum

2018-04-10 Thread Erik McCormick
Greetings Ops,

We are rapidly approaching the deadline for Forum session proposals
(This coming Sunday, 4/15) and we have been rather lax in getting the
process started from our side. I've created an etherpad here for
everyone to put up session ideas.

https://etherpad.openstack.org/p/YYZ-forum-ops-brainstorming

Given the late date, please post your session ideas ASAP, and +1 those
that you have interest in. Also if you are willing to moderate the
session, put your name on it as you'll see on the examples already
there.

Moderating is easy, gets you a pretty little speaker sticker on your
badge, and lets you go get your badge at the Speaker pickup line. It's
also a good way to get AUC status and get more involved in the
community. It's fairly painless, and only a few of us bite :).

I'd like to wrap this up by Friday so we can weed out duplicates from
others proposals and get them into the topic submission system with a
little time to spare. It would be helpful to have moderators submit
their own so everything credits properly. The submission system is at
http://forumtopics.openstack.org/. If you don't already have an
account and would like to moderate, go set one up.

I'm looking forward to seeing lots of you in Vancouver!

Cheers,
Erik

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Ops Meetup, Co-Location options, and User Feedback

2018-04-02 Thread Erik McCormick
I'm a +1 too as long as the devs at large are cool with it and won't hate
on us for crashing their party. I also +1 the proposed format. It's
basically what we discussed in Tokyo. Make it so.

Cheers
Erik

PS. Sorry for the radio silence the past couple weeks. Vacation,  kids,
etc.

On Apr 2, 2018 4:18 PM, "Melvin Hillsman"  wrote:

Unless anyone has any objections I believe we have quorum Jimmy.

On Mon, Apr 2, 2018 at 12:53 PM, Melvin Hillsman 
wrote:

> +1
>
> On Mon, Apr 2, 2018 at 11:39 AM, Jimmy McArthur 
> wrote:
>
>> Hi all -
>>
>> I'd like to check in to see if we've come to a consensus on the
>> colocation of the Ops Meetup.  Please let us know as soon as possible as we
>> have to alert our events team.
>>
>> Thanks!
>> Jimmy
>>
>> Chris Morgan 
>> March 27, 2018 at 11:44 AM
>> Hello Everyone,
>>   This proposal looks to have very good backing in the community. There
>> was an informal IRC meeting today with the meetups team, some of the
>> foundation folk and others and everyone seems to like a proposal put
>> forward as a sample definition of the combined event - I certainly do, it
>> looks like we could have a really great combined event in September.
>>
>> I volunteered to share that a bit later today with some other info. In
>> the meanwhile if you have a viewpoint please do chime in here as we'd like
>> to declare this agreed by the community ASAP, so in particular IF YOU
>> OBJECT please speak up by end of week, this week.
>>
>> Thanks!
>>
>> Chris
>>
>>
>>
>>
>> --
>> Chris Morgan 
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>> Jonathan Proulx 
>> March 23, 2018 at 10:07 AM
>> On Thu, Mar 22, 2018 at 09:02:48PM -0700, Yih Leong, Sun. wrote:
>> :I support the ideas to try colocating the next Ops Midcycle and PTG.
>> :Although scheduling could be a potential challenge, it is worth giving it a
>> :try.
>> :
>> :Also having an joint social event in the evening can also help Dev/Ops to
>> :meet and offline discussion. :)
>>
>> Agreeing strongly with Matt and Melvin's comments about Forum -vs-
>> PTG/OpsMidcycle
>>
>> PTG/OpsMidcycle (as I see them) are about focusing inside teams to get
>> work done ("how" is a good one-word summary, I think). The advantage of
>> colocation is for cross team questions like "we're thinking of doing
>> this thing this way, does this have any impacts on your work we might
>> not have considered", can get a quick respose in the hall, at lunch,
>> or over beers as Yih Leong suggests.
>>
>> Forum has become about coming together across groups for more
>> conceptual "what" discussions.
>>
>> So I also think they are very distinct and I do see potential benefits
>> to colocation.
>>
>> We do need to watch out for downsides. The concerns around colocation
>> seemed mostly about larger events costing more and being generally
>> harder to organize. If we try we will find out if there is merit to
>> this concern, but (IMO) it is important to keep both of the
>> events as cheap and simple as possible.
>>
>> -Jon
>>
>> :
>> :On Thursday, March 22, 2018, Melvin Hillsman 
>>  wrote:
>> :
>> :> Thierry and Matt both hit the nail on the head in terms of the very
>> :> base/purpose/point of the Forum, PTG, and Ops Midcycles and here is my
>> +2
>> :> since I have spoke with both and others outside of this thread and
>> agree
>> :> with them here as I have in individual discussions.
>> :>
>> :> If nothing else I agree with Jimmy's original statement of at least
>> giving
>> :> this a try.
>> :>
>> :> On Thu, Mar 22, 2018 at 4:54 PM, Matt Van Winkle
>>  
>> :> wrote:
>> :>
>> :>> Hey folks,
>> :>> Great discussion! There are a number of points to comment on going back
>> :>> through the last few emails. I'll try to do so in line with Theirry's
>> :>> latest below. From a User Committee perspective (and as a member of
>> the
>> :>> Ops Meetup planning team), I am a convert to the idea of co-location,
>> but
>> :>> have come to see a lot of value in it. I'll point some of that out as
>> I
>> :>> respond to specific comments, but first a couple of overarching
>> points.
>> :>>
>> :>> In the current model, the Forum sessions are very much about WHAT the
>> :>> software should do. Keeping the discussions focused on behavior,
>> feature
>> :>> and function has made it much easier for an operator to participate
>> :>> effectively in the conversation versus the older, design sessions,
>> that
>> :>> focused largely on blueprints, coding approaches, etc. These are HOW
>> the
>> :>> developers should make things work and, now, are a large part of the
>> focus
>> :>> of the PTG. I realize it's not that cut and dry, but 

Re: [Openstack-operators] [Openstack] HA Guide, no Ubuntu instructions for HA Identity

2018-03-19 Thread Erik McCormick
Looping the list back in since I accidentally dropped it yet again :/

On Mon, Mar 19, 2018 at 8:45 AM, Torin Woltjer
<torin.wolt...@granddial.com> wrote:
> That's good to know, thank you. Out of curiosity, without
> pacemaker/chorosync, does haproxy have the capability to manage a floating
> ip and failover etc?
>

HAProxy can't do that alone. However, using Pacemaker just to manage a
floating IP is like using an aircraft carrier to go fishing. It's best
to use Keepalived (or similar) to do that job. It only does that one
thing, and it does it very well.
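
As a rough sketch (untested here, and all IPs, interface names, and hostnames
are made-up placeholders you'd swap for your own), the keepalived side is just
one VRRP instance on each haproxy node:

vrrp_instance keystone_vip {
    state MASTER                # BACKUP on the second node
    interface eth0              # interface that should carry the VIP
    virtual_router_id 51
    priority 150                # lower value (e.g. 100) on the backup
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass changeme
    }
    virtual_ipaddress {
        192.168.1.10/24         # the floating IP your endpoints point at
    }
}

and the matching haproxy frontend for Keystone would be along the lines of:

listen keystone_public
    bind 192.168.1.10:5000
    mode http
    balance source
    option httpchk
    server ctrl1 192.168.1.11:5000 check inter 2000 rise 2 fall 5
    server ctrl2 192.168.1.12:5000 check inter 2000 rise 2 fall 5

One gotcha: for haproxy to bind to a VIP that isn't currently on the node,
you'll want net.ipv4.ip_nonlocal_bind=1 set via sysctl on both nodes.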

> ____
> From: Erik McCormick <emccorm...@cirrusseven.com>
> Sent: 3/16/18 5:22 PM
> To: torin.wolt...@granddial.com
> Subject: Re: [Openstack] HA Guide, no Ubuntu instructions for HA Identity
> There's no good reason to do any of that pacemaker stuff. Just stick haproxy
> in front of 2+ servers running Keystone and move along. This is the case for
> almost all Openstack services.
>
> The main exceptions are the Neutron agents. Just look into L3 HA or DVR for
> that and you should be good.  The guide needs much reworking.
>
> -Erik
>
>
>
> On Mar 16, 2018 11:28 AM, "Torin Woltjer" <torin.wolt...@granddial.com>
> wrote:
>>
>> I'm currently going through the HA guide, setting up openstack HA on
>> ubuntu server. I've gotten to this page,
>> https://docs.openstack.org/ha-guide/controller-ha-identity.html , and there
>> is no instructions for ubuntu. Would I be fine following the instructions
>> for SUSE or is there a different process for setting up HA keystone on
>> Ubuntu?
>>
>>
>> ___
>> Mailing list:
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> Post to : openst...@lists.openstack.org
>> Unsubscribe :
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>
>

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Tokyo Ops Meetup - Call for Moderators - Last Call for Topics

2018-02-13 Thread Erik McCormick
Hello all,

TL;DR - Those going to the Tokyo Ops meetup, please go volunteer to
moderate sessions and offer any last minute topic ideas at
https://etherpad.openstack.org/p/TYO-ops-meetup-2018

-

The spring Ops meetup in Tokyo is rapidly approaching. Details on the
event can be found here:

https://www.okura-nikko.com/japan/tokyo/hotel-jal-city-tamachi-tokyo/

Registration can be done here:

https://www.eventbrite.com/e/openstack-ops-meetup-tokyo-tickets-39089912982

Most importantly, we need to press forward with solidifying the
content. The planning document for the schedule can be found here:

https://etherpad.openstack.org/p/TYO-ops-meetup-2018

We have many great session ideas, but are very short on moderators
presently. If you plan to attend and are willing to lead a session,
please place your name on the list of moderators starting at Line 149.
If you have a specific session you would like to handle, feel free to
put your name next to it on the list starting at line 22.

We also have room for a few more sessions, especially in the
Enterprise track, so if you would like to propose something, feel free
to add it to the list. We will probably close out new session topics
and start setting the schedule next Tuesday.

Looking forward to seeing lots of you there!

Cheers,
Erik

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Is there an Ops Meetup today?

2018-02-06 Thread Erik McCormick
It was moved to 10am EST due to lots of conflicts. Need to update the wiki.

On Feb 6, 2018 9:11 AM, "Jimmy McArthur"  wrote:

> Was it canceled?
>
> https://wiki.openstack.org/wiki/Ops_Meetups_Team
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Ops Meetups Team meeting minutes and next meeting

2018-01-16 Thread Erik McCormick
Planning for the Spring Ops Meetup in Tokyo (March 6 and 7) continues
to come together nicely. If you plan to join us, please go sign up at:
https://goo.gl/HBJkPy

Also, please help us to fill out the agenda by suggesting topics or
adding a +1 to the ones you like at:
https://etherpad.openstack.org/p/TYO-ops-meetup-2018

Meeting minutes are here:

Meeting ended Tue Jan 16 14:34:38 2018 UTC.
Minutes:  
http://eavesdrop.openstack.org/meetings/ops_meetup_team/2018/ops_meetup_team.2018-01-16-14.08.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/ops_meetup_team/2018/ops_meetup_team.2018-01-16-14.08.txt
Log:
http://eavesdrop.openstack.org/meetings/ops_meetup_team/2018/ops_meetup_team.2018-01-16-14.08.log.html

Next meeting will be Tuesday, January 23 at 14:00 UTC.

Thanks,
Erik

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] Upstream LTS Releases

2017-11-14 Thread Erik McCormick
On Tue, Nov 14, 2017 at 4:10 PM, Rochelle Grober
 wrote:
> Folks,
>
> This discussion and the people interested in it seem like a perfect 
> application of the SIG process.  By turning LTS into a SIG, everyone can 
> discuss the issues on the SIG mailing list and the discussion shouldn't end 
> up split.  If it turns into a project, great.  If a solution is found that 
> doesn't need a new project, great.  Even once  there is a decision on how to 
> move forward, there will still be implementation issues and enhancements, so 
> the SIG could very well be long-lived.  But the important aspect of this is:  
> keeping the discussion in a place where both devs and ops can follow the 
> whole thing and act on recommendations.
>
> Food for thought.
>
> --Rocky
>
Just to add more legs to the spider that is this thread: I think the
SIG idea is a good one. It may evolve into a project team some day,
but for now it's a free-for-all polluting 2 mailing lists, and
multiple etherpads. How do we go about creating one?

-Erik

>> -Original Message-
>> From: Blair Bethwaite [mailto:blair.bethwa...@gmail.com]
>> Sent: Tuesday, November 14, 2017 8:31 AM
>> To: OpenStack Development Mailing List (not for usage questions)
>> ; openstack-oper. > operat...@lists.openstack.org>
>> Subject: Re: [openstack-dev] Upstream LTS Releases
>>
>> Hi all - please note this conversation has been split variously across -dev 
>> and -
>> operators.
>>
>> One small observation from the discussion so far is that it seems as though
>> there are two issues being discussed under the one banner:
>> 1) maintain old releases for longer
>> 2) do stable releases less frequently
>>
>> It would be interesting to understand if the people who want longer
>> maintenance windows would be helped by #2.
>>
>> On 14 November 2017 at 09:25, Doug Hellmann 
>> wrote:
>> > Excerpts from Bogdan Dobrelya's message of 2017-11-14 17:08:31 +0100:
>> >> >> The concept, in general, is to create a new set of cores from
>> >> >> these groups, and use 3rd party CI to validate patches. There are
>> >> >> lots of details to be worked out yet, but our amazing UC (User
>> >> Committee) will begin working out the details.
>> >> >
>> >> > What is the most worrying is the exact "take over" process. Does it
>> >> > mean that the teams will give away the +2 power to a different
>> >> > team? Or will our (small) stable teams still be responsible for
>> >> > landing changes? If so, will they have to learn how to debug 3rd party 
>> >> > CI
>> jobs?
>> >> >
>> >> > Generally, I'm scared of both overloading the teams and losing the
>> >> > control over quality at the same time :) Probably the final proposal 
>> >> > will
>> clarify it..
>> >>
>> >> The quality of backported fixes is expected to be a direct (and
>> >> only?) interest of those new teams of new cores, coming from users
>> >> and operators and vendors. The more parties to establish their 3rd
>> >> party
>> >
>> > We have an unhealthy focus on "3rd party" jobs in this discussion. We
>> > should not assume that they are needed or will be present. They may
>> > be, but we shouldn't build policy around the assumption that they
>> > will. Why would we have third-party jobs on an old branch that we
>> > don't have on master, for instance?
>> >
>> >> checking jobs, the better proposed changes communicated, which
>> >> directly affects the quality in the end. I also suppose, contributors
>> >> from ops world will likely be only struggling to see things getting
>> >> fixed, and not new features adopted by legacy deployments they're used
>> to maintain.
>> >> So in theory, this works and as a mainstream developer and
>> >> maintainer, you need no to fear of losing control over LTS code :)
>> >>
>> >> Another question is how to not block all on each over, and not push
>> >> contributors away when things are getting awry, jobs failing and
>> >> merging is blocked for a long time, or there is no consensus reached
>> >> in a code review. I propose the LTS policy to enforce CI jobs be
>> >> non-voting, as a first step on that way, and giving every LTS team
>> >> member a core rights maybe? Not sure if that works though.
>> >
>> > I'm not sure what change you're proposing for CI jobs and their voting
>> > status. Do you mean we should make the jobs non-voting as soon as the
>> > branch passes out of the stable support period?
>> >
>> > Regarding the review team, anyone on the review team for a branch that
>> > goes out of stable support will need to have +2 rights in that branch.
>> > Otherwise there's no point in saying that they're maintaining the
>> > branch.
>> >
>> > Doug
>> >
>> >
>> __
>> 
>> >  OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > 

Re: [Openstack-operators] [openstack-dev] Upstream LTS Releases

2017-11-14 Thread Erik McCormick
On Tue, Nov 14, 2017 at 6:44 PM, John Dickinson  wrote:
>
>
> On 14 Nov 2017, at 15:18, Mathieu Gagné wrote:
>
>> On Tue, Nov 14, 2017 at 6:00 PM, Fox, Kevin M  wrote:
>>> The pressure for #2 comes from the inability to skip upgrades and the fact 
>>> that upgrades are hugely time consuming still.
>>>
>>> If you want to reduce the push for number #2 and help developers get their 
>>> wish of getting features into users hands sooner, the path to upgrade 
>>> really needs to be much less painful.
>>>
>>
>> +1000
>>
>> We are upgrading from Kilo to Mitaka. It took 1 year to plan and
>> execute the upgrade. (and we skipped a version)
>> Scheduling all the relevant internal teams is a monumental task
>> because we don't have dedicated teams for those projects and they have
>> other priorities.
>> Upgrading affects a LOT of our systems, some we don't fully have
>> control over. And it can takes months to get new deployment on those
>> systems. (and after, we have to test compatibility, of course)
>>
>> So I guess you can understand my frustration when I'm told to upgrade
>> more often and that skipping versions is discouraged/unsupported.
>> At the current pace, I'm just falling behind. I *need* to skip
>> versions to keep up.
>>
>> So for our next upgrades, we plan on skipping even more versions if
>> the database migration allows it. (except for Nova which is a huge
>> PITA to be honest due to CellsV1)
>> I just don't see any other ways to keep up otherwise.
>
> ?!?!
>
> What does it take for this to never happen again? No operator should need to 
> plan and execute an upgrade for a whole year to upgrade one year's worth of 
> code development.
>
> We don't need new policies, new teams, more releases, fewer releases, or 
> anything like that. The goal is NOT "let's have an LTS release". The goal 
> should be "How do we make sure Mattieu and everyone else in the world can 
> actually deploy and use the software we are writing?"
>
> Can we drop the entire LTS discussion for now and focus on "make upgrades 
> take less than a year" instead? After we solve that, let's come back around 
> to LTS versions, if needed. I know there's already some work around that. 
> Let's focus there and not be distracted about the best bureaucracy for not 
> deleting two-year-old branches.
>
>
> --John
>
>
>
> /me puts on asbestos pants
>

OK, let's tone down the flamethrower there a bit Mr. Asbestos Pants
;). The LTS push is not in lieu of the quest for simpler upgrades. There
is also an effort to enable fast-forward upgrades going on. However,
this is a non-trivial task that will take many cycles to get to a
point where it's truly what you're looking for. The long term desire
of having LTS releases encompasses being able to hop from one LTS to
the next without stopping over. We just aren't there yet.

However, what we *can* do is make it so when mgagne finally gets to
Newton (or Ocata or wherever) on his next run, the code isn't
completely EOL and it can still receive some important patches. This
can be accomplished in the very near term, and that is what a certain
subset of us are focused on.

We still desire to skip versions. We still desire to have upgrades be
non-disruptive and non-destructive. This is just one step on the way
to that. This discussion has been going on for cycle after cycle with
little more than angst between ops and devs to show for it. This is
the first time we've had progress on this ball of goo that really
matters. Let's all be proactive contributors to the solution.

Those interested in having a say in the policy, put your $0.02 here:
https://etherpad.openstack.org/p/LTS-proposal

Peace, Love, and International Grooviness,
Erik

>>
>> --
>> Mathieu
>>
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] Upstream LTS Releases

2017-11-14 Thread Erik McCormick
On Tue, Nov 14, 2017 at 11:30 AM, Blair Bethwaite
 wrote:
> Hi all - please note this conversation has been split variously across
> -dev and -operators.
>
> One small observation from the discussion so far is that it seems as
> though there are two issues being discussed under the one banner:
> 1) maintain old releases for longer
> 2) do stable releases less frequently
>
> It would be interesting to understand if the people who want longer
> maintenance windows would be helped by #2.
>

I would like to hear from people who do *not* want #2 and why not.
What are the benefits of 6 months vs. 1 year? I have heard objections
in the hallway track, but I have struggled to retain the rationale for
more than 10 seconds. I think this may be more of a religious
discussion that could take a while though.

#1 is something we can act on right now with the eventual goal of
being able to skip releases entirely. We are addressing the
maintenance of the old issue right now. As we get farther down the
road of fast-forward upgrade tooling, then we will be able to please
those wishing for a slower upgrade cadence, and those that want to
stay on the bleeding edge simultaneously.

-Erik

> On 14 November 2017 at 09:25, Doug Hellmann  wrote:
>> Excerpts from Bogdan Dobrelya's message of 2017-11-14 17:08:31 +0100:
>>> >> The concept, in general, is to create a new set of cores from these
>>> >> groups, and use 3rd party CI to validate patches. There are lots of
>>> >> details to be worked out yet, but our amazing UC (User Committee) will
>>> >> begin working out the details.
>>> >
>>> > What is the most worrying is the exact "take over" process. Does it mean 
>>> > that
>>> > the teams will give away the +2 power to a different team? Or will our 
>>> > (small)
>>> > stable teams still be responsible for landing changes? If so, will they 
>>> > have to
>>> > learn how to debug 3rd party CI jobs?
>>> >
>>> > Generally, I'm scared of both overloading the teams and losing the 
>>> > control over
>>> > quality at the same time :) Probably the final proposal will clarify it..
>>>
>>> The quality of backported fixes is expected to be a direct (and only?)
>>> interest of those new teams of new cores, coming from users and
>>> operators and vendors. The more parties to establish their 3rd party
>>
>> We have an unhealthy focus on "3rd party" jobs in this discussion. We
>> should not assume that they are needed or will be present. They may be,
>> but we shouldn't build policy around the assumption that they will. Why
>> would we have third-party jobs on an old branch that we don't have on
>> master, for instance?
>>
>>> checking jobs, the better proposed changes communicated, which directly
>>> affects the quality in the end. I also suppose, contributors from ops
>>> world will likely be only struggling to see things getting fixed, and
>>> not new features adopted by legacy deployments they're used to maintain.
>>> So in theory, this works and as a mainstream developer and maintainer,
>>> you need no to fear of losing control over LTS code :)
>>>
>>> Another question is how to not block all on each over, and not push
>>> contributors away when things are getting awry, jobs failing and merging
>>> is blocked for a long time, or there is no consensus reached in a code
>>> review. I propose the LTS policy to enforce CI jobs be non-voting, as a
>>> first step on that way, and giving every LTS team member a core rights
>>> maybe? Not sure if that works though.
>>
>> I'm not sure what change you're proposing for CI jobs and their voting
>> status. Do you mean we should make the jobs non-voting as soon as the
>> branch passes out of the stable support period?
>>
>> Regarding the review team, anyone on the review team for a branch
>> that goes out of stable support will need to have +2 rights in that
>> branch. Otherwise there's no point in saying that they're maintaining
>> the branch.
>>
>> Doug
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> --
> Cheers,
> ~Blairo
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] Upstream LTS Releases

2017-11-07 Thread Erik McCormick
On Nov 8, 2017 1:52 PM, "James E. Blair" <cor...@inaugust.com> wrote:

Erik McCormick <emccorm...@cirrusseven.com> writes:

> On Tue, Nov 7, 2017 at 6:45 PM, James E. Blair <cor...@inaugust.com>
wrote:
>> Erik McCormick <emccorm...@cirrusseven.com> writes:
>>
>>> The concept, in general, is to create a new set of cores from these
>>> groups, and use 3rd party CI to validate patches. There are lots of
>>> details to be worked out yet, but our amazing UC (User Committee) will
>>> begin working out the details.
>>
>> I regret that due to a conflict I was unable to attend this session.
>> Can you elaborate on why third-party CI would be necessary for this,
>> considering that upstream CI already exists on all active branches?
>
> Lack of infra resources, people are already maintaining their own
> testing for old releases, and distribution of work across
> organizations I think were the chief reasons. Someone else feel free
> to chime in and expand on it.

Which resources are lacking?  I wasn't made aware of a shortage of
upstream CI resources affecting stable branch work, but if there is, I'm
sure we can address it -- this is a very important effort.




It's not a matter of things lacking for today's release cadence and
deprecation policy. That is working fine.  The problems would come if you
had to,  say,  continue to run it for Mitaka until Queens is released.

The upstream CI system is also a collaboratively maintained system with
folks from many organizations participating in it.  Indeed we're now
distributing its maintenance and operation into projects themselves.
It seems like an ideal place for folks from different organizations to
collaborate.


Monty, as well as the Stable Branch cores, were in the room, so perhaps
they can elaborate on this for us.  I'm no expert on what can and cannot be
done.

-Jim
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] Upstream LTS Releases

2017-11-07 Thread Erik McCormick
On Tue, Nov 7, 2017 at 6:45 PM, James E. Blair <cor...@inaugust.com> wrote:
> Erik McCormick <emccorm...@cirrusseven.com> writes:
>
>> The concept, in general, is to create a new set of cores from these
>> groups, and use 3rd party CI to validate patches. There are lots of
>> details to be worked out yet, but our amazing UC (User Committee) will
>> begin working out the details.
>
> I regret that due to a conflict I was unable to attend this session.
> Can you elaborate on why third-party CI would be necessary for this,
> considering that upstream CI already exists on all active branches?
>
> Thanks,
>
> Jim

Lack of infra resources, people are already maintaining their own
testing for old releases, and distribution of work across
organizations I think were the chief reasons. Someone else feel free
to chime in and expand on it.

-Erik

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Upstream LTS Releases

2017-11-07 Thread Erik McCormick
Hello Ops folks,

This morning at the Sydney Summit we had a very well attended and very
productive session about how to go about keeping a selection of past
releases available and maintained for a longer period of time (LTS).

There was agreement in the room that this could be accomplished by
moving the responsibility for those releases from the Stable Branch
team down to those who are already creating and testing patches for
old releases: The distros, deployers, and operators.

The concept, in general, is to create a new set of cores from these
groups, and use 3rd party CI to validate patches. There are lots of
details to be worked out yet, but our amazing UC (User Committee) will
be begin working out the details.

Please take a look at the Etherpad from the session if you'd like to
see the details. More importantly, if you would like to contribute to
this effort, please add your name to the list starting on line 133.

https://etherpad.openstack.org/p/SYD-forum-upstream-lts-releases

Thanks to everyone who participated!

Cheers,
Erik

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] [skip-level-upgrades][fast-forward-upgrades] PTG summary

2017-10-31 Thread Erik McCormick
The etherpad for the Fast-Forward Upgrades session at the Sydney forum is here:

https://etherpad.openstack.org/p/SYD-forum-fast-forward-upgrades

Please help us flesh it out and frame the discussion to make the best
use of our time. I have included reference materials from previous
sessions to use as a starting point. Thanks to everyone for
participating!

Cheers,
Erik

On Mon, Oct 30, 2017 at 11:25 PM,  <arkady.kanev...@dell.com> wrote:
> See you there Eric.
>
>
>
> From: Erik McCormick [mailto:emccorm...@cirrusseven.com]
> Sent: Monday, October 30, 2017 10:58 AM
> To: Matt Riedemann <mriede...@gmail.com>
> Cc: OpenStack Development Mailing List <openstack-...@lists.openstack.org>;
> openstack-operators <openstack-operators@lists.openstack.org>
> Subject: Re: [openstack-dev] [Openstack-operators]
> [skip-level-upgrades][fast-forward-upgrades] PTG summary
>
>
>
>
>
>
>
> On Oct 30, 2017 11:53 AM, "Matt Riedemann" <mriede...@gmail.com> wrote:
>
> On 9/20/2017 9:42 AM, arkady.kanev...@dell.com wrote:
>
> Lee,
> I can chair meeting in Sydney.
> Thanks,
> Arkady
>
>
>
> Arkady,
>
> Are you actually moderating the forum session in Sydney because the session
> says Eric McCormick is the session moderator:
>
>
>
> I submitted it so it gets my name on it. I think Arkady and I are going to
> do it together.
>
>
>
> https://www.openstack.org/summit/sydney-2017/summit-schedule/events/20451/fast-forward-upgrades
>
> People are asking in the nova IRC channel about this session and were told
> to ask Jay Pipes about it, but Jay isn't going to be in Sydney and isn't
> involved in fast-forward upgrades, as far as I know anyway.
>
> So whoever is moderating this session, can you please create an etherpad and
> get it linked to the wiki?
>
> https://wiki.openstack.org/wiki/Forum/Sydney2017
>
>
>
> I'll have the etherpad up today and pass it along here and on the wiki.
>
>
>
>
>
> --
>
> Thanks,
>
> Matt
>
>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] [skip-level-upgrades][fast-forward-upgrades] PTG summary

2017-10-30 Thread Erik McCormick
On Oct 30, 2017 11:53 AM, "Matt Riedemann"  wrote:

On 9/20/2017 9:42 AM, arkady.kanev...@dell.com wrote:

> Lee,
> I can chair meeting in Sydney.
> Thanks,
> Arkady
>

Arkady,

Are you actually moderating the forum session in Sydney because the session
says Eric McCormick is the session moderator:


I submitted it so it gets my name on it. I think Arkady and I are going to
do it together.

https://www.openstack.org/summit/sydney-2017/summit-schedule
/events/20451/fast-forward-upgrades

People are asking in the nova IRC channel about this session and were told
to ask Jay Pipes about it, but Jay isn't going to be in Sydney and isn't
involved in fast-forward upgrades, as far as I know anyway.

So whoever is moderating this session, can you please create an etherpad
and get it linked to the wiki?

https://wiki.openstack.org/wiki/Forum/Sydney2017


I'll have the etherpad up today and pass it along here and on the wiki.



-- 

Thanks,

Matt


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] It's time...

2017-10-04 Thread Erik McCormick
Tom,

Thank you for all you've done herding cats around here. We wouldn't be
where we are today without you. We better find a very large pub in
Sydney. See you there!

All the best on your future sunbathing endeavors :)

Cheers,
Erik

On Wed, Oct 4, 2017 at 10:20 AM, Tom Fifield  wrote:
> Hi all,
>
> Tom here, on a personal note.
>
> It's quite fitting that this November our summit is in Australia :)
>
> I'm hoping to see you there because after being part of 15 releases, and
> travelling the equivalent of a couple of round trips to the moon to witness
> OpenStack grow around the world, the timing is right for me to step down as
> your Community Manager.
>
> We've had an incredible journey together, culminating in the healthy
> community we have today. Across more than 160 countries, users and
> developers collaborate to make clouds better for the work that matters. The
> diversity of use is staggering, and the scale of resources being run is
> quite significant. We did that :)
>
>
> Behind the scenes, I've spent the past couple of months preparing to
> transition various tasks to other members of the Foundation staff. If you
> see a new name behind an openstack.org email address, please give them due
> attention and care - they're all great people. I'll be around through to
> year end to shepherd the process, so please ping me if you are worried about
> anything.
>
> Always remember, you are what makes OpenStack. OpenStack changes and thrives
> based on how you feel and what work you do. It's been a privilege to share
> the journey with you.
>
>
>
> So, my plan? After a decade of diligent effort in organisations
> euphemistically described as "minimally-staffed", I'm looking forward to
> taking a decent holiday. Though, if you have a challenge interesting enough
> to wrest someone from a tropical beach or a misty mountain top ... ;)
>
>
> There are a lot of you out there to whom I remain indebted. Stay in touch to
> make sure your owed drinks make it to you!
>
> +886 988 33 1200
> t...@tomfifield.net
> https://www.linkedin.com/in/tomfifield
> https://twitter.com/TomFifield
>
>
> Regards,
>
>
>
> Tom
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [magnum] issue using magnum on Pike

2017-09-29 Thread Erik McCormick
The current release of Magnum is 5.0.1. You seem to be running a later dev
release. Perhaps some regression got introduced in that build?
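
If you want to dig a little, one quick way (the path is a guess for the SUSE
packaging, adjust as needed) to see which module is being asked for that
attribute is:

  grep -rn "APIClient" /usr/lib/python2.7/site-packages/magnum/

If it points at the python docker library, note that docker.APIClient only
exists in the 2.x docker python package (older docker-py releases call it
docker.Client), so an old python-docker alongside newer Magnum code would
throw exactly that AttributeError.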

-Erik


On Sep 29, 2017 8:59 AM, "Andy Wojnarek" 
wrote:

So I started a fresh install of Pike on OpenSuSE in my test lab at work,
and I’m having a hard time getting Magnum to work. I’m getting this error
on Cluster Create:



http://paste.openstack.org/show/622304/



(AttributeError: 'module' object has no attribute 'APIClient')





*I’m running OpenSuSE 42.2, here are my magnum packages:*

gvicopnstk01:~ # rpm -qa | grep -i magnum

openstack-magnum-api-5.0.2~dev8-1.2.noarch

python-magnum-5.0.2~dev8-1.2.noarch

openstack-magnum-5.0.2~dev8-1.2.noarch

openstack-magnum-conductor-5.0.2~dev8-1.2.noarch

python-magnumclient-2.6.0-1.11.noarch





*Command I’m running to create the cluster:*

gvicopnstk01:~ # magnum cluster-create --name k8s-cluster
 --cluster-template k8s-cluster-template   --master-count 1   --node-count 1





*The Template I’m using:*

gvicopnstk01:~ # magnum cluster-template-show 6fa514c1-f598-46b1-8bba-
6c7c728094bc

+-----------------------+--------------------------------------+
| Property              | Value                                |
+-----------------------+--------------------------------------+
| insecure_registry     | -                                    |
| labels                | {}                                   |
| updated_at            | -                                    |
| floating_ip_enabled   | True                                 |
| fixed_subnet          | -                                    |
| master_flavor_id      | m1.small                             |
| uuid                  | 6fa514c1-f598-46b1-8bba-6c7c728094bc |
| no_proxy              | -                                    |
| https_proxy           | -                                    |
| tls_disabled          | False                                |
| keypair_id            | AW                                   |
| public                | False                                |
| http_proxy            | -                                    |
| docker_volume_size    | -                                    |
| server_type           | vm                                   |
| external_network_id   | provider                             |
| cluster_distro        | fedora-atomic                        |
| image_id              | fedora-atomic-ocata                  |
| volume_driver         | -                                    |
| registry_enabled      | False                                |
| docker_storage_driver | devicemapper                         |
| apiserver_port        | -                                    |
| name                  | k8s-cluster-template                 |
| created_at            | 2017-09-28T19:25:58+00:00            |
| network_driver        | flannel                              |
| fixed_network         | -                                    |
| coe                   | kubernetes                           |
| flavor_id             | m1.small                             |
| master_lb_enabled     | False                                |
| dns_nameserver        | 192.168.240.150                      |
+-----------------------+--------------------------------------+







(The image name is Ocata because I downloaded the Ocata image, I figured it
was fine)



I cannot find anything on Google about the error I'm getting. Has anyone got
any ideas on the right direction I should go?



Thanks,

Andrew Wojnarek |  Sr. Systems Engineer| ATS Group, LLC

mobile 717.856.6901 | andy.wojna...@theatsgroup.com

*Galileo Performance Explorer Blog* - Offers Deep Insights for Server/Storage Systems

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Ceph Session at the Forum

2017-09-28 Thread Erik McCormick
Hey Ops folks,

A Ceph session was put on the discussion Etherpad for the forum, and I
know a lot of folks have expressed interest in doing one, especially
since there's no Ceph Day going on this time around.

I need a volunteer to run the session and set up an agenda. If you're
willing and able to do it, you can either submit the session yourself
at http://forumtopics.openstack.org/ or let me know and I'll be happy
to add it.

Cheers,
Erik

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [tc][nova][ironic][mogan] Evaluate Mogan project

2017-09-26 Thread Erik McCormick
My main question here would be this: If you feel there are deficiencies in
Ironic, why not contribute to improving Ironic rather than spawning a whole
new project?

I am happy to take a look at it, and I'm by no means trying to contradict
your assumptions here. I just get concerned with the overhead and confusion
that comes with competing projects.

Also, if you'd like to discuss this in detail with a room full of bodies, I
suggest proposing a session for the Forum in Sydney. If some of the
contributors will be there, it would be a good opportunity for you to get
feedback.

Cheers,
Erik


On Sep 26, 2017 8:41 PM, "Matt Riedemann"  wrote:

> On 9/25/2017 6:27 AM, Zhenguo Niu wrote:
>
>> Hi folks,
>>
>> First of all, thanks for the audiences for Mogan project update in the TC
>> room during Denver PTG. Here we would like to get more suggestions before
>> we apply for inclusion.
>>
>> Speaking only for myself, I find the current direction of one
>> API+scheduler for vm/baremetal/container unfortunate. After containers
>> management moved out to be a separated project Zun, baremetal with Nova and
>> Ironic continues to be a pain point.
>>
>> #. API
>> Only part of the Nova APIs and parameters can apply to baremetal
>> instances, meanwhile for interoperable with other virtual drivers, bare
>> metal specific APIs such as deploy time RAID, advanced partitions can not
>>  be included. It's true that we can support various compute drivers, but
>> the reality is that the support of each of hypervisor is not equal,
>> especially for bare metals in a virtualization world. But I understand the
>> problems with that as Nova was designed to provide compute
>> resources(virtual machines) instead of bare metals.
>>
>> #. Scheduler
>> Bare metal doesn't fit in to the model of 1:1 nova-compute to resource,
>> as nova-compute processes can't be run on the inventory nodes themselves.
>> That is to say host aggregates, availability zones and such things based on
>> compute service(host) can't be applied to bare metal resources. And for
>> grouping like anti-affinity, the granularity is also not same with virtual
>> machines, bare metal users may want their HA instances not on the same
>> failure domain instead of the node itself. Short saying, we can only get a
>> rigid resource class only scheduling for bare metals.
>>
>>
>> And most of the cloud providers in the market offering virtual machines
>> and bare metals as separated resources, but unfortunately, it's hard to
>> achieve this with one compute service. I heard people are deploying
>> seperated Nova for virtual machines and bare metals with many downstream
>> hacks to the bare metal single-driver Nova but as the changes to Nova would
>> be massive and may invasive to virtual machines, it seems not practical to
>> be upstream.
>>
>> So we created Mogan [1] about one year ago, which aims to offer bare
>> metals as first class resources to users with a set of bare metal specific
>> API and a baremetal-centric scheduler(with Placement service). It was like
>> an experimental project at the beginning, but the outcome makes us believe
>> it's the right way. Mogan will fully embrace Ironic for bare metal
>> provisioning and with RSD server [2] introduced to OpenStack, it will be a
>> new world for bare metals, as with that we can compose hardware resources
>> on the fly.
>>
>> Also, I would like to clarify the overlaps between Mogan and Nova, I bet
>> there must be some users who wants to use one API for the compute resources
>> management as they don't care about whether it's a virtual machine or a
>> bare metal server. Baremetal driver with Nova is still the right choice for
>> such users to get raw performance compute resources. On the contrary, Mogan
>> is for real bare metal users and cloud providers who wants to offer bare
>> metals as a separated resources.
>>
>> Thank you for your time!
>>
>>
>> [1] https://wiki.openstack.org/wiki/Mogan
>> [2] https://www.intel.com/content/www/us/en/architecture-and-tec
>> hnology/rack-scale-design-overview.html
>>
>> --
>> Best Regards,
>> Zhenguo Niu
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> Cross-posting to the operators list since they are the community that
> you'll likely need to convince the most about Mogan and whether or not they
> want to start experimenting with it.
>
> --
>
> Thanks,
>
> Matt
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org

Re: [Openstack-operators] Sydney Forum Topics

2017-09-26 Thread Erik McCormick
Sorry, clipboard fail. Thanks!

On Sep 26, 2017 11:54 AM, "Jimmy McArthur" <ji...@openstack.org> wrote:

> I think you want this page, actualy: http://forumtopics.openstack.org/
>
> Erik McCormick <emccorm...@cirrusseven.com>
> September 26, 2017 at 9:53 AM
> Hey Ops folks,
>
> We are in the process of submitting sessions for the Forum to the
> foundation tool. You can see what is in so far here:
>
> http://forumtopics.openstack.org/cfp/
>
> I wanted to give everyone one last chance to go to
> https://etherpad.openstack.org/p/SYD-ops-session-ideas and add session
> ideas, +1, or comment on existing proposals.
>
> Looking forward to seeing a lot of you in Sydney!
>
> Thanks,
> Erik
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Sydney Forum Topics

2017-09-26 Thread Erik McCormick
Hey Ops folks,

We are in the process of submitting sessions for the Forum to the
foundation tool. You can see what is in so far here:

http://forumtopics.openstack.org/cfp/

I wanted to give everyone one last chance to go to
https://etherpad.openstack.org/p/SYD-ops-session-ideas and add session
ideas, +1, or comment on existing proposals.

Looking forward to seeing a lot of you in Sydney!

Thanks,
Erik

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Forum Brainstorming

2017-09-05 Thread Erik McCormick
Hello Ops!

As a followup to this, the Ops Meetup Team has set up a brainstorming
etherpad to discuss possible forum sessions for Sydney.

https://etherpad.openstack.org/p/SYD-ops-session-ideas

This works the same as our Ops Mid-Cycle meetups. Post your session
ideas, comment on other listed proposals, and +1 the ones you are
interested in seeing included. As always, please share your ideas even
if you will be unable to attend the Sydney Summit.

Cheers,
Erik

On Mon, Sep 4, 2017 at 2:05 PM, Jimmy McArthur  wrote:
> Hi all,
>
> Welcome to the topic selection process for our Forum in Sydney. If you've
> participated in an ops meetup before, this should seem pretty comfortable.
> If not, note that this is not a classic conference track with speakers and
> presentations. OpenStack community members (participants  in development
> teams, working groups, and other interested individuals) discuss the topics
> they want to cover and get alignment on and we welcome your participation.
>
> The Forum is for the entire community to come together, to create a neutral
> space rather than having separate “ops” and “dev” days. Sydney marks the
> start of the Rocky release cycle, where ideas and requirements will be
> gathered. Users should aim to come armed with feedback from August's Pike
> release if at all possible. We aim to ensure the broadest coverage of topics
> that will allow for multiple  parts of the community getting together to
> discuss key areas within our community/projects.
>
> There are two stages to the brainstorming:
>
> Starting today, set up an etherpad with your group/team, or use one on the
> list and start discussing ideas you'd like to talk about at the Forum. Then,
> through mailing list discussion work out which ones are the most needed -
> just like you did prior to the ops events.
> Then, in a couple of weeks, we will open up a more formal web-based tool for
> submission of abstracts that came out of the brainstorming on top.
>
>
> Make an etherpad, or use one from the list at:
> https://wiki.openstack.org/wiki/Forum/Sydney2017
>
> One key thing we'd like to see is collaboration between every area of the
> community. Find an interested development project or three and share your
> ideas.
>
> Think about what kind of session ideas might end up as: Project-specific,
> cross-project or strategic/whole-of-community discussions. There'll be more
> slots for the latter two, so do try and think outside the box!
>
> This part of the process is where we gather broad community consensus - in
> theory the second part is just about fitting in as many of the good ideas
> into the schedule as we can.
>
> Further details about the forum can be found at:
> https://wiki.openstack.org/wiki/Forum
>
> Thanks!
> Jimmy McArthur on behalf of the OpenStack Foundation and User Committee
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] UDP Buffer Filling

2017-07-28 Thread Erik McCormick
On Jul 28, 2017 8:51 AM, "John Petrini"  wrote:

Hi Saverio,

Thanks for the info. The parameter is missing completely:


  
  
  
  
  


I've came across the blueprint for adding the image property
hw_vif_multiqueue_enabled. Do you know if this feature is available in
Mitaka?

It was merged 2 years ago so should have been there since Liberty.


John Petrini

Platforms Engineer   //   *CoreDial, LLC*   //   coredial.com


751 Arbor Way, Hillcrest I, Suite 150, Blue Bell, PA 19422
*P:* 215.297.4400 x232 <(215)%20297-4400>   //   *F: *215.297.4401
<(215)%20297-4401>   //   *E: *jpetr...@coredial.com


On Fri, Jul 28, 2017 at 3:59 AM, Saverio Proto  wrote:

> Hello John,
>
> a common problem is packets being dropped when they pass from the
> hypervisor to the instance. There is bottleneck there.
>
> check the 'virsh dumpxml' of one of the instances that is dropping
> packets. Check for the interface section, should look like:
>
> (the example interface XML didn't make it through the archive - the line to
> look for is the driver element with a queues attribute, just before the PCI
> address element ending in function='0x0')
>
> How many queues do you have? Usually having only 1, or the parameter
> missing completely, is not good.
>
> In Mitaka nova should use 1 queue for every instance CPU core you
> have. It is worth checking whether this is set correctly in your setup.
>
> Cheers,
>
> Saverio
>
>
>
> 2017-07-27 17:49 GMT+02:00 John Petrini :
> > Hi List,
> >
> > We are running Mitaka with VLAN provider networking. We've recently
> > encountered a problem where the UDP receive queue on instances is
> filling up
> > and we begin dropping packets. Moving instances out of OpenStack onto
> bare
> > metal resolves the issue completely.
> >
> > These instances are running asterisk which should be pulling these
> packets
> > off the queue but it appears to be falling behind no matter the
> resources we
> > give it.
> >
> > We can't seem to pin down a reason why we would see this behavior in KVM
> but
> > not on metal. I'm hoping someone on the list might have some insight or
> > ideas.
> >
> > Thank You,
> >
> > John
> >
> > ___
> > OpenStack-operators mailing list
> > OpenStack-operators@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> >
>


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Ops and BoF sessions at the Summit / Forum

2017-07-12 Thread Erik McCormick
I am personally in favor of many Ceph sessions. As you suggested
below, you should pitch your sessions when the forum session process
starts. I'm sure you'll receive plenty of traffic. If you're
interested in coming to Mexico City August 9 - 10, or would like us to
discuss some Ceph-related topics as a group for you to follow up on
later, you could also pitch a session or two for the Ops Mid-Cycle
meetup here:

https://etherpad.openstack.org/p/MEX-ops-meetup

Cheers,
Erik

On Wed, Jul 12, 2017 at 8:51 PM, Blair Bethwaite
 wrote:
> This didn't get a reply yet on the UC list so I thought maybe adding
> OS-ops would get some attention...
>
> On 10 July 2017 at 22:32, Blair Bethwaite  wrote:
>> Hi all,
>>
>> We use Ceph along with our OpenStack. At Summits gone by I've
>> typically participated in very well attended Ceph ops sessions. With
>> CephFS having been dubbed production-ready for over 6 months now, some
>> colleagues suggested proposing a CephFS BoF for OpenStack Sydney,
>> hence the message...
>>
>> In Boston (coinciding with the beginning of the Forum) there wasn't a
>> Ceph ops session. There was however an entire Ceph Day co-located
>> (which was great by the way - kudos to whoever came up with the
>> OpenStack Summit + Open Source Days idea).
>>
>> Still, a few regulars were sad to miss out on the real-time mailing
>> list experience you get in dedicated users/ops sessions like these. By
>> the time we realised there wasn't one on the schedule in was a bit
>> late (and unfortunately we couldn't use the free spots on the Thursday
>> as that would have clashed with the Ceph Day programme). Arguably the
>> organisers of the Ceph Day could/should have included a general
>> user/ops BoF/feedback session within that sub-programme - it looks
>> like there won't be any co-located Open Source Days in Sydney though,
>> so that point is somewhat moot this time around.
>>
>> The purpose of this message is to figure out where this sort of stuff
>> belongs now in the new Summit format? It naturally seems to fit in the
>> Forum, but I think the way the Forum brainstorming was posed for
>> Boston and the fact it was the first go-around for the Forum meant
>> that no-one thought to propose these sorts of originally Ops-Summit
>> things.
>>
>> Am I right to wait for the Sydney Forum process to kick off or should
>> we be submitting something into the main call for presentations now?
>>
>> Cheers,
>>
>> --
>> Blair Bethwaite
>> Senior HPC Consultant
>>
>> Monash eResearch Centre
>> Monash University
>> Room G26, 15 Innovation Walk, Clayton Campus
>> Clayton VIC 3800
>> Australia
>> Mobile: 0439-545-002
>> Office: +61 3-9903-2800
>
>
>
> --
> Blair Bethwaite
> Senior HPC Consultant
>
> Monash eResearch Centre
> Monash University
> Room G26, 15 Innovation Walk, Clayton Campus
> Clayton VIC 3800
> Australia
> Mobile: 0439-545-002
> Office: +61 3-9903-2800
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] [keystone][nova][cinder][glance][neutron][horizon][policy] defining admin-ness

2017-06-06 Thread Erik McCormick
On Tue, Jun 6, 2017 at 4:44 PM, Lance Bragstad  wrote:
>
>
> On Tue, Jun 6, 2017 at 3:06 PM, Marc Heckmann 
> wrote:
>>
>> Hi,
>>
>> On Tue, 2017-06-06 at 10:09 -0500, Lance Bragstad wrote:
>>
>> Also, with all the people involved with this thread, I'm curious what the
>> best way is to get consensus. If I've tallied the responses properly, we
>> have 5 in favor of option #2 and 1 in favor of option #3. This week is spec
>> freeze for keystone, so I see a slim chance of this getting committed to
>> Pike [0]. If we do have spare cycles across the team we could start working
>> on an early version and get eyes on it. If we straighten out everyone
>> concerns early we could land option #2 early in Queens.
>>
>>
>> I was the only one in favour of option 3 only because I've spent a bunch
>> of time playing with option #1 in the past. As I mentioned previously in the
>> thread, if #2 is more in line with where the project is going, then I'm all
>> for it. At this point, the admin scope issue has been around long enough
>> that Queens doesn't seem that far off.
>
>
> From an administrative point-of-view, would you consider option #1 or option
> #2 to better long term?
>

Count me as another +1 for option 2. It's the right way to go long
term, and we've lived with how it is now long enough that I'm OK
waiting a release or even 2 more for it with things as is. I think
option 3 would just muddy the waters.

-Erik

>>
>>
>> -m
>>
>>
>> I guess it comes down to how fast folks want it.
>>
>> [0] https://review.openstack.org/#/c/464763/
>>
>> On Tue, Jun 6, 2017 at 10:01 AM, Lance Bragstad 
>> wrote:
>>
>> I replied to John, but directly. I'm sending the responses I sent to him
>> but with the intended audience on the thread. Sorry for not catching that
>> earlier.
>>
>>
>> On Fri, May 26, 2017 at 2:44 AM, John Garbutt 
>> wrote:
>>
>> +1 on not forcing Operators to transition to something new twice, even if
>> we did go for option 3.
>>
>>
>> The more I think about this, the more it worries me from a developer
>> perspective. If we ended up going with option 3, then we'd be supporting
>> both methods of elevating privileges. That means two paths for doing the
>> same thing in keystone. It also means oslo.context, keystonemiddleware, or
>> any other library consuming tokens that needs to understand elevated
>> privileges needs to understand both approaches.
>>
>>
>>
>> Do we have an agreed non-distruptive upgrade path mapped out yet? (For any
>> of the options) We spoke about fallback rules you pass but with a warning to
>> give us a smoother transition. I think that's my main objection with the
>> existing patches, having to tell all admins to get their token for a
>> different project, and give them roles in that project, all before being
>> able to upgrade.
>>
>>
>> Thanks for bringing up the upgrade case! You've kinda described an upgrade
>> for option 1. This is what I was thinking for option 2:
>>
>> - deployment upgrades to a release that supports global role assignments
>> - operator creates a set of global roles (i.e. global_admin)
>> - operator grants global roles to various people that need it (i.e. all
>> admins)
>> - operator informs admins to create globally scoped tokens
>> - operator rolls out necessary policy changes
>>
>> If I'm thinking about this properly, nothing would change at the
>> project-scope level for existing users (who don't need a global role
>> assignment). I'm hoping someone can help firm ^ that up or improve it if
>> needed.
>>
>>
>>
>> Thanks,
>> johnthetubaguy
>>
>> On Fri, 26 May 2017 at 08:09, Belmiro Moreira
>>  wrote:
>>
>> Hi,
>> thanks for bringing this into discussion in the Operators list.
>>
>> Option 1 and 2 and not complementary but complety different.
>> So, considering "Option 2" and the goal to target it for Queens I would
>> prefer not going into a migration path in
>> Pike and then again in Queens.
>>
>> Belmiro
>>
>> On Fri, May 26, 2017 at 2:52 AM, joehuang  wrote:
>>
>> I think a option 2 is better.
>>
>> Best Regards
>> Chaoyi Huang (joehuang)
>> 
>> From: Lance Bragstad [lbrags...@gmail.com]
>> Sent: 25 May 2017 3:47
>> To: OpenStack Development Mailing List (not for usage questions);
>> openstack-operators@lists.openstack.org
>> Subject: Re: [openstack-dev]
>> [keystone][nova][cinder][glance][neutron][horizon][policy] defining
>> admin-ness
>>
>> I'd like to fill in a little more context here. I see three options with
>> the current two proposals.
>>
>> Option 1
>>
>> Use a special admin project to denote elevated privileges. For those
>> unfamiliar with the approach, it would rely on every deployment having an
>> "admin" project defined in configuration [0].
>>
>> How it works:
>>
>> Role assignments on this project represent global scope which is denoted
>> by a boolean attribute 

[Openstack-operators] Ops Mid-Cycle Meetup - MEX - Session Planning

2017-05-30 Thread Erik McCormick
Hello Ops,

We have begun planning the details for our next mid-cycle meetup in
Mexico City, August 9 and 10th. We need help from all of you to come
up with session ideas and to provide feedback on the proposals of
others. Even if you are not planning to attend, your feedback would be
appreciated. You can always catch up via the etherpads afterwards.

The planning etherpad can be found here:

https://etherpad.openstack.org/p/MEX-ops-meetup

Add your session ideas to the "Session Proposals" section at the
bottom or comment and +1 sessions that are already there. The most
popular sessions will get included in the final schedule, so make sure
to vote for any that interest you.

Thanks!

-Erik

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Cannot launch instances on Ocata.

2017-05-17 Thread Erik McCormick
You'll want to check the nova-scheduler.log (controller) and the
nova-compute.log (compute). You can look for your request ID and then
go forward from there. Those should shed some more light on what the
issue is.

-Erik
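
For example, taking the request ID from the conductor traceback quoted below,
tracing it through the logs might look roughly like this (log paths are the
stock Ubuntu ones; adjust for your packaging):

    # on the controller: see why the scheduler rejected every host
    grep req-a9beeb33-9454-47a2-96e2-908d5b1e4c46 /var/log/nova/nova-scheduler.log
    # on each compute: check whether the request ever reached it and why any claim failed
    grep req-a9beeb33-9454-47a2-96e2-908d5b1e4c46 /var/log/nova/nova-compute.log
    # if nothing obvious turns up, set debug = True in the [DEFAULT] section of
    # nova.conf on the scheduler node, restart the service, and retry the boot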

On Wed, May 17, 2017 at 5:09 PM, Andy Wojnarek
 wrote:
> Hi,
>
>
>
> I have a new Openstack cloud running in our lab, but I am unable to launch
> instances. This is Ocata running on Ubuntu 16.04.2
>
>
>
> Here are the errors I am getting when trying to launch an instance:
>
>
>
> On my controller node in log file /var/log/nova/nova-conductor.log
>
> 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager
> [req-a9beeb33-9454-47a2-96e2-908d5b1e4c46 b07949d8ae7144049851c7abb39ac6db
> 4fd0307bf4b74c5a8718b180c24c7cff - - -] Failed to schedule instances
>
> 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager Traceback (most
> recent call last):
>
> 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager   File
> "/usr/lib/python2.7/dist-packages/nova/conductor/manager.py", line 866, in
> schedule_and_build_instances
>
> 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager
> request_specs[0].to_legacy_filter_properties_dict())
>
> 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager   File
> "/usr/lib/python2.7/dist-packages/nova/conductor/manager.py", line 597, in
> _schedule_instances
>
> 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager hosts =
> self.scheduler_client.select_destinations(context, spec_obj)
>
> 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager   File
> "/usr/lib/python2.7/dist-packages/nova/scheduler/utils.py", line 371, in
> wrapped
>
> 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager return
> func(*args, **kwargs)
>
> 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager   File
> "/usr/lib/python2.7/dist-packages/nova/scheduler/client/__init__.py", line
> 51, in select_destinations
>
> 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager return
> self.queryclient.select_destinations(context, spec_obj)
>
> 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager   File
> "/usr/lib/python2.7/dist-packages/nova/scheduler/client/__init__.py", line
> 37, in __run_method
>
> 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager return
> getattr(self.instance, __name)(*args, **kwargs)
>
> 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager   File
> "/usr/lib/python2.7/dist-packages/nova/scheduler/client/query.py", line 32,
> in select_destinations
>
> 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager return
> self.scheduler_rpcapi.select_destinations(context, spec_obj)
>
> 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager   File
> "/usr/lib/python2.7/dist-packages/nova/scheduler/rpcapi.py", line 129, in
> select_destinations
>
> 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager return
> cctxt.call(ctxt, 'select_destinations', **msg_args)
>
> 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager   File
> "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/client.py", line 169,
> in call
>
> 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager
> retry=self.retry)
>
> 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager   File
> "/usr/lib/python2.7/dist-packages/oslo_messaging/transport.py", line 97, in
> _send
>
> 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager
> timeout=timeout, retry=retry)
>
> 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager   File
> "/usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py",
> line 458, in send
>
> 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager retry=retry)
>
> 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager   File
> "/usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py",
> line 449, in _send
>
> 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager raise result
>
> 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager
> NoValidHost_Remote: No valid host was found. There are not enough hosts
> available.
>
> 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager Traceback (most
> recent call last):
>
> 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager
>
> 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager   File
> "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 218,
> in inner
>
> 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager return
> func(*args, **kwargs)
>
> 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager
>
> 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager   File
> "/usr/lib/python2.7/dist-packages/nova/scheduler/manager.py", line 98, in
> select_destinations
>
> 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager dests =
> self.driver.select_destinations(ctxt, spec_obj)
>
> 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager
>
> 2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager   File
> 

[Openstack-operators] [openstack-dev] Affected by OSIC, Layoffs? Or want to help?

2017-04-26 Thread Erik McCormick
I'm floating this dev thread over to ops as I imagine recent layoffs
could have affected some of you folks also, and ops are people too!

http://lists.openstack.org/pipermail/openstack-dev/2017-April/115812.html

Short version is, if you were planning to, or would like to attend the
Openstack Summit in Boston, there is some travel assistance money left
over. There are lots of companies hiring and we like to keep our
community meaningfully and gainfully engaged.

There were also offers on that thread of donated frequent flier miles
to help get people there. I'll put myself out there as willing to
donate 1 domestic US round trip, or one international one-way ticket
to a needy Stacker.

If you would like to take advantage of the offer, send an email to
Lauren Sell (lau...@openstack.org) ASAP. The hotel block gets turned
in on Friday.

Cheers,
Erik

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] openstack operators meetups team meeting 2017-4-11

2017-04-11 Thread Erik McCormick
Sorry for slacking off. I'm out on vacation this week. I'll be there next
week for sure!

Cheers,
Erik

On Apr 11, 2017 11:55 AM, "Chris Morgan"  wrote:

> Today's meeting was very thinly attended (minutes and log below). I would
> like to encourage as many as possible openstack operators (particularly
> those on the meetups team) to make next meeting (2017-4-18 at 15:00 UTC) or
> failing that let it be know if this time slot is no longer working.
>
> Next week I am going to propose we vote on accepting the proposal for the
> next mid-cycle meeting to be held in Mexico (details are here
> https://docs.google.com/document/d/1NdMCOTPP_ZmeF2Ak1mQB1bCOFHDkA5P2l6n6kdb8Kls/edit#)
>
> Also we need to make some progress on the arrangements for the upcoming
> Boston Forum at the openstack summit, for example drumming up some more
> moderators. I'm going!
>
> Cheers
>
> Chris
>
> Minutes:
> Meeting ended Tue Apr 11 15:27:03 2017 UTC. Information about MeetBot at
> http://wiki.debian.org/MeetBot . (v 0.1.4)
> 11:27 AM Minutes: http://eavesdrop.openstack.org/meetings/ops_meetup_team/2017/ops_meetup_team.2017-04-11-15.04.html
> 11:27 AM Minutes (text): http://eavesdrop.openstack.org/meetings/ops_meetup_team/2017/ops_meetup_team.2017-04-11-15.04.txt
> 11:27 AM Log: http://eavesdrop.openstack.org/meetings/ops_meetup_team/2017/ops_meetup_team.2017-04-11-15.04.log.html
>
>
> --
> Chris Morgan 
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Forum in Boston - make some noise!

2017-03-09 Thread Erik McCormick
Is there any possible way to push this deadline out a week? I ask only
because my brain hurts focusing on sessions for the midcycle, and things
will almost certainly come up in Milan that will spawn ideas for the forum.

I know everything is on a tight schedule, but just wanted to throw this out
there.

Cheers,
Erik


On Mar 9, 2017 9:51 PM, "Tom Fifield"  wrote:

> Hi all,
>
> The end of brainstorming for our first Forum in Boston is fast
> approaching, and we need your help.
>
> We don't have even close to enough topic suggestions on our etherpad to
> make the Forum a success :) We're also missing your +1 vote to show which
> sessions you'd like to see.
>
>
>
>
> ==> Please take 3 minutes to list any idea you want to discuss together as
> developers and users, and +1/-1 sessions to show your interest:
>
>
> https://etherpad.openstack.org/p/BOS-UC-brainstorming
>
>
>
> We will close the etherpad on Tuesday, so time is tight!
>
>
> Once you've done that, you might like to share your ideas on the mailing
> list ...
>
>
>
>
> PS - If you want to learn more about the forum, check out:
> * http://superuser.openstack.org/articles/openstack-forum/
> * https://wiki.openstack.org/wiki/Forum
>
>
>
>
> Regards,
>
>
> Tom
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [baremetal][ironic] Ops midcycle Bare Metal session

2017-03-07 Thread Erik McCormick
Hello Ops,

I have set up a skeleton etherpad for the Bare Metal session at the
midcycle meetup in Milan. Please take a few minutes and add any topics
you wish to discuss, or expand on anything that is already there. We
welcome submissions from anyone, even if you're not planning to attend
the event.

Don't worry about making it pretty. You can just throw stuff in there
and I'll organize it before the session.

https://etherpad.openstack.org/p/MIL-ops-baremetal

Thanks in advance for all your help. See you in Milan!

Cheers,
Erik

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Operators Mid-Cycle Meetup Session Planning

2017-02-14 Thread Erik McCormick
Hello everyone,

Session planning for Milan is in full swing. We have now frozen new
session submissions, but we could still use some input on which
sessions are most important to you. Please take a few minutes and look
over the submissions and add +1s to the ones that you would like to
see included.

https://etherpad.openstack.org/p/MIL-ops-meetup

Additionally, we could use a couple more moderators. If you are
willing to moderate a session, please add your name to the list below
the sessions.

Thanks,
Erik


On Tue, Feb 7, 2017 at 11:20 AM, Erik McCormick
<emccorm...@cirrusseven.com> wrote:
> Hello everyone,
>
> If you would like to have input into what sessions we have during the
> Milan meetup in March, now is the time. Please head over to
> https://etherpad.openstack.org/p/MIL-ops-meetup and either add new
> session suggestions, or +1 those already on the list.
>
> We will be closing submissions on Monday, February 13 so that we can
> begin working on the schedule.
>
> Additionally, if you are willing to moderate a session, please add
> your name to the list of volunteers below the schedule.
>
> See you in Milan!
>
> Cheers,
> Erik

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Operators Mid-Cycle Meetup Session Planning

2017-02-07 Thread Erik McCormick
Hello everyone,

If you would like to have input into what sessions we have during the
Milan meetup in March, now is the time. Please head over to
https://etherpad.openstack.org/p/MIL-ops-meetup and either add new
session suggestions, or +1 those already on the list.

We will be closing submissions on Monday, February 13 so that we can
begin working on the schedule.

Additionally, if you are willing to moderate a session, please add
your name to the list of volunteers below the schedule.

See you in Milan!

Cheers,
Erik

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] CentOS 7.3 libvirt appears to be broken for virtio-scsi and ceph with cephx auth

2016-12-20 Thread Erik McCormick
On Tue, Dec 20, 2016 at 11:57 AM, Mike Lowe  wrote:
> I got a rather nasty surprise upgrading from CentOS 7.2 to 7.3.  As far as I 
> can tell the libvirt 2.0.0 that ships with 7.3 doesn’t behave the same way as 
> the 1.2.17 that ships with 7.2 when using ceph with cephx auth during volume 
> attachment using virtio-scsi.  It looks like it fails to add the cephx 
> secret.  The telltale signs are "No secret with id 'scsi0-0-0-1-secret0’” in 
> the /var/log/libvirt/qemu instance logs.  I’ve filed a bug here 
> https://bugzilla.redhat.com/show_bug.cgi?id=1406442 and there is a libvirt 
> mailing list  thread about a fix for libvirt 2.5.0 for what looks like this 
> same problem 
> https://www.redhat.com/archives/libvir-list/2016-October/msg00396.html  I’m 
> out of ideas for workarounds having had kind of a disastrous attempt at 
> downgrading to libvirt 1.2.17, so if anybody has any suggestions I’m all ears.
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

What's extra annoying is that I can't find RPM's for the old libvirt
version (1.2.17-13) that I'd been using previously. I have KVM that I
build from SRPM a while ago, but not libvirt. I can find the libvirt
SRPM, but my ceph libs are still 0.80.11 and the librados2-devel and
librbd1-devel packages that the libvirt SRPM depends on are no longer
available. I get not supporting older releases, but why delete it
outright?

From another post, it also sounds like there's an issue with this
release when running with cpu_mode=host-model, which I am, so that'll
be another mess. This is what I get for not running a local repo I
guess.

If anyone has copies of libvirt-1.2.17-13 RPMs for EL7 lying about,
please do post a link. I would be very grateful. I'm trying to roll
out a few new computes and it sounds like I'll be running into this
without them.

-Erik

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] operators meetups team meetings

2016-12-06 Thread Erik McCormick
+1 for later from me as well. I could also probably swing 23:00 if
we're trying to accommodate Shintaro and he can be online at 7am Tokyo
time. 12:30 or 13:00 would be OK also but that would make things
really rough for any west coast people.

On Tue, Dec 6, 2016 at 10:33 AM, Matt Jarvis  wrote:
> +1 from me for 1500 UTC, or other time later or earlier :)
>
> On Tue, Dec 6, 2016 at 3:07 PM, Chris Morgan  wrote:
>>
>> Today's meeting was held at 14:00 UTC, minutes here
>>
>> http://eavesdrop.openstack.org/meetings/ops_meetups_team/2016/ops_meetups_team.2016-12-06-14.25.html
>>
>> Actually though the meeting started late and was thinly attended. I would
>> like to ask attendees and prospective attendees whether moving this regular
>> meeting might help attendance. There are several in favor of pushing it to
>> 15:00 UTC, although this places some degree of hardship on at least one
>> other attendee.
>>
>> Barring major developments, the next meeting will proceed the same, 14:00
>> UTC next Tuesday, December 13th.
>>
>> --
>> Chris Morgan 
>>
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] 2017 Openstack Operators Mid-Cycle Meetups - venue selection etherpads

2016-11-01 Thread Erik McCormick
On Oct 31, 2016 1:13 PM, "Jesse Keating" <omg...@us.ibm.com> wrote:
>
> This has been my experience as well. I purposefully attend the project
fishbowl sessions and often the project mid cycles to be able to provide
real time operator feedback as future plans are discussed and
retrospectives from the previous release are held.

Right, but the point is those fishbowl sessions won't be held at the PTG as I
understand it. It's more like Friday at the Design Summit. Those fishbowl-type
sessions are meant to occur at the Forum.

>
> Given a choice between attending either the Ops mid cycle or the PTG, I
see far more value in the PTG, which will be held at a very similar time.
> -jlk
>
>
Maybe you should ask over in the dev list and see if that would be at all
appreciated or useful. I'd be interested in their perspective.

-Erik
>>
>> - Original message -
>> From: Matthias Runge <mru...@redhat.com>
>> To: openstack-operators@lists.openstack.org
>> Cc:
>> Subject: Re: [Openstack-operators] 2017 Openstack Operators Mid-Cycle
Meetups - venue selection etherpads
>> Date: Thu, Oct 27, 2016 5:33 AM
>>
>> On 27/10/16 11:08, Erik McCormick wrote:
>> > The PTG is for devs to get together and get real work done. We would be
>> > a distraction from that goal. They will also be attending the forum
>> > which will run with the summits and will be able to spend more time in
>> > groups with ops for requirements gathering and such.
>> >
>> > -Erik
>> >
>> >
>> > On Oct 27, 2016 11:05 AM, "Jesse Keating" <omg...@us.ibm.com
>> > <mailto:omg...@us.ibm.com>> wrote:
>> >
>> > I may have missed something, but why aren't we meeting at the
>> > Project Technical Gathering, which is at the end of February in
Atlanta?
>>
>> From my experience with OpenStack, feedback from operators have been
>> invaluable.
>>
>> You can easily run things in devstack (or all-in-one deployments), but
>> this is completely different from running in scale. Operators do tell
>> you, were the pain-points are. Having a dedicated gathering without
>> involving actual operators/users is not that useful IMO.
>>
>> Matthias
>> --
>> Matthias Runge <mru...@redhat.com>
>>
>> Red Hat GmbH, http://www.de.redhat.com/, Registered seat: Grasbrunn,
>> Commercial register: Amtsgericht Muenchen, HRB 153243,
>> Managing Directors: Charles Cachera, Michael Cunningham,
>> Michael O'Neill, Eric Shander
>>
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>
>
>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] 2017 Openstack Operators Mid-Cycle Meetups - venue selection etherpads

2016-10-27 Thread Erik McCormick
The PTG is for devs to get together and get real work done. We would be a
distraction from that goal. They will also be attending the forum which
will run with the summits and will be able to spend more time in groups
with ops for requirements gathering and such.

-Erik

On Oct 27, 2016 11:05 AM, "Jesse Keating"  wrote:

> I may have missed something, but why aren't we meeting at the Project
> Technical Gathering, which is at the end of February in Atlanta?
>
> I understand that this mid-cycle is targeting EU, which is totally
> awesome; and if that happens, will there also be operator focused sessions
> and such at the PTG?
> -jlk
>
>
>
> - Original message -
> From: Tom Fifield 
> To: Chris Morgan , OpenStack Operators <
> OpenStack-operators@lists.openstack.org>
> Cc:
> Subject: Re: [Openstack-operators] 2017 Openstack Operators Mid-Cycle
> Meetups - venue selection etherpads
> Date: Tue, Oct 25, 2016 6:47 PM
>
> Reminder!
>
> If you're interested in hosting the Feb/March Ops Meetup, get your
> proposal in by November 7th! Feel free to ask for help :)
>
>
> Regards,
>
>
>
> Tom
>
> On 20 October 2016 at 11:51 PM, Chris Morgan wrote:
> > Hello Everyone,
> >
> > Here are etherpads for the collection of venue hosting proposals and
> > assessment:
> >
> > https://etherpad.openstack.org/p/ops-meetup-venue-discuss-spring-2017
> > https://etherpad.openstack.org/p/ops-meetup-venue-discuss-aug-2017
> >
> > For your reference, the previous etherpad (for august 2016 was
> > eventually was decided to be in NYC) was :
> >
> > https://etherpad.openstack.org/p/ops-meetup-venue-discuss
> >
> > --
> > Chris Morgan  >>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [gnocchi] monitoring storage use case inquiry

2016-08-01 Thread Erik McCormick
On Jul 31, 2016 8:32 PM, "Sam Morrison"  wrote:
>
> Hi Gordon,
>
> We are using the influxDB backend and we have our retention policies set
to:
>
> Every minute for an hour
> Every 10 minutes for a day
> Every hour for a year
>
> Currently we hover around 8,000 instances.
>
> We understand the influxDB driver was taken out of gnocchi, bit annoyed
as it wasn’t mentioned on the operators list.
> We currently have just got it working with version 2.1 of gnocchi and are
keen to see if it can be added back into gnocchi.
>
> We already use influxDB for other non openstack stuff and so would rather
use that as opposed to adding yet another system.
>
> Would be interested to know what other operators use gnocchi and what
backend they use.
>
I haven't implemented Gnocchi yet, but it's on my to do list. We are using
Influxdb for our collectd data and had planned to use that for Gnocchi as
well. +1 for putting the driver back in as I didn't know it was gone :(.

-Erik
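
For anyone sketching a similar scheme directly in InfluxDB, retention tiers
like the ones Sam describes map onto retention policies plus continuous
queries. A rough illustration only -- the database, measurement and field
names below are made up, not anything Gnocchi creates:

    influx -execute 'CREATE DATABASE metrics'
    influx -database metrics -execute 'CREATE RETENTION POLICY "one_hour" ON "metrics" DURATION 1h REPLICATION 1 DEFAULT'
    influx -database metrics -execute 'CREATE RETENTION POLICY "one_day" ON "metrics" DURATION 1d REPLICATION 1'
    influx -database metrics -execute 'CREATE RETENTION POLICY "one_year" ON "metrics" DURATION 52w REPLICATION 1'
    # downsample the raw per-minute points into the coarser tiers
    influx -database metrics -execute 'CREATE CONTINUOUS QUERY "cq_10m" ON "metrics" BEGIN SELECT mean("value") AS "value" INTO "metrics"."one_day"."cpu_util" FROM "metrics"."one_hour"."cpu_util" GROUP BY time(10m), * END'
    influx -database metrics -execute 'CREATE CONTINUOUS QUERY "cq_1h" ON "metrics" BEGIN SELECT mean("value") AS "value" INTO "metrics"."one_year"."cpu_util" FROM "metrics"."one_hour"."cpu_util" GROUP BY time(1h), * END'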

> Cheers,
> Sam
>
>
> > On 29 Jul 2016, at 11:30 PM, gordon chung  wrote:
> >
> > hi folks,
> >
> > the Gnocchi dev team is working on pushing out a new serialization
> > format to improve disk footprint and while we're at it, we're looking at
> > other changes as well. to get a bit more insight to help decide what
> > changes we make, one useful metric would be to know what your
> > requirements are for storing data. as you may know Gnocchi does not
> > store raw datapoints but aggregates data to a specified granularity (eg.
> > 5s, 30s, 1min, 1 day, etc...). what we're after is what's the longest
> > timeseries you're capturing or hoping to capture? a datapoint every
> > minute for a day/week/month/year? a datapoint every 10mins for a
> > week/month/year? something else?
> >
> > your feedback would be greatly appreciated.
> >
> > cheers,
> >
> > --
> > gord
> > ___
> > OpenStack-operators mailing list
> > OpenStack-operators@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Reaching VXLAN tenant networks from outside (without floating IPs)

2016-07-18 Thread Erik McCormick
I've recently gone through provisioning Midonet (open source version) with
the intent of tying their vxlan gateway in with my Cumulus switches. This
approach should be usable with pretty much any vxlan-capable switch. If
you're open to straying from the well travelled OVS/LB path, you may want
to consider checking it out

-Erik

On Jul 18, 2016 8:51 PM, "Gustavo Randich" 
wrote:

> Right Blair, we've considered provider vlans, but we wanted to leverage
> the low cost of private IPs (from a hardware switch perspective), taking
> into account that we'll have thousands of VMs not needing external access.
>
> On Sunday, 17 July 2016, Blair Bethwaite 
> wrote:
>
>> On 30 June 2016 at 05:17, Gustavo Randich 
>> wrote:
>> >
>> > - other?
>>
>> FWIW, the other approach that might be suitable (depending on your
>> project/tenant isolation requirements) is simply using a flat provider
>> network (or networks, i.e., VLAN per project) within your existing
>> managed private address space, then you have no requirement for a
>> Neutron router. This model seems a lot easier to visualise when
>> starting out with Neutron and can side-step a lot of integration
>> problems.
>>
>> --
>> Cheers,
>> ~Blairo
>>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-operators][osops] tools-contrib is open for business!

2015-11-19 Thread Erik McCormick
+1 for the "unless otherwise stated" bit. I seem to recall some
non-standard requirements from the likes of HP. Apache should be a good
default though.

-Erik
On Nov 19, 2015 11:31 PM, "Matt Fischer"  wrote:

> Is there a reason why we can't license the entire repo with Apache2 and if
> you want to contribute you agree to that? Otherwise it might become a bit
> of a nightmare.  Or maybe at least do "Apache2 unless otherwise stated"?
>
> On Thu, Nov 19, 2015 at 9:17 PM, Joe Topjian  wrote:
>
>> Thanks, JJ!
>>
>> It looks like David Wahlstrom submitted a script and there's a question
>> about license.
>>
>> https://review.openstack.org/#/c/247823/
>>
>> Though contributions to contrib do not have to follow a certain coding
>> style and can be very lax on error handling, etc., should they at least mention
>> a license? Thoughts?
>>
>>
>> On Wed, Nov 18, 2015 at 2:38 PM, JJ Asghar  wrote:
>>
>>> -BEGIN PGP SIGNED MESSAGE-
>>> Hash: SHA512
>>>
>>>
>>> Hey everyone,
>>>
>>> I just want to announce that tools-contrib[1] is now open for
>>> submissions. Please take a moment to read the README[2] to get
>>> yourself familiar with it. I'm hoping to see many scripts and tools
>>> start to trickle in.
>>>
>>> Remember, by committing to this repository, even a simple bash script
>>> you wrote, you're helping out your future Operators. This is for your
>>> future you, and our community, so treat em nice ;)!
>>>
>>> [1]: https://github.com/openstack/osops-tools-contrib
>>> [2]:
>>> https://github.com/openstack/osops-tools-contrib/blob/master/README.rst
>>>
>>> - --
>>> Best Regards,
>>> JJ Asghar
>>> c: 512.619.0722 t: @jjasghar irc: j^2
>>> -BEGIN PGP SIGNATURE-
>>> Version: GnuPG/MacGPG2 v2
>>> Comment: GPGTools - https://gpgtools.org
>>>
>>> iQIcBAEBCgAGBQJWTO+/AAoJEDZbxzMH0+jTRxQQAK2DJdCTnihR7YJhJAXgbdIn
>>> NZizqkK4lEhnfdis0XZJekofAib7NytuAtTuWUQOTLQaFv02UAnMqSyX5ofX42PZ
>>> mGaLtZ452k+EhdeJprO5254fka8VSaRvFOZUJg0K0QjZrj5qFwtG0T1yqVBBCQmI
>>> wdUkxBB/cL8M0Ve6LaQNS4vmx03ZC81FLEtVX2O62EV8FrP8sxuXc7XDTCRbLnhR
>>> rb2HJC7R9/AZtr2gjwr7id714QFEEAgCKca79l+vsaE3VRfy+KbHsKqY9vPrxPVn
>>> qqXLQOm8ZDgXedjxYraCDBbay/FQqVrsEt/0RiAKrtAIRbLm2ZkiR/XL6J3BtNzi
>>> 2sNt12m/VkrMv9zWUT/8oqiBb73eg3TbUipVeKmh4TD12KK16EYMSF+mH9T7DY2Z
>>> eP2AT6XEs+BDohP+I3L7WM5r/AKl9r40ulLEqRR7y+jcn5qwAOEb+UzUpna4wTt/
>>> mZD5UNNemoN5h2P4eMPpfnZnpNcy4Qe/qoohZdAov4Gvdm3tmbG9jIzUKF3Q9Av5
>>> Uqpe6gUcp3Qd2EaKYGR47B2f+QRLlTs9Sk5lLBJSyOxpA53KcK9125fS0YM6VMVQ
>>> wETlxAggnmt4diwSoJt8VSYrqXlieo7eHkjv/s4hSGIcYBqtkCPZnNPliJmvMmfh
>>> s/wsl6ICrB7oe55ghDbM
>>> =EWDz
>>> -END PGP SIGNATURE-
>>>
>>> ___
>>> OpenStack-operators mailing list
>>> OpenStack-operators@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>>
>>
>>
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] OPs Midcycle location discussion.

2015-11-18 Thread Erik McCormick
I'm still fishing for more specific details, but here is a snapshot of
how the Ceph Development Summit is handled.

http://tracker.ceph.com/projects/ceph/wiki/CDS_Jewel

It was previously done via Google Hangouts, but is now done using
Bluejeans. This is interesting especially since I believe Bluejeans is
an Openstack operator. I wonder if there's anyone from there on this
list that might be able to chime in with useful suggestions for us?
:).

Cheers,
Erik

On Tue, Nov 17, 2015 at 3:40 PM, Matt Jarvis
 wrote:
> I agree with all of the points that are being raised, but the inverse has
> been true for most of the European operators at every other midcycle. And
> the same presumably applies to the Asian operators. As OpenStack goes
> global, we need to find ways of bringing all those voices into the
> conversation. There's no doubt that this first meetup in Europe may not be
> as productive, well organised, integrated into the development workflow etc.
> etc as the US ones have been given that this is the first time many of the
> European operators will have got together, but you've got to start
> somewhere. What we really need is participation from the wider operators
> community from folks who know the format and can help structure things and
> contribute to the discussion, and if some of that needs to be done remotely
> then let's try and facilitate that. If it doesn't work then it doesn't work
> - at least we learnt something along the way.
>
>
> On 17 November 2015 at 20:16, JJ Asghar  wrote:
>>
>> -BEGIN PGP SIGNED MESSAGE-
>> Hash: SHA512
>>
>> On 11/17/15 1:34 PM, Matt Jarvis wrote:
>> > From my perspective we're happy to put in place anything that we
>> > can reasonably do, and that will increase participation. Bear in
>> > mind that we don't have massive amounts of money or people - the
>> > costs of the event as it stands is just about covered by the
>> > sponsors we have, and I'm putting most of the logistics together (
>> > with help from Tom Fifield and a few others ) as well as doing my
>> > day job.
>> >
>> > We've already had a very kind additional offer of infrastructure
>> > help ( in kit, bodies, connectivity etc ) from Canonical, if we
>> > want to put in place stuff to enable remote participation in terms
>> > of audio streaming, IRC etc. etc. If only we had some OpenStack
>> > public cloud then it would be trivial to spin up whatever servers
>> > we might need to support that  ;)
>> >
>> > It would probably be helpful to have some gauge on how many people
>> > are interested in remote participation before we do anything in
>> > terms of enabling that in a more extensive way than etherpads - not
>> > sure the best way to get that number, any ideas ?
>>
>>
>> I'd like to throw in this thought. I've been to each mid-cycle since
>> San Antonino. My experience with them has been extremely variable but
>> the chance to be in the _same_ room as the people I talk to daily is
>> invaluable. I think my boss calls it the "hallway track" is more
>> important then anything for me. Not to mention the ability to get a
>> beverage after the official meeting time and come to a conclusion.
>> (This has happened more often then anything for me.)
>>
>> It's great that we are trying to get people that can't travel involved
>> but the harsh reality of it is that no matter what we do they will
>> still miss out on some of the conversations.
>>
>>
>> - --
>> Best Regards,
>> JJ Asghar
>> c: 512.619.0722 t: @jjasghar irc: j^2
>> -BEGIN PGP SIGNATURE-
>> Version: GnuPG/MacGPG2 v2
>> Comment: GPGTools - https://gpgtools.org
>>
>> iQIcBAEBCgAGBQJWS4suAAoJEDZbxzMH0+jTviEP/3H7V8nWJP7SH1nl10b0romk
>> c/4kZOtZcF/MtMO0I+QijHI/GnndP0+MrvffdS3b3D63yeifB8oT709mx7BugWK3
>> alRp6hprxExtq6ZVcsgjgVJQn8dQYtjr/R8eZ2VYYwlmULL1Mite8NJmRVGcT1Pk
>> ENL3xIrkVq/M4ytrJ5yfmPSEzOo6S8w8EXinfgWREbEhCxXbFIKQPl27XdsXzzjF
>> S0qK5hvEqAdnwAuq133UDQZ3g+vpj24NGdSIP0z02Mhgf+FUgeHbchFMNk+5jLZ+
>> lzaUYriH4EHNb1xTvgnxaa+/L/z9gBWV6yoCikrL/HQ4lKxCDqcFTGljLUuNfrQK
>> ZsRI6GepjAJ0iZFjZJozFC9yHEKxMWnZeIYGVQj+dqEy/mzjWXKfuah3aMxnS+3w
>> /HHdJABKG57zwyKX+Iaoa0jZvaNzOqt4qnxVllYiudDz7vSWKzJEjsDYww2iRkBu
>> CVnlZF2WmI4y+2/wPwUlu79Ey/gkcZ21axi8C/lYlj/7lvhAm/xUNM57gm66Ekfm
>> ogVr5gkVs+wcHRkEALRaGQbZvh4SXzEywhJ++24ceErKi0ktzmIqPuDrCIPYq8cf
>> pDTut3uPDZIAkrQ+LcCIX4k9x9N4PK+sEHZPVm7zlrS4nPMYegTsHv9PmaItT6dT
>> 6hUExN44OK3QfZBdLqCm
>> =Sbkt
>> -END PGP SIGNATURE-
>>
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
>
>
> --
> Matt Jarvis
> Head of Cloud Computing
> DataCentred
> Office: (+44)0161 8703985
> Mobile: (+44)07983 725372
> Email: matt.jar...@datacentred.co.uk
> Website: http://www.datacentred.co.uk
>
> DataCentred Limited registered in England and Wales no. 05611763
>
> ___
> 

Re: [Openstack-operators] OPs Midcycle location discussion.

2015-11-17 Thread Erik McCormick
On Tue, Nov 17, 2015 at 1:24 PM, Donald Talton  wrote:
> If only there were some kind of chat medium where people could listen in to
> the live meetup and follow along in chat…
>
>
>
> Seriously though, how hard is it to find/designate someone as an IRC
> translator for the main points of discussion? Probably not ideal, but better
> than nothing. There has to be some solution we can find to increase
> participation from outside Europe.
>
>
>
> Saying that we can add to an etherpad before the meetup starts is
> exclusionary to everyone who would attend if not for distance. I know
> conversely this applies to EU participants when we hold this in NA. Really,
> as technologists, can we not sort this out? It’s a simple problem,
> considering the scope of everything else we do.
>

We're deciding not to innovate a solution to allow people to
participate in a group that is attempting to provide innovative ideas.
How ironic. I actually don't think it would require much innovation.
The Ceph guys run their entire design summit remotely, and I'm certain
that it is way beyond one or two people. If anyone has participated in
that process, pointers would be welcome. If not, we can certainly post
to their list and ask for suggestions. I imagine Sage might pipe up
with some interesting thoughts at the very least.

>
>
>
> From: tadow...@gmail.com [mailto:tadow...@gmail.com] On Behalf Of Matt
> Fischer
> Sent: Tuesday, November 17, 2015 10:48 AM
> To: Donald Talton
> Cc: Joe Topjian; Jonathan Proulx; openstack-operators@lists.openstack.org
> Subject: Re: [Openstack-operators] OPs Midcycle location discussion.
>
>
>
> On Mon, Nov 16, 2015 at 1:00 PM, Donald Talton 
> wrote:
>
> I’ll +1 option 1 too, if we can get remote participation that would suffice.
>
>
>
>
>
> Having been to several of these I think that we can call remote
> participation a stretch goal at best, and if I'm being honest, I just don't
> think it's going to be very feasible.
>
>
>
> It's often times difficult enough to follow the conversation in a room with
> 100 people, some speaking without a mic; not sure how a remote person can be
> expected to jump into that type of discussion. Perhaps different for
> smaller, focused WGs sitting around a conference table would work for some
> remote participation? I think for main sessions the best you can hope for is
> someone adding to the etherpad before the discussion (this is my plan for
> the UK midcycle). Not physically being there also puts you at a timezone
> disadvantage and for me it's sometimes difficult to connect from my "real
> job".
>
>
>
> We do these in-person because there's a benefit to being in-person and I
> don't want to detract from that with conference lines etc.
>
>
> This email and any files transmitted with it are confidential, proprietary
> and intended solely for the individual or entity to whom they are addressed.
> If you have received this email in error please delete it immediately.
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>

-Erik

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] OPs Midcycle location discussion.

2015-11-17 Thread Erik McCormick
On Tue, Nov 17, 2015 at 2:07 PM, Jesse Keating <j...@bluebox.net> wrote:
> Lets calm down the negative positioning here.
>
Sorry, I wasn't trying to be uppity. I was merely suggesting that we
not throw in the towel without giving it some thought and at least
come up with some sort of pilot program to try out. We certainly
should label any attempt at remote participation as experimental.

> Matt offered his experience in trying what exists today. Others second it
> (including me). That's not putting a stake in the ground and claiming "THOU
> SHALT NOT PERMIT REMOTE PARTICIPATION".  It's offering an opinion, which is
> what the point of these threads are. Please have some respect for the
> opinion.
>
Opinions are what it's all about, but I try to avoid saying "this
won't work" when there are examples in the wild of such things working
that we could potentially draw on.

> Nobody is saying we can't innovate. In fact, it was suggested to make it a
> stretch goal. So... stretch. Try something. If it works, great, if not, be
> prepared for it to not work, and don't derail the in-person activities to
> try and make a trial work.
>
Nothing we try should cause disruption to the in-person process. I
think we can all agree on that. A trial is exactly what it should be.
I will put a message out on the Ceph list and find out what their
experiences are with it. If we try and can't solve it this time
around, then we have something to strive for next time. I think we can
all agree on a goal of extending participation as best we can without
creating an impediment to getting real work done.

>
> - jlk
>
> On Tue, Nov 17, 2015 at 10:45 AM, Erik McCormick
> <emccorm...@cirrusseven.com> wrote:
>>
>> On Tue, Nov 17, 2015 at 1:24 PM, Donald Talton <donaldtal...@fico.com>
>> wrote:
>> > If only there we some kind of chat medium where people could listen in
>> > to
>> > the live meetup and follow along in chat…
>> >
>> >
>> >
>> > Seriously though, how hard is it to find/designate someone as an IRC
>> > translator for the main points of discussion? Probably not ideal, but
>> > better
>> > than nothing. There has to be some solution we can find to increase
>> > partipation from outside Europe.
>> >
>> >
>> >
>> > Saying that we can add to an etherpad before the meetup starts is
>> > exclusionary to everyone who would attend if not for distance. I know
>> > conversely this applies to EU participants when we hold this in NA.
>> > Really,
>> > as technologists, can we not sort this out? It’s a simple problem,
>> > considering the scope of everything else we do.
>> >
>>
>> We're deciding not to innovate a solution to allow people to
>> participate in a group that is attempting to provide innovative ideas.
>> How ironic. I actually don't think it would require much innovation.
>> The Ceph guys run their entire design summit remotely, and I'm certain
>> that it is way beyond one or two people. If anyone has participated in
>> that process, pointers would be welcome. If not, we can certainly post
>> to their list and ask for suggestions. I imagine Sage might pipe up
>> with some interesting thoughts at the very least.
>>
>> >
>> >
>> >
>> > From: tadow...@gmail.com [mailto:tadow...@gmail.com] On Behalf Of Matt
>> > Fischer
>> > Sent: Tuesday, November 17, 2015 10:48 AM
>> > To: Donald Talton
>> > Cc: Joe Topjian; Jonathan Proulx;
>> > openstack-operators@lists.openstack.org
>> > Subject: Re: [Openstack-operators] OPs Midcycle location discussion.
>> >
>> >
>> >
>> > On Mon, Nov 16, 2015 at 1:00 PM, Donald Talton <donaldtal...@fico.com>
>> > wrote:
>> >
>> > I’ll +1 option 1 too, if we can get remote participation that would
>> > suffice.
>> >
>> >
>> >
>> >
>> >
>> > Having been to several of these I think that we can call remote
>> > participation a stretch goal at best, and if I'm being honest, I just
>> > don't
>> > think it's going to be very feasible.
>> >
>> >
>> >
>> > It's often times difficult enough to follow the conversation in a room
>> > with
>> > 100 people, some speaking without a mic; not sure how a remote person
>> > can be
>> > expected to jump into that type of discussion. Perhaps different for
>> > smaller, focused WGs sitting around a conference table would work for
>> > some
>> > remote participation? I think for main sessio

Re: [Openstack-operators] OPs Midcycle location discussion.

2015-11-16 Thread Erik McCormick
I thought we were working toward a regional approach rather than
having an "official" single meetup. Are you proposing to scrap the
North America meetup entirely? What does official vs. unofficial
entail?

-Erik

On Mon, Nov 16, 2015 at 10:50 AM, Jonathan Proulx  wrote:
> Hi All,
>
> 1st User Committee IRC meeting will be today at 19:00UTC on
> #openstack-meeting, we haven't exactly settled on an agenda yet but I
> hope to raise this issue the...
>
> It has been suggested that we make the February 15-16 European Ops
> Meetup in Manchester UK [1] the 'official' OPs Midcycle.  Previously
> all mid cycles have been US based.
>
> Personally I like the idea of broadening our geographic reach rather
> than staying concentrated in North America. I particularly like it
> being 'opposite' the summit location.
>
> This would likely trade off some depth of participation as fewer
> of the same people would be able to travel to all midcycles in person.
>
> Discuss...(also come by  #openstack-meeting at 19:00 UTC if you think
> this needs real time discussion)
>
> -Jon
>
>
> --
>
> 1. 
> http://www.eventbrite.com/e/european-openstack-operators-meetup-tickets-19405855436?aff=es2
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] CentOS 7 KVM and QEMU 2.+

2015-11-12 Thread Erik McCormick
I've been building these and running them on CentOS for a while,
mainly to get RBD support. They work fine.

http://ftp.redhat.com/pub/redhat/linux/enterprise/7Server/en/RHEV/SRPMS/


On Thu, Nov 12, 2015 at 12:18 PM, Arne Wiebalck  wrote:
> Hi,
>
> What about the CentOS Virt SIG’s repo at
>
> http://mirror.centos.org/centos-7/7/virt/x86_64/kvm-common/
>
> (and the testing repos at:
> http://buildlogs.centos.org/centos/7/virt/x86_64/kvm-common/ )?
>
> These contain newer versions of the qemu-* packages.
>
> Cheers,
>  Arne
>
> —
> Arne Wiebalck
> CERN IT
>
>
>
>> On 12 Nov 2015, at 17:54, Leslie-Alexandre DENIS  wrote:
>>
>> Hello guys,
>>
>> I'm struggling to find an up-to-date qemu(-kvm) version for CentOS 7 with the
>> official repositories and additional EPEL.
>>
>> Currently the only package named qemu-kvm in these repositories is 
>> *qemu-kvm-1.5.3-86.el7_1.8.x86_64*, which is a bit outdated.
>>
>> As I understand it, QEMU merged the forked qemu-kvm into the base code in
>> 1.3, and the kernel ships with the KVM module. Theoretically we can just
>> install qemu 2.+ and load KVM in order to use nova-compute with KVM
>> acceleration, right?
>>
>> The problem is that the openstack-nova{-compute} packages have a
>> dependency on qemu-kvm. For example Fedora ships qemu-kvm as a
>> subpackage of qemu and it appears to be the same in fact, not the forked 
>> project [1].
>>
>>
>>
>> In a word, how do you manage to get QEMU v2.+ with the latest libvirt on
>> your CentOS compute nodes?
>> Is anybody using the qemu packages from oVirt? [2]
>>
>>
>> Thanks,
>> See you
>>
>>
>> ---
>>
>> [1] https://apps.fedoraproject.org/packages/qemu-kvm
>> [2] http://resources.ovirt.org/pub/ovirt-3.5/rpm/el7Server/x86_64/
>>
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [stable][all] Keeping Juno "alive" for longer.

2015-11-06 Thread Erik McCormick
On Fri, Nov 6, 2015 at 12:28 PM, Mark Baker  wrote:
> Worth mentioning that OpenStack releases that come out at the same time as
> Ubuntu LTS releases (12.04 + Essex, 14.04 + Icehouse, 16.04 + Mitaka) are
> supported for 5 years by Canonical so are already kind of an LTS. Support in
> this context means patches, updates and commercial support (for a fee).
> For paying customers 3 years of patches, updates and commercial support for
> April releases, (Kilo, O, Q etc..) is also available.
>

Does that mean that you are actually backporting and gate testing
patches downstream that aren't being done upstream? I somehow doubt
it, but if so, then it would be great if you could lead some sort of
initiative to push those patches back upstream.


-Erik

>
>
> Best Regards
>
>
> Mark Baker
>
> On Fri, Nov 6, 2015 at 5:03 PM, James King  wrote:
>>
>> +1 for some sort of LTS release system.
>>
>> Telcos and risk-averse organizations working with sensitive data might not
>> be able to upgrade nearly as fast as the releases keep coming out. From the
>> summit in Japan it sounds like companies running some fairly critical public
>> infrastructure on Openstack aren’t going to be upgrading to Kilo any time
>> soon.
>>
>> Public clouds might even benefit from this. I know we (Dreamcompute) are
>> working towards tracking the upstream releases closer… but it’s not feasible
>> for everyone.
>>
>> I’m not sure whether the resources exist to do this but it’d be a nice to
>> have, imho.
>>
>> > On Nov 6, 2015, at 11:47 AM, Donald Talton 
>> > wrote:
>> >
>> > I like the idea of LTS releases.
>> >
>> > Speaking to my own deployments, there are many new features we are not
>> > interested in, and wouldn't be, until we can get organizational (cultural)
>> > change in place, or see stability and scalability.
>> >
>> > We can't rely on, or expect, that orgs will move to the CI/CD model for
>> > infra, when they aren't even ready to do that for their own apps. It's 
>> > still
>> > a new "paradigm" for many of us. CI/CD requires a considerable engineering
>> > effort, and given that the decision to "switch" to OpenStack is often 
>> > driven
>> > by cost-savings over enterprise virtualization, adding those costs back in
>> > via engineering salaries doesn't make fiscal sense.
>> >
>> > My big argument is that if Icehouse/Juno works and is stable, and I
>> > don't need newer features from subsequent releases, why would I expend the
>> > effort until such a time that I do want those features? Thankfully there 
>> > are
>> > vendors that understand this. Keeping up with the release cycle just for 
>> > the
>> > sake of keeping up with the release cycle is exhausting.
>> >
>> > -Original Message-
>> > From: Tony Breeds [mailto:t...@bakeyournoodle.com]
>> > Sent: Thursday, November 05, 2015 11:15 PM
>> > To: OpenStack Development Mailing List
>> > Cc: openstack-operators@lists.openstack.org
>> > Subject: [Openstack-operators] [stable][all] Keeping Juno "alive" for
>> > longer.
>> >
>> > Hello all,
>> >
>> > I'll start by acknowledging that this is a big and complex issue and I
>> > do not claim to be across all the view points, nor do I claim to be
>> > particularly persuasive ;P
>> >
>> > Having stated that, I'd like to seek constructive feedback on the idea
>> > of keeping Juno around for a little longer.  During the summit I spoke to a
>> > number of operators, vendors and developers on this topic.  There was some
>> > support and some "That's crazy pants!" responses.  I clearly didn't make it
>> > around to everyone, hence this email.
>> >
>> > Acknowledging my affiliation/bias:  I work for Rackspace in the private
>> > cloud team.  We support a number of customers currently running Juno that
>> > are, for a variety of reasons, challenged by the Kilo upgrade.
>> >
>> > Here is a summary of the main points that have come up in my
>> > conversations, both for and against.
>> >
>> > Keep Juno:
>> > * According to the current user survey[1] Icehouse still has the
>> >   biggest install base in production clouds.  Juno is second, which
>> > makes
>> >   sense. If we EOL Juno this month that means ~75% of production clouds
>> >   will be running an EOL'd release.  Clearly many of these operators
>> > have
>> >   support contracts from their vendor, so those operators won't be left
>> >   completely adrift, but I believe it's the vendors that benefit from
>> > keeping
>> >   Juno around. By working together *in the community* we'll see the best
>> >   results.
>> >
>> > * We only recently EOL'd Icehouse[2].  Sure it was well communicated,
>> > but we
>> >   still have a huge Icehouse/Juno install base.
>> >
>> > For me this is pretty compelling but for balance 
>> >
>> > Keep the current plan and EOL Juno Real Soon Now:
>> > * There is also no ignoring the elephant in the room that with HP
>> > stepping
>> >   back from public cloud there are questions about 

Re: [Openstack-operators] Informal Ops Meetup?

2015-10-29 Thread Erik McCormick
Which table are you all at?
On Oct 29, 2015 11:53 PM, "Belmiro Moreira" <
moreira.belmiro.email.li...@gmail.com> wrote:

> +1
>
> Belmiro
>
>
>
>
> On Thursday, 29 October 2015, Kris G. Lindgren 
> wrote:
>
>> We seem to have enough interest… so meeting time will be at 10am in the
>> Prince room (if we get an actual room I will send an update).
>>
>> Does anyone have any ideas about what they want to talk about?  I am
>> pretty much open to anything.  I started:
>> https://etherpad.openstack.org/p/TYO-informal-ops-meetup  for tracking
>> of some ideas/time/meeting place info.
>>
>> ___
>> Kris Lindgren
>> Senior Linux Systems Engineer
>> GoDaddy
>>
>> From: Sam Morrison 
>> Date: Thursday, October 29, 2015 at 6:14 PM
>> To: "openstack-operators@lists.openstack.org" <
>> openstack-operators@lists.openstack.org>
>> Subject: Re: [Openstack-operators] Informal Ops Meetup?
>>
>> I’ll be there, talked to Tom too and he said there may be a room we can
>> use else there is plenty of space around the dev lounge to use.
>>
>> See you tomorrow.
>>
>> Sam
>>
>>
>> On 29 Oct 2015, at 6:02 PM, Xav Paice  wrote:
>>
>> Suits me :)
>>
>> On 29 October 2015 at 16:39, Kris G. Lindgren 
>> wrote:
>>
>>> Hello all,
>>>
>>> I am not sure if you guys have looked at the schedule for Friday… but
>>> its all working groups.  I was talking with a few other operators and the
>>> idea came up around doing an informal ops meetup tomorrow.  So I wanted to
>>> float this idea by the mailing list and see if anyone was interested in
>>> trying to do an informal ops meet up tomorrow.
>>>
>>> ___
>>> Kris Lindgren
>>> Senior Linux Systems Engineer
>>> GoDaddy
>>>
>>> ___
>>> OpenStack-operators mailing list
>>> OpenStack-operators@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>>
>>>
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>>
>>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [neutron] Any users of Neutron's VPN advanced service?

2015-08-05 Thread Erik McCormick
I attempted to run it in Juno a while back and had very little
success. I would love to be able to use it though, and will give it
another shot once upgraded to Kilo. My issue was that several of the
options coded into it for firing up a connection were specific to
Freeswan which was deprecated, at least in CentOS 7, in favor of
Libreswan. Even after hacking in the changes, it still failed to start
due to some locking or permissions issue that I could never resolve.

Given that we run isolated tenant networks with overlapping IP space
for a number of enterprise customers, having a working self-service
VPN would be great to have, and I'm looking forward to some future
success with it.

-Erik
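
For anyone retrying this on Kilo, the switch to Libreswan is mostly a
device-driver setting in the VPN agent config. A minimal sketch, assuming the
Libreswan driver shipped in your neutron-vpnaas release -- the exact class
path below is from memory, so check that it exists in your tree before relying
on it:

    # /etc/neutron/vpn_agent.ini
    [vpnagent]
    # assumption: LibreSwanDriver is available in this neutron-vpnaas release
    vpn_device_driver = neutron_vpnaas.services.vpn.device_drivers.libreswan_ipsec.LibreSwanDriver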

On Wed, Aug 5, 2015 at 3:56 PM, Kyle Mestery mest...@mestery.com wrote:
 Operators:

 We (myself, Paul and Doug) are looking to better understand who might be
 using Neutron's VPNaaS code. We're looking for what version you're using,
 how long you're using it, and if you plan to continue deploying it with
 future upgrades. Any information operators can provide here would be
 fantastic!

 Thank you!
 Kyle

 ___
 OpenStack-operators mailing list
 OpenStack-operators@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Venom vulnerability

2015-06-02 Thread Erik McCormick
On Tue, Jun 2, 2015 at 5:34 AM, Tim Bell tim.b...@cern.ch wrote:

  I had understood that CentOS 7.1 qemu-kvm has RBD support built-in. It
 was not there on 7.0 but http://tracker.ceph.com/issues/10480 implies it
 is in 7.1.



 You could check on the centos mailing lists to be sure.



 Tim


It's about time! Thanks for the pointer Tim.

Cynthia, If for some reason it's not in the Centos ones yet, I've been
using the RHEV SRPMs and building the packages. You don't have to mess with
the spec or anything. Just run them through rpmbuild and push them out.

http://ftp.redhat.com/pub/redhat/linux/enterprise/7Server/en/RHEV/SRPMS/

-Erik
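
For anyone who hasn't rebuilt from SRPM before, the whole exercise is roughly
the following (the exact package version is whatever is current in the RHEV
SRPMS directory above):

    # one-time build prerequisites on a CentOS 7 build host
    yum install -y rpm-build yum-utils
    # download the current qemu-kvm-rhev source RPM from the URL above, then:
    yum-builddep -y qemu-kvm-rhev-*.src.rpm
    rpmbuild --rebuild qemu-kvm-rhev-*.src.rpm
    # binary RPMs land under ~/rpmbuild/RPMS/x86_64/, ready to push to your repo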



 *From:* Cynthia Lopes [mailto:clsacrame...@gmail.com]
 *Sent:* 02 June 2015 10:57
 *To:* Sławek Kapłoński
 *Cc:* openstack-operators@lists.openstack.org
 *Subject:* Re: [Openstack-operators] Venom vulnerability



 Hi guys,



 I had to recompile qemu-kvm on CentOS7 to enable RBD and be able to use
 CEPH.

 Now, what is the best way to update for the Venom vulnerability?

 Has anyone already recompiled the patched sources and put them in a
 repository, or is the only way to get the new sources and recompile again?

 At http://vault.centos.org/ the sources don't seem to have been updated
 yet; where will I find them to recompile, if that is the way to go?



 Thanks a lot!



 Regards,

 Cynthia



 2015-05-14 23:45 GMT+02:00 Sławek Kapłoński sla...@kaplonski.pl:

 Hello,

 OK, thanks for the explanations :) Yep, I know the best thing is to restart the qemu
 processes, but this means I can now sleep a little bit more peacefully :)

 --
 Best regards / Pozdrawiam
 Sławek Kapłoński
 sla...@kaplonski.pl

 On Thu, May 14, 2015 at 05:38:56PM -0400, Favyen Bastani wrote:
  On 05/14/2015 05:23 PM, Sławek Kapłoński wrote:
   Hello,
  
    So if I understand you correctly, it is not so dangerous if I'm using
    libvirt with apparmor and this libvirt is adding apparmor rules for
   every qemu process, yes?
  
  
 
  You should certainly verify that apparmor rules are enabled for the qemu
  processes.
 
  AppArmor reduces the danger of the vulnerability. However, if you are
  assuming that virtual machines are untrusted, then you should also
  assume that an attacker can execute whatever operations are permitted by the
  AppArmor rules (mostly built from the abstraction, usually at
  /etc/apparmor.d/libvirt-qemu), so you should check that you have
  reasonable limits on those permissions. The best fix is to restart the
  processes by way of live migration or otherwise.
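
  A quick sanity-check sketch (profile names and paths vary by distribution):

    # look for per-VM libvirt-<uuid> profiles under "processes are in enforce mode"
    sudo aa-status

    # or inspect the confinement label of a running qemu process directly
    cat /proc/$(pgrep -o -f qemu)/attr/current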
 
  Best,
  Favyen



Re: [Openstack-operators] YVR Wednesday Ops: Work sessions

2015-05-18 Thread Erik McCormick
If you have the power to put it in Sched, that would be spiffy, I say.
On May 18, 2015 9:22 PM, Lauren Sell lau...@openstack.org wrote:

 Would you like us to make these updates in Sched? It's no problem.




 On May 18, 2015 9:11:35 PM Jonathan Proulx j...@jonproulx.com wrote:

  Hi All,

 If you look at the Sched for Wednesday there's a substantial pile of
 sessions just called Ops: Work session.

 They are all actually about pretty specific things and typically run three
 in parallel, so you have some choices to make, but it's not real easy
 to see what those are.

 I've lovingly hand-crafted a list that shows what they actually are,
 when and where they are, and etherpad links where available.

 Hopefully I didn't miss any!  Also, if you see something interesting,
 double-check the rooms, especially as I'm notoriously bad at
 transposing things :)

 Of course if you're not at the summit you can still preload those pads
 with your concerns!

 Wednesday May 20, 2015
 ---
 9:00am - 9:40am
 ---
 Puppet Team
* Room 217
* https://etherpad.openstack.org/p/YVR-ops-puppet

 ---
 9:00am - 10:30am
 ---
 HPC Working Group
* Room 218
*  https://etherpad.openstack.org/p/YVR-ops-hpc

 The Telco Working Group
* East Bld, 2/3
* https://etherpad.openstack.org/p/YVR-ops-telco
 ---
 9:50am - 10:30am
 ---
 Chef
* Room 217
* https://etherpad.openstack.org/p/YVR-ops-chef
 Making Metal Easy (Ironic)
* Room 218
* etherpad? anyone?
 ---
 11:00am - 11:40am
 ---
 Ansible
* Room 217
* https://etherpad.openstack.org/p/YVR-ops-ansible
 ---
 11:00am - 12:30pm
 ---
 Monitoring & Tools WG
* Room 216
* https://etherpad.openstack.org/p/YVR-ops-tools
 Ops Tags WG
* Room 218
* https://etherpad.openstack.org/p/YVR-ops-tags
 ---
 11:50am - 12:30pm
 ---
 Ceph
* Room 217
* https://etherpad.openstack.org/p/YVR-ops-ceph
 ---
 1:50pm - 2:30pm
 ---
 Tech Choices (eg is MongoDB OK?)
* Room 218
* https://etherpad.openstack.org/p/YVR-ops-tech-choices
 ---
 1:50pm - 3:20pm
 ---
 Large Deployments Team
* Room 216
* https://etherpad.openstack.org/p/YVR-ops-large-deployments
 Burning Issues
* Room 217
* https://etherpad.openstack.org/p/YVR-ops-burning-issues
 ---
 2:40pm - 3:20pm
 ---
 CMDB
* Room 218
* https://etherpad.openstack.org/p/YVR-ops-cmdb
 ---
 3:30pm - 4:10pm
 ---
 Ops Docs
* Room 217
* https://etherpad.openstack.org/p/YVR-ops-docs
 Data plane transitions
* Room 218
* https://etherpad.openstack.org/p/YVR-ops-data-plane-transitions
 ---
 4:30pm - 6:00pm
 ---
 Upgrades
* Room 216
* https://etherpad.openstack.org/p/YVR-ops-upgrades
 Logging
* Room 217
* https://etherpad.openstack.org/p/YVR-ops-logging
 Packaging
* Room 218
* https://etherpad.openstack.org/p/YVR-ops-packaging

 Hope that helps,
 -Jon



Re: [Openstack-operators] over commit ratios

2015-04-21 Thread Erik McCormick
We had this discussion at the Ops Mid-Cycle meetup. I think the general
consensus was 0.9 for memory. If you're running Ceph OSDs on the node,
you'll almost certainly want to reserve more than a gig for them and the OS.

For CPU there was a wide range of ideas, and it mainly depended on use case.
If you're running lots of CPU-intensive things like Hadoop, it's better to be
around 2 or even 1. If you've got a ton of web servers, you could easily go
with 16. If you've got a mixed-use cloud, some segregation with host
aggregates may be helpful so you can vary the number. If you can't do that,
though, and you're mixed, you should be able to go higher than 2. At least
5 should be OK unless you've packed a massive amount of memory into each node.
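
For reference, those knobs map onto nova.conf on the compute nodes roughly
like this (the values are illustrative, not recommendations):

  [DEFAULT]
  # raise reserved_host_memory_mb further if Ceph OSDs share the node
  cpu_allocation_ratio = 4.0
  ram_allocation_ratio = 0.9
  reserved_host_memory_mb = 4096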

-Erik

On Tue, Apr 21, 2015 at 3:59 PM, Caius Howcroft caius.howcr...@gmail.com
wrote:

 Just a general question: what kind of over commit ratios do people
 normally run in production with?

 We currently run 2 for cpu and 1 for memory (with some held back for
 OS/ceph)

 i.e.:
 default['bcpc']['nova']['ram_allocation_ratio'] = 1.0
 default['bcpc']['nova']['reserved_host_memory_mb'] = 1024 # often larger
 default['bcpc']['nova']['cpu_allocation_ratio'] = 2.0

 Caius

 --
 Caius Howcroft
 @caiushowcroft
 http://www.linkedin.com/in/caius



Re: [Openstack-operators] OpenStack services and ca certificate config entries

2015-03-25 Thread Erik McCormick
I'll start by saying I went the system bundle route also and have thus far
had no issues with it. I'll also say that I'm using RDO packages still and
not doing anything with venvs or pip installed stuff.

On Wed, Mar 25, 2015 at 6:33 PM, Michael Still mi...@stillhq.com wrote:

 Thanks for starting this thread Jesse. I agree that heat looks like a
 good model for other projects to model themselves on here.

 Can anyone think of a use case for having a per client / driver CA
 file? I can't, but perhaps I'm missing something.

 There could potentially be instances where one service would be running
certificates issued off of one internal CA and others off another, but
really I don't see the point of splitting them out when you can concatenate
the CA certificates together and feed them in as a bundle that covers
everything. This one section from Heat would cover everything, I would
think.

[clients]
ca_file = path
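
If you do go the bundle route, building it is just concatenation (the file
names below are made up for illustration):

  # combine the internal CAs into one bundle and point each service's CA option at it
  cat corp-root-ca.pem corp-intermediate-ca.pem > /etc/pki/tls/certs/internal-ca-bundle.pem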


 Michael

 On Thu, Mar 26, 2015 at 5:13 AM, Jesse Keating j...@bluebox.net wrote:
  We're facing a bit of a frustration. In some of our environments, we're
   using a self-signed certificate for our SSL termination (haproxy). We have
   our various services pointing at the haproxy for service cross-talk, such as
   nova to neutron or nova to glance or nova to cinder or neutron to nova or
   cinder to glance or all the things to keystone. When using a self-signed
   certificate, these services have trouble validating the cert when they
   attempt to talk to each other. This problem can be solved in a few ways,
   such as adding the CA to the system bundle (if your platform has such a
   thing), adding the CA to the bundle python-requests uses (because
   hilariously it doesn't always use the system bundle), or the more direct
   way of telling nova, neutron, et al. the path to the CA file.
 
   This last choice is the way we went forward; it's more explicit and didn't
   depend on knowing whether python-requests was using its own bundle or the
   operating system's bundle. To configure this, there are a few places that
  need to be touched.
 
  nova.conf:
  [keystone_authtoken]
  cafile = path
 
  [neutron]
  ca_certificates_file = path
 
  [cinder]
  ca_certificates_file = path
 
  (nothing for glance hilariously)
 
 
  neutron.conf
  [DEFAULT]
  nova_ca_certificates_file = path
 
  [keystone_authtoken]
  cafile = path
 
  glance-api.conf and glance-registry.conf
  [keystone_authtoken]
  cafile = path
 
  cinder.conf
  [DEFAULT]
  glance_ca_certificates_file = path
 
  [keystone_authtoken]
  cafile = path
 
  heat.conf
  [clients]
  ca_file = path
 
  [clients_whatever]
  ca_file = path
 
 
   As you can see, there are a lot of places where one would have to define
   the path, and the frustrating part is that the config name and section vary
   across the services. Does anybody think this is a good thing? Can anybody
   think of a good way forward to come to some sort of agreement on config
   names? Heat does seem like the winner here: it has a default that can be
   defined for all clients, and then each client could potentially point to a
   different path, but every config entry is named the same. Can we do that
   across all the other services?
 
   I chatted a bit on Twitter last night with some nova folks; they suggested
   starting a thread here on the ops list and potentially turning it into a
   hallway session or a real session at the Vancouver design summit (which
   operators are officially part of).
 
  - jlk
 
 



 --
 Rackspace Australia



-Erik


Re: [Openstack-operators] Help with glance issue after upgrade from Icehouse to Juno

2015-03-06 Thread Erik McCormick
That looks like a database connection error, not a keystone error. Double-check
your DB connection string and credentials, and see if you can connect
with the mysql client.
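
Something like this is usually enough to prove it one way or the other (the
user, host, and database names below are the usual defaults and may differ in
your config):

  # the connection string lives in the [database] section of glance-registry.conf
  grep -n '^connection' /etc/glance/glance-registry.conf

  # then try the same credentials by hand
  mysql -u glance -p -h localhost glance -e 'select 1;'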

On Fri, Mar 6, 2015 at 3:37 PM, Nathan Stratton nat...@robotics.net wrote:

 Sorry about that; I never thought to look in registry.log, but it points to the
  issue. However, my config has admin_password={password}; what could have changed
  in the upgrade that would now give me access denied?


 2015-03-06 14:39:19.623 2688 ERROR glance.registry.api.v1.images
 [73bbe61a-d10a-4526-a6a1-1d8a50a56d0e - - - - -] Unable to get images
 2015-03-06 14:39:19.623 2688 TRACE glance.registry.api.v1.images Traceback
 (most recent call last):
 2015-03-06 14:39:19.623 2688 TRACE glance.registry.api.v1.images   File
 /usr/lib/python2.7/site-packages/glance/registry/api/v1/images.py, line
 122, in _get_images
 2015-03-06 14:39:19.623 2688 TRACE glance.registry.api.v1.images
 **params)
 2015-03-06 14:39:19.623 2688 TRACE glance.registry.api.v1.images   File
 /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/api.py, line 564,
 in image_get_all
 2015-03-06 14:39:19.623 2688 TRACE glance.registry.api.v1.images
 visibility)
 2015-03-06 14:39:19.623 2688 TRACE glance.registry.api.v1.images   File
 /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/api.py, line 484,
 in _select_images_query
 2015-03-06 14:39:19.623 2688 TRACE glance.registry.api.v1.images
 session = get_session()
 2015-03-06 14:39:19.623 2688 TRACE glance.registry.api.v1.images   File
 /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/api.py, line 97, in
 get_session
 2015-03-06 14:39:19.623 2688 TRACE glance.registry.api.v1.images
 facade = _create_facade_lazily()
 2015-03-06 14:39:19.623 2688 TRACE glance.registry.api.v1.images   File
 /usr/lib/python2.7/site-packages/glance/db/sqlalchemy/api.py, line 82, in
 _create_facade_lazily
 2015-03-06 14:39:19.623 2688 TRACE glance.registry.api.v1.images
 _FACADE = session.EngineFacade.from_config(CONF)
 2015-03-06 14:39:19.623 2688 TRACE glance.registry.api.v1.images   File
 /usr/lib/python2.7/site-packages/oslo/db/sqlalchemy/session.py, line 816,
 in from_config
 2015-03-06 14:39:19.623 2688 TRACE glance.registry.api.v1.images
 retry_interval=conf.database.retry_interval)
 2015-03-06 14:39:19.623 2688 TRACE glance.registry.api.v1.images   File
 /usr/lib/python2.7/site-packages/oslo/db/sqlalchemy/session.py, line 732,
 in __init__
 2015-03-06 14:39:19.623 2688 TRACE glance.registry.api.v1.images
 **engine_kwargs)
 2015-03-06 14:39:19.623 2688 TRACE glance.registry.api.v1.images   File
 /usr/lib/python2.7/site-packages/oslo/db/sqlalchemy/session.py, line 409,
 in create_engine
 2015-03-06 14:39:19.623 2688 TRACE glance.registry.api.v1.images
 _test_connection(engine, max_retries, retry_interval)
 2015-03-06 14:39:19.623 2688 TRACE glance.registry.api.v1.images   File
 /usr/lib/python2.7/site-packages/oslo/db/sqlalchemy/session.py, line 549,
 in _test_connection
 2015-03-06 14:39:19.623 2688 TRACE glance.registry.api.v1.images
 return exc_filters.handle_connect_error(engine)
 2015-03-06 14:39:19.623 2688 TRACE glance.registry.api.v1.images   File
 /usr/lib/python2.7/site-packages/oslo/db/sqlalchemy/exc_filters.py, line
 351, in handle_connect_error
 2015-03-06 14:39:19.623 2688 TRACE glance.registry.api.v1.images
 handler(ctx)
 2015-03-06 14:39:19.623 2688 TRACE glance.registry.api.v1.images   File
 /usr/lib/python2.7/site-packages/oslo/db/sqlalchemy/exc_filters.py, line
 323, in handler
 2015-03-06 14:39:19.623 2688 TRACE glance.registry.api.v1.images
 context.is_disconnect)
 2015-03-06 14:39:19.623 2688 TRACE glance.registry.api.v1.images   File
 /usr/lib/python2.7/site-packages/oslo/db/sqlalchemy/exc_filters.py, line
 254, in _raise_operational_errors_directly_filter
 2015-03-06 14:39:19.623 2688 TRACE glance.registry.api.v1.images raise
 operational_error
 2015-03-06 14:39:19.623 2688 TRACE glance.registry.api.v1.images
 OperationalError: (OperationalError) (1045, Access denied for user
 'glance'@'localhost' (using password: YES)) None None
 2015-03-06 14:39:19.623 2688 TRACE glance.registry.api.v1.images
 2015-03-06 14:39:19.624 2688 INFO glance.wsgi.server
 [73bbe61a-d10a-4526-a6a1-1d8a50a56d0e - - - - -] Traceback (most recent
 call last):
   File /usr/lib/python2.7/site-packages/eventlet/wsgi.py, line 433, in
 handle_one_response
 result = self.application(self.environ, start_response)
   File /usr/lib/python2.7/site-packages/webob/dec.py, line 130, in
 __call__
 resp = self.call_func(req, *args, **self.kwargs)



 
 nathan stratton | vp technology | broadsoft, inc | +1-240-404-6580 |
 www.broadsoft.com

 On Fri, Mar 6, 2015 at 3:08 PM, Fox, Kevin M kevin@pnnl.gov wrote:

   What about the other glance log files? It looks like it may be calling
  out to a different server and that's failing...
 Thanks,
 Kevin
  --
 *From:* Nathan Stratton [nat...@robotics.net]
 *Sent:* Friday, March 06, 2015 11:42 AM
 *To:* 

Re: [Openstack-operators] Instances stuck in 'Build'

2015-01-09 Thread Erik McCormick
I think you can get into this state if a download from Glance stalls. Also
maybe if you run out of disk space on the compute node while it's
downloading. There are probably others as well. Eventually though, all of
these situations should result in an ERROR state. How long it takes just
depends on what is broken.
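
A couple of quick things worth checking on the suspect compute node (paths
assume the default libvirt image cache location):

  # partially-downloaded base images end up in the image cache
  ls -lh /var/lib/nova/instances/_base/

  # and make sure the instances directory isn't simply out of space
  df -h /var/lib/nova/instances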

On Fri, Jan 9, 2015 at 10:45 AM, Alex Leonhardt aleonhardt...@gmail.com
wrote:

 Thanks, I'll try and find out next time this happens.

 To be precise though, the state is BUILD and the Task stays at Build (as
 if some API call timed out??)...

 Alex


 On Fri Jan 09 2015 at 13:20:20 Erik McCormick emccorm...@cirrusseven.com
 wrote:


 On Jan 9, 2015 4:43 AM, Alex Leonhardt aleonhardt...@gmail.com wrote:
 
  Hi,
 
  recently I started having problems when creating instances - every now
 and then they seem to get stuck in a 'Build' state. After I terminate that
 instance and try again, it mostly goes through fine. This is happening
 across multiple projects/tenants and seems to affect all users.
 
  Does anyone know this issue and/or has any suggestions or links or docs
 to read up on to troubleshoot this ?
 
  Thanks!
  Alex
 
 

 It sounds like you've got one or more compute nodes misbehaving. Check to
  see which node the failed instance got scheduled on and check logs there
 also. Try and manually boot an instance there.

 Cheers,
 Erik




Re: [Openstack-operators] Instances stuck in 'Build'

2015-01-09 Thread Erik McCormick
On Jan 9, 2015 4:43 AM, Alex Leonhardt aleonhardt...@gmail.com wrote:

 Hi,

 recently I started having problems when creating instances - every now
and then they seem to get stuck in a 'Build' state. After I terminate that
instance and try again, it mostly goes through fine. This is happening
across multiple projects/tenants and seems to affect all users.

 Does anyone know this issue and/or has any suggestions or links or docs
to read up on to troubleshoot this ?

 Thanks!
 Alex


It sounds like you've got one or more compute nodes misbehaving. Check to
see which node the failed instance got scheduled on and check logs there
also. Try and manually boot an instance there.
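
As an admin, something along these lines will show where it landed and any
recorded fault (field names match the nova CLI of that era):

  nova show <instance-id> | grep -E 'OS-EXT-SRV-ATTR:host|fault|vm_state|task_state'

  # then on that host, follow the compute log while retrying a boot
  tail -f /var/log/nova/nova-compute.log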

Cheers,
Erik


Re: [Openstack-operators] Glance image download failing in Juno with RBD store

2014-11-11 Thread Erik McCormick
Thanks! I was searching Launchpad and just couldn't find it, but I'd
started upgrading anyway out of desperation. RDO packages are way behind.

Cheers,
Erik
 On Nov 11, 2014 5:20 AM, Yaguang Tang heut2...@gmail.com wrote:

 Hi Erik,

  There is a bug in glance_store 0.1.1 which causes this issue; please
  upgrade glance_store to the latest 0.1.9 in your environment.
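
  A sketch of doing that (the package names are assumptions; check what your
  distribution actually ships):

    # see which version is currently installed
    pip show glance_store || rpm -q python-glance-store

    # pull in a fixed release and restart the glance services
    pip install --upgrade 'glance_store>=0.1.9'
    service openstack-glance-api restart
    service openstack-glance-registry restart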

 On Tue, Nov 11, 2014 at 12:45 AM, Erik McCormick 
 emccorm...@cirrusseven.com wrote:

 Hi everyone,

 I've got a new deployment of Juno backed by Ceph set up and am getting a
 rather unhelpful error message when attempting to download a stored image
 out of Glance. I have no trouble uploading images or listing the image
 details, and I'm able to manually manipulate and copy the image file with
 the rbd client, so I'm fairly certain my Ceph permissions and connections
 are working properly. Any help resolving this would be greatly appreciated.
 The error is as follows:

 2014-11-10 11:43:21.300 19337 DEBUG glance.registry.client.v1.client
 [df7fad91-88f2-4e63-8f0f-22e52db33362 29c52a0d0fe0442092a4fdcac9ee5f68
 786367085098450cad38bd4aebb322f0 - - -] Registry
 request GET /images/bc388623-c6e4-49f2-a531-874617b3153b HTTP 200 request
 id req-1343c49d-b3be-470a-a5e4-a15a576bb5f5 do_request
 /usr/lib/python2.7/site-packages/glance/registry/client/v1/c
 lient.py:122
 2014-11-10 11:43:21.362 19337 INFO glance.wsgi.server
 [df7fad91-88f2-4e63-8f0f-22e52db33362 29c52a0d0fe0442092a4fdcac9ee5f68
 786367085098450cad38bd4aebb322f0 - - -] Traceback (most recent c
 all last):
   File /usr/lib/python2.7/site-packages/eventlet/wsgi.py, line 433, in
 handle_one_response
 result = self.application(self.environ, start_response)
   File /usr/lib/python2.7/site-packages/webob/dec.py, line 130, in
 __call__
 resp = self.call_func(req, *args, **self.kwargs)
   File /usr/lib/python2.7/site-packages/webob/dec.py, line 195, in
 call_func
 return self.func(req, *args, **kwargs)
   File /usr/lib/python2.7/site-packages/glance/common/wsgi.py, line
 394, in __call__
 response = req.get_response(self.application)
   File /usr/lib/python2.7/site-packages/webob/request.py, line 1296, in
 send
 application, catch_exc_info=False)
   File /usr/lib/python2.7/site-packages/webob/request.py, line 1260, in
 call_application
 app_iter = application(self.environ, start_response)
   File /usr/lib/python2.7/site-packages/webob/dec.py, line 130, in
 __call__
 resp = self.call_func(req, *args, **self.kwargs)
   File /usr/lib/python2.7/site-packages/webob/dec.py, line 195, in
 call_func
 return self.func(req, *args, **kwargs)
   File /usr/lib/python2.7/site-packages/osprofiler/web.py, line 106, in
 __call__
 return request.get_response(self.application)
   File /usr/lib/python2.7/site-packages/webob/request.py, line 1296, in
 send
 application, catch_exc_info=False)
   File /usr/lib/python2.7/site-packages/webob/request.py, line 1260, in
 call_application
 app_iter = application(self.environ, start_response)
   File
 /usr/lib/python2.7/site-packages/keystonemiddleware/auth_token.py, line
 748, in __call__
 return self._call_app(env, start_response)
   File
 /usr/lib/python2.7/site-packages/keystonemiddleware/auth_token.py, line
 684, in _call_app
 return self._app(env, _fake_start_response)
   File /usr/lib/python2.7/site-packages/webob/dec.py, line 130, in
 __call__
 resp = self.call_func(req, *args, **self.kwargs)
   File /usr/lib/python2.7/site-packages/webob/dec.py, line 195, in
 call_func
 return self.func(req, *args, **kwargs)
   File /usr/lib/python2.7/site-packages/glance/common/wsgi.py, line
 394, in __call__
 response = req.get_response(self.application)
   File /usr/lib/python2.7/site-packages/webob/request.py, line 1296, in
 send
 application, catch_exc_info=False)
   File /usr/lib/python2.7/site-packages/webob/request.py, line 1260, in
 call_application
 app_iter = application(self.environ, start_response)
   File /usr/lib/python2.7/site-packages/paste/urlmap.py, line 203, in
 __call__
 return app(environ, start_response)
   File /usr/lib/python2.7/site-packages/webob/dec.py, line 144, in
 __call__
 return resp(environ, start_response)
   File /usr/lib/python2.7/site-packages/routes/middleware.py, line 131,
 in __call__
 response = self.app(environ, start_response)
   File /usr/lib/python2.7/site-packages/webob/dec.py, line 144, in
 __call__
 return resp(environ, start_response)
   File /usr/lib/python2.7/site-packages/webob/dec.py, line 130, in
 __call__
 resp = self.call_func(req, *args, **self.kwargs)
   File /usr/lib/python2.7/site-packages/webob/dec.py, line 195, in
 call_func
 return self.func(req, *args, **kwargs)
   File /usr/lib/python2.7/site-packages/glance/common/wsgi.py, line
 683, in __call__
 request, **action_args)
   File /usr/lib/python2.7/site-packages/glance/common/wsgi.py, line
 707, in dispatch
 return method(*args, **kwargs)
   File /usr/lib/python2.7

Re: [Openstack-operators] Ceph Implementations

2014-11-07 Thread Erik McCormick
On Fri, Nov 7, 2014 at 11:43 AM, MailingLists - EWS 
mailingli...@expresswebsystems.com wrote:

  Since there seem to be a fair number of people on this list running Ceph
  with OpenStack, I wanted to ask what configuration most people are using
  for their Ceph/OpenStack deployments.



 We tried installing Ceph on CentOS 6 (storage nodes and nova nodes) but
 discovered that the kernel that is included in RDO (for nova) doesn’t
 include rbd support in qemu-img. No problem we thought, we will just use
 the kernel-lt from elrepo which does include support for rbd. However after
 installing those kernels we discovered that our GRE based networking didn’t
 work. After some additional troubleshooting we determined that the
 kernel-lt didn’t include the backported ovs vport GRE capability that the
 RDO kernel has (it apparently was added in 3.11 but kernel-lt is 3.10).


You shouldn't need a special kernel unless you're going to do things like
try to mount an rbd image directly on one of your nodes. I'm running my
Icehouse cluster with the stock 2.6 kernel, and the same goes for the ceph
nodes if you run them separately. What you do need are the qemu-kvm and
qemu-img packages from the ceph-extras repo on your compute nodes, as the
stock Red Hat ones do not have RBD support compiled in. If you're going to
run Juno, though, you'll need to go to CentOS 7, as I don't believe there are
RDO packages for 6.5. That eliminates the need for the custom qemu packages.
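
A sketch of that swap on a CentOS 6 / Icehouse compute node (the repo id here
is an assumption; use whatever your ceph-extras .repo file defines):

  yum --enablerepo=ceph-extras install qemu-kvm qemu-img
  # afterwards rbd should show up in the supported formats list
  qemu-img --help | grep -i 'supported formats'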



 So we have to decide how we want to implement our cluster and cloud.



 Are you implementing Openstack on Debian/Ubuntu to get around these sorts
 of issues?

 Are you running newer mainline kernels (kernel-ml) with CentOS 6?

 Or are you running vlan based tenant isolation for your tenant networks?

 Are you running custom compiled kernels?

 Is there something else I am missing?


I'm using GRE on Icehouse and VXLAN on Juno. Both work fine. Be aware there
is at least one bug I've run into with Juno and Glance with RBD that will
cause slow performance. It's easily remedied, though, either by tweaking the
code or by setting your chunk size to 8192 instead of 8.

https://bugs.launchpad.net/glance/+bug/1390386
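
If you take the configuration route, the knob is the RBD chunk size in
glance-api.conf (the section name varies by release; on Juno the glance_store
options typically live under [glance_store], on older builds under [DEFAULT]):

  [glance_store]
  rbd_store_chunk_size = 8192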




 Any input from your experience is greatly appreciated.



 Warm regards,



 Tom Walsh

 Express Web Systems



Cheers,
Erik