Re: [Openstack-operators] how to get glance images for a specific tenant with the openstack client ?

2016-01-27 Thread Saverio Proto
> We have an image promotion process that does this for us.  The command I use
> to get images from a specific tenant is:
>
> glance --os-image-api-version 1 image-list --owner=
>
> I'm sure using the v1 API will make some cringe, but I haven't found
> anything similar in the v2 API.
>

I used this solution, and it worked very nicely for me.

Also, running openstack image list --long and then grepping for the
project ID does the job.
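As a concrete sketch of that grep workaround (the sample listing below is
fabricated; real "openstack image list --long" output has more columns):

```shell
# Hypothetical, trimmed sample of "openstack image list --long" output;
# the image IDs and project IDs are made up for illustration.
sample='a1b2 cirros active proj-1111
c3d4 ubuntu active proj-2222
e5f6 centos active proj-1111'

# The workaround: pipe the long listing through grep for one project ID
project=proj-1111
matches=$(printf '%s\n' "$sample" | grep "$project")
printf '%s\n' "$matches"
```

This filtering is done client-side over the full listing, which is exactly
why it gets slow on large setups and a server-side filter would help.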

In the long term I would like to use only python-openstackclient.
However, having to pipe output into grep is rather slow for large setups.
What is the proper way to ask the developers to include something like:

openstack image list --project 

in the next release cycle? Should I write a spec for this kind of change?

Saverio

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Storage backend for glance

2016-01-27 Thread Sławek Kapłoński
Hello,

I want to install OpenStack with at least two glance nodes (to have HA),
but with a local filesystem as the glance storage backend. Is it possible
to use something like that in a setup with two glance nodes? Has anyone
here already built something like that?
I'm asking because, AFAIK, if an image is stored on one glance server and
nova-compute asks the other glance host to download it, the image will not
be available and the instance will end up in ERROR state.
So maybe someone has run a similar setup somehow (maybe with NFS or
something like that)? What is your experience with it?

-- 
Best regards / Pozdrawiam
Sławek Kapłoński
sla...@kaplonski.pl





Re: [Openstack-operators] Storage backend for glance

2016-01-27 Thread Hauke Bruno Wollentin
Hi Slawek,

we use a shared NFS export to store Glance images. That enables HA in
(imho) the simplest way.

For your setup you could use something like an hourly/daily/whenever rsync
job and set the 'second' Glance node to passive/standby in the load
balancer. It would also be possible to run some kind of cluster between the
two nodes, like DRBD or GlusterFS (but that might be overkill).

cheers,
hauke



Re: [Openstack-operators] Storage backend for glance

2016-01-27 Thread Joe Topjian
Yup, it's definitely possible. All Glance nodes will need to share the same
database as well as the same file system. Common ways of sharing the file
system are to mount /var/lib/glance/images either from NFS (like you
mentioned) or Gluster.

I've done both in the past with no issues. The usual caveats with shared
file systems apply: file permissions, ownership, and such. Other than that,
you shouldn't have any problems.
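For reference, a minimal sketch of what this might look like (the export
path and server name are assumptions, not taken from this thread):

```ini
# On every Glance node, /var/lib/glance/images is the same NFS mount
# (e.g. nfs-server:/export/glance); glance-api.conf then points the
# default filesystem store at that shared directory:
[glance_store]
default_store = file
filesystem_store_datadir = /var/lib/glance/images
```

Since all nodes also share the same database, any glance-api instance can
then serve any image.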

Hope that helps,
Joe



[Openstack-operators] User Committee Changes

2016-01-27 Thread Shilla Saebi
Hi Everyone,

We have an update to the UC. Edgar Magana has been approved to be the board
representative to the User Committee and is replacing Subbu Allamaraju.
Welcome Edgar and we look forward to working with you!

Shilla


Re: [Openstack-operators] Storage backend for glance

2016-01-27 Thread Fox, Kevin M
Ceph would work pretty well for that use case too. We've run a Ceph
cluster with two OSDs, with the replication size set to 2, to back both
Cinder and Glance for HA. Nothing complicated was needed to get it
working; less complicated than DRBD, I think. You can then also easily
scale it out as needed.
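For illustration, a glance-api.conf sketch for an RBD-backed store (the
pool and user names are assumptions; the pool's replication factor is set
on the Ceph side, e.g. with "ceph osd pool set images size 2"):

```ini
# Point Glance at a Ceph RBD pool instead of the local filesystem
[glance_store]
default_store = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
```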

Thanks,
Kevin



Re: [Openstack-operators] User Committee Changes

2016-01-27 Thread Edgar Magana
Hello All,

Thank you so much, Shilla and Jon, for the support and confidence. I am
really looking forward to working with you as well.

This is a great opportunity and I am very excited about it. I will do my
best to provide meaningful feedback to the Foundation, based on my
experience as an operator and a user.

Happy Week for Everybody!

Edgar



Re: [Openstack-operators] User Committee Changes

2016-01-27 Thread Robert Starmer
Congratulations Edgar!

Robert



Re: [Openstack-operators] Storage backend for glance

2016-01-27 Thread Robert Starmer
A GlusterFS backend works great for shared Glance, and it can be
configured for a bit of redundancy at the disk level (unlike
non-distributed NFS, which needs the NFS server to be present), much like
the Ceph model Kevin suggests. If your database is also resilient (e.g.
some form of MySQL replication), then the glance-api services are
effectively stateless, and you've protected both the disk backend
(Gluster/Ceph) and the image catalog (MySQL).
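As a hedged illustration of the "effectively stateless" point: with a
shared backend and database, two glance-api instances can sit behind a
plain load balancer (all addresses below are placeholders):

```
# haproxy.cfg fragment: one VIP in front of two glance-api nodes
listen glance_api
    bind 10.0.0.100:9292
    balance roundrobin
    server glance1 10.0.0.11:9292 check
    server glance2 10.0.0.12:9292 check
```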

R



Re: [Openstack-operators] DVR and public IP consumption

2016-01-27 Thread Robert Starmer
You can't get rid of the "External" address, as it's used to direct return
traffic to the right router node.  DVR as implemented is really just a
local NAT gateway per physical compute node.  The outside of your NAT needs
to be publicly unique, so it needs its own address.  Some SDN solutions
can provide a truly distributed router model, because they globally know
the inside state of the NAT environment and can forward packets back to
the internal source properly, regardless of which distributed forwarder
receives the incoming "external" packets.

If the number of external addresses consumed is an issue, you might
consider the dual-gateway HA model instead of DVR.  This uses the classic
multi-router model, where one router takes on the task of forwarding
packets and the other device just acts as a backup.  You do still have a
software bottleneck at your router, unless you also use one of the plugins
that supports hardware L3 (last I checked, Juniper, Arista, Cisco, etc. all
provide an L3 plugin that is HA capable), but you only burn 3 external
addresses for the router (and 3 internal network addresses per tenant-side
interface, if that matters).
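To make the address accounting concrete, a back-of-the-envelope sketch
(the node counts are invented; the per-router and per-compute-host rules
are the ones discussed in this thread):

```shell
# External IP consumption, roughly:
#   with DVR:    1 SNAT IP per router + 1 IP per compute host running
#                an L3 agent in dvr mode + 1 IP per tenant floating IP
#   without DVR: 1 SNAT IP per router + 1 IP per tenant floating IP
routers=10
compute_nodes=20
floating_ips=50

dvr_total=$((routers + compute_nodes + floating_ips))
non_dvr_total=$((routers + floating_ips))

echo "external IPs with DVR:    $dvr_total"     # 80
echo "external IPs without DVR: $non_dvr_total" # 60
```

So with these (made-up) numbers, DVR costs 20 extra addresses: one per
compute host.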

Hope that clarifies a bit,
Robert

On Tue, Jan 26, 2016 at 4:14 AM, Carl Baldwin  wrote:

> On Thu, Jan 14, 2016 at 2:45 AM, Tomas Vondra  wrote:
> > Hi!
> > I have just deployed an OpenStack Kilo installation with DVR and expected
> > that it will consume one Public IP per network node as per
> >
> http://assafmuller.com/2015/04/15/distributed-virtual-routing-floating-ips/
> ,
> > but it still eats one per virtual Router.
> > What is the correct behavior?
>
> Regardless of DVR, a Neutron router burns one IP per virtual router
> which it uses to SNAT traffic from instances that do not have floating
> IPs.
>
> When you use DVR, an additional IP is consumed for each compute host
> running an L3 agent in DVR mode.  There has been some discussion about
> how this can be eliminated but no action has been taken to do this.
>
> > Otherwise, it works as a DVR should according to documentation. There are
> > router namespaces at both compute and network nodes, snat namespaces at
> the
> > network nodes and fip namespaces at the compute nodes. Every router has a
> > router_interface_distributed and a router_centralized_snat with private
> > IPs, however the router_gateway has a public IP, which I would like to
> > get rid of to increase density.
>
> I'm not sure if it is possible to avoid burning these IPs at this
> time.  Maybe someone else can chime in with more detail.
>
> Carl
>


[Openstack-operators] [app-catalog] App Catalog IRC Meeting CANCELLED this week

2016-01-27 Thread Christopher Aedo
Due to scheduling conflicts and a very light agenda, there will be no
Community App Catalog IRC meeting this week.

Our next meeting is scheduled for February 4th, the agenda can be found here:
https://wiki.openstack.org/wiki/Meetings/app-catalog

One thing on the agenda for the 2/4/2016 meeting is the topic of
implementing an API for the App Catalog, and whether we'll have a
strong commitment of the necessary resources to continue in the
direction agreed upon during the Tokyo summit.  If you have anything
to say on that subject please be sure to join us NEXT week!

-Christopher



Re: [Openstack-operators] DVR and public IP consumption

2016-01-27 Thread Fox, Kevin M
But there already is a second external address: the FIP address that's
doing the NAT. Is there a double NAT? I'm a little confused.

Thanks,
Kevin



Re: [Openstack-operators] DVR and public IP consumption

2016-01-27 Thread Robert Starmer
I think I've created a bit of confusion, because I forgot that DVR still
does SNAT (generic, non-floating-IP NAT) on a central network node, just
like in the non-DVR model.  The extra address that is consumed is allocated
to a FIP specific namespace when a DVR is made responsible for supporting a
tenant's floating IP, and the question then is: Why do I need this _extra_
external address from the floating IP pool for the FIP namespace, since
it's the allocation of a tenant requested floating IP to a tenant VM that
triggers the DVR to implement the FIP namespace function in the first
place.

In both the Paris and Vancouver DVR presentations the point was: "We add
distributed FIP support at the expense of an _extra_ external address per
device, but the FIP namespace is then shared across all tenants". Given
that there is no
"external" interface for the DVR interface for floating IPs until at least
one tenant allocates one, a new namespace needs to be created to act as the
termination for the tenant's floating IP.  A normal tenant router would
have an address allocated already, because it has a port allocated onto the
external network (this is the address that SNAT overloads for those
non-floating associated machines that lets them communicate with the
Internet at large), but in this case, no such interface exists until the
namespace is created and attached to the external network, so when the
floating IP port is created, an address is simply allocated from the
External (e.g. floating) pool for the interface.  And _then_ the floating
IP is allocated to the namespace as well. The fact that this extra address
is used is a part of the normal port allocation process (and default
port-security anti-spoofing processes) that exist already, and simplifies
the process of moving tenant-allocated floating addresses around (the port
state for the floating namespace doesn't change; it keeps its special MAC
and address regardless of whatever else goes on). So don't think of it as
a floating IP allocated to the DVR; it's just the DVR's local
representative for its port on the external network.  Tenant addresses are
then "on top" of this setup.

So, inefficient, yes.  Part of DVR history, yes.  Confusing to us mere
network mortals, yes.  But that's how I see it. And sorry for the SNAT
reference; that was just my own additional layer of "this is how it should
be" on top.

Robert

>