[Openstack-operators] [neutron] [QoS] Interface/API for review.

2015-05-04 Thread Miguel Ángel Ajo

Hello, we're working [1] on the QoS API definition and we’d like to get some 
feedback from
operators on [2] and [3].

We’re scoping the work we intend to do in Liberty to make sure it will be 
doable, but we plan
to extend it via subsequent iterations:
* traffic marking (DSCP, VLAN 802.1p, IPv6 flow labels, ...)
* L3/L4 protocol matching of rules (e.g. UDP port=5060, TCP port=80)
* bandwidth guarantees in conjunction with nova scheduling
* congestion notification…
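
To give a feel for the workflow we have in mind, here's a rough sketch
(the command names and options are exactly what is under review in [2] and
[3], so treat this as illustrative, not final):

  # create a policy and attach a bandwidth-limit rule (illustrative values)
  neutron qos-policy-create bw-limiter
  neutron qos-bandwidth-limit-rule-create bw-limiter \
    --max-kbps 3000 --max-burst-kbps 300
  # apply the policy to an existing port
  neutron port-update <port-id> --qos-policy bw-limiter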


Best regards,
Miguel Ángel Ajo

[1] https://wiki.openstack.org/wiki/Meetings/QoS
[2] https://etherpad.openstack.org/p/neutron-qos-api-preview
[3] https://review.openstack.org/#/c/88599


Miguel Ángel Ajo

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [neutron] [QoS] Interface/API for review.

2015-05-04 Thread Jesse Keating
Thanks Miguel!

The command line reference set looks good, although I'm curious what the
openstackclient equivalents will look like.  I've also left a comment on the
spec review.
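
Just guessing at the shape, the openstackclient spellings might end up
looking something like the following -- purely hypothetical names, nothing
like this exists yet:

  openstack network qos policy create bw-limiter
  openstack network qos rule create bw-limiter \
    --type bandwidth-limit --max-kbps 3000
  openstack port set <port-id> --qos-policy bw-limiter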


- jlk

On Mon, May 4, 2015 at 7:29 AM, Miguel Ángel Ajo wrote:

>
> Hello, we're working [1] on the QoS API definition and we’d like to get
> some feedback from
> operators on [2] and [3].
>
> We’re scoping the work we intend to do in Liberty to make sure it will be
> doable, but we plan
> to extend it via subsequent iterations:
> * traffic marking (DSCP, VLAN 802.1p, IPv6 flow labels, ...)
> * L3/L4 protocol matching of rules (e.g. UDP port=5060, TCP port=80)
> * bandwidth guarantees in conjunction with nova scheduling
> * congestion notification…
>
>
> Best regards,
> Miguel Ángel Ajo
>
> [1] https://wiki.openstack.org/wiki/Meetings/QoS
> [2] https://etherpad.openstack.org/p/neutron-qos-api-preview
> [3] https://review.openstack.org/#/c/88599
>
>
> Miguel Ángel Ajo
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] expanding to 2nd location

2015-05-04 Thread Jonathan Proulx
Hi All,

We're about to expand our OpenStack Cloud to a second datacenter.
Anyone have opinions they'd like to share as to what I should be
worrying about or how to structure this?  Should I be
thinking cells or regions (or maybe both)?  Any obvious or not so
obvious pitfalls I should try to avoid?

Current scale is about 75 hypervisors.  Running Juno on Ubuntu 14.04
using Ceph for volume storage, ephemeral block devices, and image
storage (as well as object store).  Bulk data storage for most (but by
no means all) of our workloads is at the current location (not that
that matters I suppose).

The second location is about 150 km away and we'll have 10G (at least)
between sites. The expansion will be approximately the same size as
the existing cloud, maybe slightly larger, and given site capacities the
new location is also more likely to be where any future growth goes.

Thanks,
-Jon

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] expanding to 2nd location

2015-05-04 Thread Allamaraju, Subbu
I suggest building a new AZ (“region” in OpenStack parlance) in the new 
location. In general I would avoid setting up a control plane to operate across 
multiple facilities unless the cloud is very large.

> On May 4, 2015, at 1:40 PM, Jonathan Proulx  wrote:
> 
> Hi All,
> 
> We're about to expand our OpenStack Cloud to a second datacenter.
> Anyone have opinions they'd like to share as to what I should be
> worrying about or how to structure this?  Should I be
> thinking cells or regions (or maybe both)?  Any obvious or not so
> obvious pitfalls I should try to avoid?
> 
> Current scale is about 75 hypervisors.  Running juno on Ubuntu 14.04
> using Ceph for volume storage, ephemeral block devices, and image
> storage (as well as object store).  Bulk data storage for most (but by
> no means all) of our workloads is at the current location (not that
> that matters I suppose).
> 
> Second location is about 150km away and we'll have 10G (at least)
> between sites. The expansion will be approximately the same size as
> the existing cloud, maybe slightly larger, and given site capacities the
> new location is also more likely to be where any future growth goes.
> 
> Thanks,
> -Jon
> 


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] expanding to 2nd location

2015-05-04 Thread Jesse Keating
I agree with Subbu. You'll want that to be a region so that the control
plane is mostly contained. Only Keystone (and Swift, if you have that) would
be doing lots of site-to-site communication to keep databases in sync.
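
In practice "a region" mostly means the new site's services get registered
in the shared Keystone catalog under a second region name. A minimal sketch
with the keystone v2 CLI (the service ID and URLs below are placeholders):

  keystone endpoint-create --region RegionTwo \
    --service-id <nova-service-id> \
    --publicurl   'http://cloud.site2.example.com:8774/v2/%(tenant_id)s' \
    --internalurl 'http://cloud.site2.example.com:8774/v2/%(tenant_id)s' \
    --adminurl    'http://cloud.site2.example.com:8774/v2/%(tenant_id)s'

Clients then pick a site with --os-region-name (or the region dropdown in
Horizon).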

http://docs.openstack.org/arch-design/content/multi_site.html is a good
read on the topic.


- jlk

On Mon, May 4, 2015 at 1:58 PM, Allamaraju, Subbu  wrote:

> I suggest building a new AZ (“region” in OpenStack parlance) in the new
> location. In general I would avoid setting up a control plane to operate
> across multiple facilities unless the cloud is very large.
>
> > On May 4, 2015, at 1:40 PM, Jonathan Proulx  wrote:
> >
> > Hi All,
> >
> > We're about to expand our OpenStack Cloud to a second datacenter.
> > Anyone have opinions they'd like to share as to what I should be
> > worrying about or how to structure this?  Should I be
> > thinking cells or regions (or maybe both)?  Any obvious or not so
> > obvious pitfalls I should try to avoid?
> >
> > Current scale is about 75 hypervisors.  Running juno on Ubuntu 14.04
> > using Ceph for volume storage, ephemeral block devices, and image
> > storage (as well as object store).  Bulk data storage for most (but by
> > no means all) of our workloads is at the current location (not that
> > that matters I suppose).
> >
> > Second location is about 150km away and we'll have 10G (at least)
> > between sites. The expansion will be approximately the same size as
> > the existing cloud, maybe slightly larger, and given site capacities the
> > new location is also more likely to be where any future growth goes.
> >
> > Thanks,
> > -Jon
> >
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] expanding to 2nd location

2015-05-04 Thread Joe Topjian
Hi Jon,

> We're about to expand our OpenStack Cloud to a second datacenter.

Congratulations! :)


> Anyone have opinions they'd like to share as to what I should be
> worrying about or how to structure this?


What services will be shared between the two locations? Keystone with db
replication is usually quite easy and Glance with some type of file sync is
also easy.

Also think about network connectivity. Will the new location have a local
gateway to the internet? Or will all traffic come back to the original
location in order to get out? That's outside of OpenStack and more of a
general network/sysadmin thing, but it will determine how you handle
OpenStack outages when a network outage happens.


> Should I be thinking cells or regions (or maybe both)?  Any obvious or not
> so obvious pitfalls I should try to avoid?
>

I have never used Cells, but that's mostly due to being able to accomplish
everything with Regions. Check out some of my posts over the last year on
the regular OpenStack list about Regions.

Also think about how you'll handle quotas. Do you want each user to have a
separate quota for each side? Or share a quota? I'm not aware of a
"supported" way, in OpenStack or a side project, of doing the latter. We've
been doing this ourselves for several years using out-of-band scripts.
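
As a very rough sketch of the out-of-band approach (tenant IDs, region names
and limits below are made-up examples; our real scripts do more bookkeeping):

  # push per-tenant quotas to both regions from one source of truth
  while read tenant_id instances cores ram_mb; do
    for region in RegionOne RegionTwo; do
      nova --os-region-name "$region" quota-update \
        --instances "$instances" --cores "$cores" --ram "$ram_mb" "$tenant_id"
    done
  done < tenant-quotas.txt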


> Current scale is about 75 hypervisors.  Running juno on Ubuntu 14.04
> using Ceph for volume storage, ephemeral block devices, and image
> storage (as well as object store).  Bulk data storage for most (but by
> no means all) of our workloads is at the current location (not that
> that matters I suppose).
>
> Second location is about 150km away and we'll have 10G (at least)
> between sites.


For one of our clouds, the two regions are 300km apart on a 10G connection.
We're seeing approximately 3.7ms ping times.

Some short notes:

* Galera replication works well -- we don't see any noticeable lag.

* We replicate Glance images with a simple rsync script (a rough sketch is
below).

* We have one site designated as "master" and that's where the main DNS
name points. Each site has a separate DNS name so you could access each one
using a specific URL (cloud.example.com, site1.cloud.example.com,
site2.cloud.example.com). Whichever one you log into, the dashboard lets you
reach the other region through Horizon.
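
The rsync script really is minimal -- something along these lines, assuming a
filesystem-backed Glance store (paths and hostname are placeholders; with a
Ceph-backed store you'd approach this differently):

  # nightly sync of the image files from the master site to site2;
  # image metadata lives in the replicated database, so only the
  # files themselves need to travel
  rsync -avz --delete /var/lib/glance/images/ \
    glance@site2.cloud.example.com:/var/lib/glance/images/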

Hope that helps... let me know if you have any questions on any of the
above.

Joe
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] expanding to 2nd location

2015-05-04 Thread Tom Fifield



On 05/05/15 04:40, Jonathan Proulx wrote:

Hi All,

We're about to expand our OpenStack Cloud to a second datacenter.
Anyone have opinions they'd like to share as to what I should be
worrying about or how to structure this?  Should I be
thinking cells or regions (or maybe both)?  Any obvious or not so
obvious pitfalls I should try to avoid?

Current scale is about 75 hypervisors.  Running juno on Ubuntu 14.04
using Ceph for volume storage, ephemeral block devices, and image
storage (as well as object store).  Bulk data storage for most (but by
no means all) of our workloads is at the current location (not that
that matters I suppose).

Second location is about 150km away and we'll have 10G (at least)
between sites. The expansion will be approximately the same size as
the existing cloud, maybe slightly larger, and given site capacities the
new location is also more likely to be where any future growth goes.



Do you need users to be able to see it as one cloud, with a single API 
endpoint?


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] expanding to 2nd location

2015-05-04 Thread Tim Bell

CERN runs two data centres, in Geneva (3.5MW) and Budapest (2.7MW), around
1,200 km apart. We have two 100Gb/s links between the two sites and a latency
of around 22ms.

We run this as a single cloud with 13 cells. Each cell is only in one data
centre.

We wanted a single API endpoint from the user perspective and thus did not
use regions.
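
For anyone who hasn't looked at cells (v1) before, child cells are registered
from the API cell roughly like this -- a sketch only; names, credentials and
hosts below are placeholders, not our actual configuration:

  # on the API cell, register a child cell (cells v1)
  nova-manage cell create --name=geneva01 --cell_type=child \
    --username=guest --password=secret \
    --hostname=rabbit.geneva01.example.org --port=5672 \
    --virtual_host=/ --woffset=1.0 --wweight=1.0
  # each cell also needs [cells] enable=true and the right cell_type
  # in its nova.conf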

There are things to consider, such as:

- Availability zone setup, so that people can choose which centre to place
work in (such as for disaster recovery)
- Scheduling of work for projects and localisation of the volumes for
those VMs (we've not found a good solution for this one)

In an ideal world, we'd have a high-availability API layer for the cells
across the two sites. We've not got that far yet.

Tim

On 5/5/15, 3:42 AM, "Tom Fifield"  wrote:

>
>
>On 05/05/15 04:40, Jonathan Proulx wrote:
>> Hi All,
>>
>> We're about to expand our OpenStack Cloud to a second datacenter.
>> Anyone have opinions they'd like to share as to what I should be
>> worrying about or how to structure this?  Should I be
>> thinking cells or regions (or maybe both)?  Any obvious or not so
>> obvious pitfalls I should try to avoid?
>>
>> Current scale is about 75 hypervisors.  Running juno on Ubuntu 14.04
>> using Ceph for volume storage, ephemeral block devices, and image
>> storage (as well as object store).  Bulk data storage for most (but by
>> no means all) of our workloads is at the current location (not that
>> that matters I suppose).
>>
>> Second location is about 150km away and we'll have 10G (at least)
>> between sites. The expansion will be approximately the same size as
>> the existing cloud, maybe slightly larger, and given site capacities the
>> new location is also more likely to be where any future growth goes.
>
>
>Do you need users to be able to see it as one cloud, with a single API
>endpoint?
>


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators