[openstack-dev] [Nova] VMware CI

2015-04-11 Thread Gary Kotton
Hi,
Can a core please take a look at https://review.openstack.org/#/c/171037. The 
CI is broken due to commit e7ae5bb7fbdd5b79bde8937958dd0a645554a5f0.
Thanks
Gary
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Neutron scaling datapoints?

2015-04-11 Thread Kevin Benton
So IIUC tooz would be handling the liveness detection for the agents. It
would be nice to get rid of that logic in Neutron and just register
callbacks for rescheduling the dead.

Where does it store that state, does it persist timestamps to the DB like
Neutron does? If so, how would that scale better? If not, who does a given
node ask to know if an agent is online or offline when making a scheduling
decision?

However, before (what I assume is) the large code change to implement tooz,
I would like to quantify that the heartbeats are actually a bottleneck.
When I was doing some profiling of them on the master branch a few months
ago, processing a heartbeat took an order of magnitude less time (<50ms)
than the 'sync routers' task of the l3 agent (~300ms). A few query
optimizations might buy us a lot more headroom before we have to fall back
to large refactors.
Kevin Benton wrote:

>
> One of the most common is the heartbeat from each agent. However, I
> don't think we can eliminate them, because they are used to determine
> if the agents are still alive for scheduling purposes. Did you have
> something else in mind to determine if an agent is alive?
>

Put each agent in a tooz[1] group; have each agent periodically
heartbeat[2], have whoever needs to schedule read the active members of
that group (or use [3] to get notified via a callback), profit...

Pick from your favorite (supporting) driver at:

http://docs.openstack.org/developer/tooz/compatibility.html

[1] http://docs.openstack.org/developer/tooz/compatibility.html#grouping
[2] https://github.com/openstack/tooz/blob/0.13.1/tooz/coordination.py#L315
[3] http://docs.openstack.org/developer/tooz/tutorial/group_membership.html#watching-group-changes
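For anyone unfamiliar with what group membership buys you here, below is a
minimal pure-Python sketch of the liveness model tooz provides: members join
a named group, heartbeat periodically, and the scheduler reads the set of
members whose heartbeat is still fresh. This is an illustration of the
concept only, not the tooz API; the class and method names are made up.

```python
import time

class GroupMembership:
    """Toy model of tooz-style group membership with heartbeats."""

    def __init__(self, timeout=5.0):
        self.timeout = timeout        # seconds before a silent member is dead
        self._last_beat = {}          # member_id -> timestamp of last beat

    def join_group(self, member_id, now=None):
        # Joining counts as an initial heartbeat.
        self._last_beat[member_id] = time.time() if now is None else now

    def heartbeat(self, member_id, now=None):
        self._last_beat[member_id] = time.time() if now is None else now

    def get_members(self, now=None):
        # Only members whose last heartbeat is within the timeout are alive;
        # the scheduler never needs per-agent timestamps in its own DB.
        now = time.time() if now is None else now
        return {m for m, t in self._last_beat.items()
                if now - t <= self.timeout}

# Two agents join; only the one that keeps heartbeating stays schedulable.
group = GroupMembership(timeout=5.0)
group.join_group('l3-agent-1', now=0.0)
group.join_group('l3-agent-2', now=0.0)
group.heartbeat('l3-agent-1', now=10.0)     # agent 2 went silent
print(sorted(group.get_members(now=12.0)))  # → ['l3-agent-1']
```

With a real tooz driver the timestamps live in the backend (ZooKeeper,
memcached, redis, ...) rather than in the Neutron DB, which is where the
scaling argument comes from.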




Re: [openstack-dev] [neutron] Neutron scaling datapoints?

2015-04-11 Thread Kevin Benton
The TCP/IP stack keeps track of connections as a combination of IP address
and TCP port. The two-byte port limit doesn't matter unless all of the
agents connect from the same IP address, which shouldn't be the case unless
the compute nodes reach the rabbitmq server through a single IP address
performing port address translation.

Either way, the agents don't connect directly to the Neutron server; they
connect to the RabbitMQ cluster. Since as many Neutron server processes
can be launched as necessary, the bottlenecks will likely show up at the
messaging or DB layer.
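The 4-tuple point can be made concrete with a tiny sketch (all addresses
here are made up for illustration):

```python
# A TCP connection is identified by the 4-tuple
# (client_ip, client_port, server_ip, server_port), so the two-byte
# port field only caps connections *per client IP*, not per server.
server = ('10.0.0.1', 5672)  # illustrative rabbitmq listener address

# 100k agents on distinct compute-node IPs, all using the very same
# client-side source port, still form 100k distinct connections.
conns = {('10.%d.%d.%d' % (i // 65536, (i // 256) % 256, i % 256), 40000)
         + server
         for i in range(100000)}
print(len(conns))  # → 100000
```

Only when many agents sit behind one NAT'd address does the 64k source-port
space become the binding constraint.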

On Sat, Apr 11, 2015 at 6:46 PM, joehuang  wrote:

>  Since Kevin is talking about agents, I want to point out that in the
> TCP/IP stack a port (not a Neutron port) is a two-byte field, i.e. ports
> range from 0 to 65535, for a maximum of 64k port numbers.
>
>
>
> " above 100k managed node " means more than 100k L2 agents/L3 agents...
> will be alive under Neutron.
>
>
>
> I would like to see the detailed design for how to support scaling
> Neutron this way with 99.9% confidence; a PoC and testing would lend good
> support to this idea.
>
>
>
> "I'm 99.9% sure, for scaling above 100k managed node,
> we do not really need to split the openstack to multiple smaller openstack,
> or use significant number of extra controller machine."
>
>
>
> Best Regards
>
>
>
> Chaoyi Huang ( joehuang )
>
>
>  --
> *From:* Kevin Benton [blak...@gmail.com]
> *Sent:* 11 April 2015 12:34
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [neutron] Neutron scaling datapoints?
>
>   Which periodic updates did you have in mind to eliminate? One of the
> few remaining ones I can think of is sync_routers but it would be great if
> you can enumerate the ones you observed because eliminating overhead in
> agents is something I've been working on as well.
>
>  One of the most common is the heartbeat from each agent. However, I
> don't think we can eliminate them, because they are used to determine if
> the agents are still alive for scheduling purposes. Did you have something
> else in mind to determine if an agent is alive?
>
> On Fri, Apr 10, 2015 at 2:18 AM, Attila Fazekas 
> wrote:
>
>> I'm 99.9% sure, for scaling above 100k managed node,
>> we do not really need to split the openstack to multiple smaller
>> openstack,
>> or use significant number of extra controller machine.
>>
>> The problem is that OpenStack uses the right tools (SQL/AMQP/zk), but in
>> the wrong way.
>>
>> For example:
>> Periodic updates can be avoided in almost all cases.
>>
>> New data can be pushed to the agent just when it is needed. The agent can
>> know when the AMQP connection becomes unreliable (queue or connection
>> loss) and then do a full sync.
>> https://bugs.launchpad.net/neutron/+bug/1438159
>>
>> Also, when the agents get a notification, they start asking for details
>> via AMQP -> SQL. Why don't they already know the details, or get them
>> with the notification?
>>
>>
>> - Original Message -
>> > From: "Neil Jerram" 
>>  > To: "OpenStack Development Mailing List (not for usage questions)" <
>> openstack-dev@lists.openstack.org>
>> > Sent: Thursday, April 9, 2015 5:01:45 PM
>> > Subject: Re: [openstack-dev] [neutron] Neutron scaling datapoints?
>> >
>> > Hi Joe,
>> >
>> > Many thanks for your reply!
>> >
>> > On 09/04/15 03:34, joehuang wrote:
>> > > Hi, Neil,
>> > >
>> > >  From theoretic, Neutron is like a "broadcast" domain, for example,
>> > >  enforcement of DVR and security group has to touch each regarding
>> host
>> > >  where there is VM of this project resides. Even using SDN
>> controller, the
>> > >  "touch" to regarding host is inevitable. If there are plenty of
>> physical
>> > >  hosts, for example, 10k, inside one Neutron, it's very hard to
>> overcome
>> > >  the "broadcast storm" issue under concurrent operation, that's the
>> > >  bottleneck for scalability of Neutron.
>> >
>> > I think I understand that in general terms - but can you be more
>> > specific about the broadcast storm?  Is there one particular message
>> > exchange that involves broadcasting?  Is it only from the server to
>> > agents, or are there 'broadcasts' in other directions as well?
>> >
>> > (I presume you are talking about control plane messages here, i.e.
>> > between Neutron components.  Is that right?  Obviously there can also be
>> > broadcast storm problems in the data plane - but I don't think that's
>> > what you are talking about here.)
>> >
>> > > We need layered architecture in Neutron to solve the "broadcast
>> domain"
>> > > bottleneck of scalability. The test report from OpenStack cascading
>> shows
>> > > that through layered architecture "Neutron cascading", Neutron can
>> > > supports up to million level ports and 100k level physical hosts. You
>> can
>> > > find the report here:
>> > >
>> http://www.slideshare.net/JoeHuang7/test-report-for-open-stack-cascading-solution-to-support-1-million-v-ms-in-100-data-centers
>> >
>

Re: [openstack-dev] [neutron] Neutron scaling datapoints?

2015-04-11 Thread joehuang
Since Kevin is talking about agents, I want to point out that in the TCP/IP
stack a port (not a Neutron port) is a two-byte field, i.e. ports range from
0 to 65535, for a maximum of 64k port numbers.



" above 100k managed node " means more than 100k L2 agents/L3 agents... will be 
alive under Neutron.



I would like to see the detailed design for how to support scaling Neutron
this way with 99.9% confidence; a PoC and testing would lend good support to
this idea.



"I'm 99.9% sure, for scaling above 100k managed node,
we do not really need to split the openstack to multiple smaller openstack,
or use significant number of extra controller machine."



Best Regards



Chaoyi Huang ( joehuang )




From: Kevin Benton [blak...@gmail.com]
Sent: 11 April 2015 12:34
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] Neutron scaling datapoints?

Which periodic updates did you have in mind to eliminate? One of the few 
remaining ones I can think of is sync_routers but it would be great if you can 
enumerate the ones you observed because eliminating overhead in agents is 
something I've been working on as well.

One of the most common is the heartbeat from each agent. However, I don't
think we can eliminate them, because they are used to determine if the
agents are still alive for scheduling purposes. Did you have something else
in mind to determine if an agent is alive?

On Fri, Apr 10, 2015 at 2:18 AM, Attila Fazekas 
mailto:afaze...@redhat.com>> wrote:
I'm 99.9% sure, for scaling above 100k managed node,
we do not really need to split the openstack to multiple smaller openstack,
or use significant number of extra controller machine.

The problem is that OpenStack uses the right tools (SQL/AMQP/zk), but in the
wrong way.

For example:
Periodic updates can be avoided in almost all cases.

New data can be pushed to the agent just when it is needed. The agent can
know when the AMQP connection becomes unreliable (queue or connection loss)
and then do a full sync.
https://bugs.launchpad.net/neutron/+bug/1438159

Also, when the agents get a notification, they start asking for details via
AMQP -> SQL. Why don't they already know the details, or get them with the
notification?
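The push-plus-resync model described here can be sketched in a few lines.
This is purely illustrative: Neutron's RPC layer does not work this way
today, and the sequence-number scheme for spotting lost messages is an
assumption, not anything in the bug report above.

```python
class Agent:
    """Sketch of an event-driven agent: deltas arrive with the
    notification itself; a full resync happens only when the agent
    notices its connection dropped or a message was lost."""

    def __init__(self):
        self.state = {}        # locally cached view of server-side state
        self.last_seq = 0      # last notification sequence number seen
        self.full_syncs = 0    # counts expensive full resyncs

    def on_notification(self, seq, delta):
        if seq != self.last_seq + 1:   # gap => a notification was lost
            self.full_sync()
        else:
            self.state.update(delta)   # delta carried in the message itself
        self.last_seq = seq

    def on_connection_lost(self):
        # Queue/connection loss means deltas may have been missed.
        self.full_sync()

    def full_sync(self):
        self.full_syncs += 1           # a real agent would re-fetch
                                       # everything from the server here

agent = Agent()
agent.on_notification(1, {'router-1': 'up'})
agent.on_notification(2, {'router-2': 'up'})
agent.on_notification(4, {'router-3': 'up'})  # seq 3 lost -> resync
print(agent.full_syncs)  # → 1
```

The point being: the periodic heartbeat/sync cost is replaced by work that
is only done when something actually went wrong.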


- Original Message -
> From: "Neil Jerram" 
> mailto:neil.jer...@metaswitch.com>>
> To: "OpenStack Development Mailing List (not for usage questions)" 
> mailto:openstack-dev@lists.openstack.org>>
> Sent: Thursday, April 9, 2015 5:01:45 PM
> Subject: Re: [openstack-dev] [neutron] Neutron scaling datapoints?
>
> Hi Joe,
>
> Many thanks for your reply!
>
> On 09/04/15 03:34, joehuang wrote:
> > Hi, Neil,
> >
> >  From theoretic, Neutron is like a "broadcast" domain, for example,
> >  enforcement of DVR and security group has to touch each regarding host
> >  where there is VM of this project resides. Even using SDN controller, the
> >  "touch" to regarding host is inevitable. If there are plenty of physical
> >  hosts, for example, 10k, inside one Neutron, it's very hard to overcome
> >  the "broadcast storm" issue under concurrent operation, that's the
> >  bottleneck for scalability of Neutron.
>
> I think I understand that in general terms - but can you be more
> specific about the broadcast storm?  Is there one particular message
> exchange that involves broadcasting?  Is it only from the server to
> agents, or are there 'broadcasts' in other directions as well?
>
> (I presume you are talking about control plane messages here, i.e.
> between Neutron components.  Is that right?  Obviously there can also be
> broadcast storm problems in the data plane - but I don't think that's
> what you are talking about here.)
>
> > We need layered architecture in Neutron to solve the "broadcast domain"
> > bottleneck of scalability. The test report from OpenStack cascading shows
> > that through layered architecture "Neutron cascading", Neutron can
> > supports up to million level ports and 100k level physical hosts. You can
> > find the report here:
> > http://www.slideshare.net/JoeHuang7/test-report-for-open-stack-cascading-solution-to-support-1-million-v-ms-in-100-data-centers
>
> Many thanks, I will take a look at this.
>
> > "Neutron cascading" also brings extra benefit: One cascading Neutron can
> > have many cascaded Neutrons, and different cascaded Neutron can leverage
> > different SDN controller, maybe one is ODL, the other one is OpenContrail.
> >
> > Cascading Neutron---
> >  / \
> > --cascaded Neutron--   --cascaded Neutron-
> > |  |
> > -ODL--   OpenContrail
> >
> >
> > And furthermore, if using Neutron cascading in multiple data centers, the
> > DCI controller (Data center inter-connection controller) can also be used
> > under cascading Neutron, to provide NaaS ( network as a service ) across
> > data centers.
> >
> > ---Cascading Neutron--

Re: [openstack-dev] [neutron] Neutron scaling datapoints?

2015-04-11 Thread Joshua Harlow

Kevin Benton wrote:


One of the most common is the heartbeat from each agent. However, I
don't think we can eliminate them, because they are used to determine
if the agents are still alive for scheduling purposes. Did you have
something else in mind to determine if an agent is alive?


Put each agent in a tooz[1] group; have each agent periodically 
heartbeat[2], have whoever needs to schedule read the active members of 
that group (or use [3] to get notified via a callback), profit...


Pick from your favorite (supporting) driver at:

http://docs.openstack.org/developer/tooz/compatibility.html

[1] http://docs.openstack.org/developer/tooz/compatibility.html#grouping
[2] https://github.com/openstack/tooz/blob/0.13.1/tooz/coordination.py#L315
[3] 
http://docs.openstack.org/developer/tooz/tutorial/group_membership.html#watching-group-changes





Re: [openstack-dev] [cinder] volume driver for Blockbridge EPS backend

2015-04-11 Thread Duncan Thomas
On 11 Apr 2015 02:04, "Jeremy Stanley"  wrote:
>
> On 2015-04-11 01:28:51 +0300 (+0300), Duncan Thomas wrote:
> [...]
> > We will not be merging any drivers without stable 3rd party CI in
> > future
> [...]
>
> For clarity, hopefully this is "stable testing reporting on changes
> to the project" in general, and not "3rd party CI" specifically.

Absolutely - we don't mind if it is hosted by infra or elsewhere - indeed
we recently opened up a route via which infra hosted CI jobs can become
voting. I'll try to remember to point this option out in future, but please
do continue to shout up if I don't!




Re: [openstack-dev] What's Up Doc? Apr 10 2015

2015-04-11 Thread Monty Taylor
Sorry for top posting - I wasn't subscribed to the doc list before
clarkb told me about this thread. Warning ... rage coming ... if you
don't want to read rage on a Saturday, I recommend skipping this email.

a) There may be a doc bug here, but I'm not 100% convinced it's a doc
bug - I'll try to characterize it in this way:

"As a user, I do not know what version of glance I am or should be
interacting with"

The part of this that is about the default version python-glanceclient may
or may not use, and what version you may or may not need to provide on the
command line, is a badness I'll get to in a second - but a clear "so you
want to upload an image, here's what you need to know" is, I think, what
Bernd was looking for.
b) Glance is categorically broken in all regards related to this topic.
This thing is the most painful and most broken of everything that exists
in OpenStack. It is the source of MONTHS of development to deal with it
in Infra, and even the workarounds are terrible.

Let me expand:

glance image-upload MAY OR MAY NOT work on your cloud, and there is
absolutely no way you as a user can tell. You just have to try and find out.

IF glance image-upload does not work for you, it may be because of two
things, neither of which are possible for you as a user to find out:

Either:

- Your cloud has decided to not enable image upload permissions in their
policy.json file, which is a completely opaque choice that you as a user
have no way of finding out. If this is the case you have no recourse, sorry.
- Your cloud has deployed a recent glance and has configured it for
glance v2 and has configured it in the policy.json file to ONLY allow v2
and to disallow image-upload

If the second is true, which you have no way to discover except for
trying, what you need to do is:

- upload the image to swift
- glance task-create --type=import --input='{"import_from":
"$PATH_TO_IMAGE_IN_SWIFT", "image_properties" : {"name": "Human Readable
Image Name"}}'

Yes, you do have to pass JSON on the command line, because BONGHITS (/me
glares at the now absent Brian Waldon with withering disdain for having
inflicted such an absolutely craptastic API on the world.)

Then, you need to poll glance task-status for the status of the
import_from task until your image has imported.
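That polling step might look like the sketch below. Treat it as a hedged
illustration: `client.tasks.get` stands in for python-glanceclient's v2
tasks API rather than a verified call signature, though the
'success'/'failure' statuses do match the documented v2 task states.

```python
import time

def wait_for_import(client, task_id, interval=5, timeout=600):
    """Poll a glance v2 import task until it succeeds or fails.

    `client.tasks.get(task_id)` is a stand-in for the real tasks API;
    here it is assumed to return a dict with a 'status' key.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        task = client.tasks.get(task_id)
        if task['status'] == 'success':
            return task
        if task['status'] == 'failure':
            raise RuntimeError('import failed: %s' % task.get('message'))
        time.sleep(interval)  # still 'pending' or 'processing'
    raise RuntimeError('timed out waiting for task %s' % task_id)
```

Which is a lot of ceremony for "upload an image" - exactly the logic the
client should be hiding, per (c) below.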

c) The python-glanceclient command line client should encapsulate that
ridiculous logic for you, but it does not

d) It should be possible to discover from the cloud which of the
approaches you should take, but it isn't

Now - I'm honestly not sure how far the docs team should take working
around this - because fully describing how to successfully upload an
image without resorting to calling people names is impossible - but is
it really the Docs team job to make an impossible API seem user
friendly? Or, should we not treat this as a docs bug and instead treat
it as a Glance bug and demand a v3 API that rolls back the task interface?

I vote for the latter.

BTW - the shade library encodes as much of the logic above as it can.
That it exists makes me sad.

Monty

On Sat, Apr 11, 2015 at 10:50 AM, Matt Kassawara 
wrote:

> Sounds like a problem with one or more packages (perhaps
> python-glanceclient?) because that command using the source version (not
> packages) returns the normal list of help items. Maybe try the source
> version using "pip install python-glanceclient"?
>
> On Sat, Apr 11, 2015 at 5:55 AM, Bernd Bausch 
> wrote:
>
>> glance help image-create. Sorry for being vague.
>>
>> When running glance with the parameters from the install guide (the trunk
>> version), I am told that I am not doing it correctly; I don’t have the
>> precise message handy.
>>
>>
>>
>> My fear is that I will hit similar problems later. You solving the
>> problem would be nice but not enough :)
>>
>>
>>
>> *From:* Matt Kassawara [mailto:mkassawara at gmail.com]
>> *Sent:* Saturday, April 11, 2015 1:59 PM
>> *To:* Bernd Bausch
>> *Cc:* openstack-docs at lists.openstack.org
>>
>> *Subject:* Re: [OpenStack-docs] [install-guide] RE: What's Up Doc? Apr
>> 10 2015
>>
>>
>>
>> When you run "glance help image-create" or just "glance image-create"
>> with no arguments?
>>
>>
>>
>> On Fri, Apr 10, 2015 at 11:45 PM, Bernd Bausch 
>> wrote:
>>
>> This is what I get when running glance image-create:
>>
>>
>>
>> usage: glance image-create [--property <key=value>] [--file <FILE>]
>>                            [--progress]
>>
>> Create a new image.
>>
>> Positional arguments:
>>   Please run with connection parameters set to retrieve the schema
>>   for generating help for this command
>>
>> So I wonder how I can get to the bottom of this.
>>
>>
>>
>> *From:* Matt Kassawara [mailto:mkassawara at gmail.com]
>> *Sent:* Saturday, April 11, 2015 1:39 PM
>> *To:* Bernd Bausch; openstack-docs at lists.openstack.org
>> *Subject:* Re: [OpenStack-docs] [ins

Re: [openstack-dev] [infra] request to disable xenserver CI account

2015-04-11 Thread Bob Ball
Sorry all for this breakage; I've been on vacation and didn't set up adequate 
cover.

Thanks for disabling it and we'll let everyone know when the job is ready to 
start commenting and hopefully voting on changes in the very near future.

Regards,

Bob

From: Matthew Treinish [mtrein...@kortar.org]
Sent: 09 April 2015 23:17
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [infra] request to disable xenserver CI account

On Fri, Apr 10, 2015 at 08:06:01AM +1000, Ian Wienand wrote:
> On 04/10/2015 07:13 AM, Matt Riedemann wrote:
> >The XenServer/XenProject third party CI job has been voting -1 on
> >nova changes for over 24 hours without a response from the
> >maintainers so I'd like to request that we disable for now while
> >it's being worked since it's a voting job and causing noise at kind
> >of a hairy point in the release.
>
> Can we also please do the same for devstack in [1]; it's constantly
> giving negative verified so it makes it hard to go through the review
> list and decide what needs attention.
>
> I will also update the status on [2]
>

I just took care of doing this on both devstack and tempest, so the CI
shouldn't be leaving any more -1s in reviews.

-Matt Treinish



Re: [openstack-dev] PTL Voting is now open

2015-04-11 Thread Tristan Cacqueray
Attention voters, ballots have been sent to one of your Gerrit
additional E-mail Addresses. Due to this error we must cancel this
election and start a new one. Any vote that has already been cast is
null and void.

The title of the new poll is changed to "New OpenStack ... PTL Election"
and all votes must be re-submitted using the new ballots sent to your
Gerrit Preferred E-mail Address.

The end date is now extended by one day so you can vote until
13:00 UTC April 17, 2015.

Please accept my apologies for any inconvenience this error may have caused.

Tristan

On 04/10/2015 01:01 PM, Tristan Cacqueray wrote:
> Elections are underway and will remain open for you to cast your vote
> until 13:00 UTC April 16, 2015
> 
> We are having elections for Nova and Glance.
> 
> If you are a Foundation individual member and had a commit in one of the
> program's projects[0] over the Juno-Kilo timeframe (April 9, 2014 06:00
> UTC to April 9, 2015 05:59 UTC) then you are eligible to vote. You
> should find your email with a link to the Condorcet page to cast your
> vote in the inbox of your gerrit preferred email[1].
> 
> What to do if you don't see the email and have a commit in at least one
> of the programs having an election:
>   * check the trash or spam folders of your gerrit Preferred Email
> address, in case it went into trash or spam
>   * wait a bit and check again, in case your email server is a bit slow
>   * find the sha of at least one commit from the program project
> repos[0] and email me and Elizabeth[2] at the below email addresses. If
> we can confirm that you are entitled to vote, we will add you to the
> voters list for the appropriate election.
> 
> Our democratic process is important to the health of OpenStack, please
> exercise your right to vote.
> 
> Candidate statements/platforms can be found linked to Candidate names on
> this page:
> https://wiki.openstack.org/wiki/PTL_Elections_April_2015#Confirmed_candidates:
> 
> Happy voting,
> Tristan (tristanC)
> 
> [0] The list of the program projects eligible for electoral status:
> https://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml?id=april-2015-elections
> 
> [1] Sign into review.openstack.org:
> Go to Settings > Contact Information.
> Look at the email listed as your Preferred Email.
> That is where the ballot has been sent.
> 
> [2] Elizabeth's email: lyz at princessleia dot com
> Tristan's email: tristan dot cacqueray at enovance dot com
> 
> 
> 
> 






[openstack-dev] TripleO: CI down... SSL cert expired

2015-04-11 Thread Dan Prince
Looks like our SSL certificate has expired for the currently active CI
cloud. We are working on getting a new one generated and installed.
Until then CI jobs won't get processed.

Dan




Re: [openstack-dev] Multi Region Designate

2015-04-11 Thread Anik
Hi Kiall,
Thanks for getting back. 
Yes, I understand that Designate is providing the API interface to push data 
into a DNS namespace, so not really related to the region concept in the same 
way as nova or most other OpenStack services. 
I think the problem I am highlighting here is that of making updates for zone 
data to an authoritative DNS server from distributed sources. 
Take the example of a company which has deployed their resources across 
multiple OpenStack regions. Now they just want to have a flat DNS namespace 
(say example.com). They will need to enter host - IP mapping data to some 
authoritative back end DNS server for this purpose. The records for this zone 
are being created from multiple regions either through static data entry 
through designate or via notification handlers from OpenStack events like FIP 
creation.
So my question is: can we view Designate simply as a data entry vehicle with 
an API front end, where a single (centralized) backend DNS server is fed 
data from multiple Designate instances? That way, different Designate 
instances in different regions can generate their local RRs for a zone 
(example.com) and point to the same backend DNS server for populating the 
zone file. Once the data goes into the centralized backend DNS, it becomes 
the responsibility of the DNS infrastructure to serve the DNS data globally, 
in a distributed, scaled-out manner, for lookups.
The problem that still needs to be solved is how to create major DNS 
entries, like a new zone, with this approach. I will try to describe a 
solution for that in a follow-up email, but wanted to get your opinion first 
on what I have described here so far.
Regards, Anik 

--

Message: 1
Date: Tue, 07 Apr 2015 13:00:33 +0100
From: Kiall Mac Innes 
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Multi Region Designate
Message-ID: <5523c6e1.9090...@macinnes.ie>
Content-Type: text/plain; charset=windows-1252

Hey Anik,

So, unlike Nova or other services which really are "region" aware,
Designate, being designed to push data into the global DNS namespace,
doesn't have the same concept of regions.

Typically, you will either have regions which are "close enough" to run
a Galera/Percona cluster across them without adding too much latency, or
you will run asynchronous replication from one region to another using
an Active/Standby failover for the core DB.

The DNS team @ HP has discussed possible improvements to this many times
over the last year or so, but haven't come up with any great solutions
for providing what amounts to a global service in a per-region way. We're
certainly open to suggestions! :)

Thanks,
Kiall

On 23/03/15 04:41, Anik wrote:
> Hi,
> 
> Are there any plans to have multi region DNS service through designate ?
> 
> For example If a tenant has projects in multiple regions and wants to
> use a single (flat) external domain name space for floating IPs, what is
> the proposed solution for such a use case using Designate ?
>  
> Regards,
> Anik
> 
> 
> 


