Re: [Openstack-operators] New project creation fails because of a Nova check in a multi-region cloud

2018-05-10 Thread Jean-Philippe Méthot


> On 11 May 2018, at 08:36, Matt Riedemann  wrote:
> 
> On 5/10/2018 6:30 PM, Jean-Philippe Méthot wrote:
>> 1. I was talking about the region-name parameter underneath 
>> keystone_authtoken. That is in the pike doc you linked, but I am unaware if 
>> this is only used for token generation or not. Anyhow, it doesn’t seem to 
>> have any impact on the issue at hand.
> 
> The [keystone]/region_name config option in nova is used to pick the identity 
> service endpoint, so I think in that case region_name will matter if there are 
> multiple identity endpoints in the service catalog. The only thing is you're 
> on pike where [keystone]/region_name isn't in nova.conf and it's not used; it 
> was added in queens for this lookup:
> 
> https://review.openstack.org/#/c/507693/
> 
> So that might be why it doesn't seem to make a difference if you set it in 
> nova.conf - because the nova code isn't actually using it.
> 

I was talking about the parameter under [keystone_authtoken] 
([keystone_authtoken]/region_name), not the new one under [keystone] 
([keystone]/region_name). It seems we were talking about different 
parameters, which explains the confusion.
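
(For reference, the two options being conflated here would look like this side
by side; this is a sketch only, since [keystone]/region_name exists from
Queens onward:

[keystone_authtoken]
# used by the keystonemiddleware auth middleware when locating keystone
region_name = RegionOne

[keystone]
# used by nova itself to pick the identity endpoint, Queens onward
region_name = RegionOne
)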


> You could try backporting that patch into your pike deployment, set 
> region_name to RegionOne and see if it makes a difference (although I thought 
> RegionOne was the default if not specified?).

I will attempt this next week and will update if I run into any issues. Also, from 
experience, most OpenStack services seem to pick a random endpoint when 
region_name isn't specified in a multi-region cloud. I've seen that several 
times ever since I built and started maintaining this infrastructure.
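
(For anyone who wants to check what a service can choose from, the catalog can
be inspected with, e.g.:

openstack endpoint list --service identity

which shows the identity endpoints in each region.)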



Re: [Openstack-operators] New project creation fails because of a Nova check in a multi-region cloud

2018-05-10 Thread Matt Riedemann

On 5/10/2018 6:30 PM, Jean-Philippe Méthot wrote:
1. I was talking about the region-name parameter underneath 
keystone_authtoken. That is in the pike doc you linked, but I am unaware 
if this is only used for token generation or not. Anyhow, it doesn’t 
seem to have any impact on the issue at hand.


The [keystone]/region_name config option in nova is used to pick the 
identity service endpoint, so I think in that case region_name will matter 
if there are multiple identity endpoints in the service catalog. The 
only thing is you're on pike where [keystone]/region_name isn't in 
nova.conf and it's not used; it was added in queens for this lookup:


https://review.openstack.org/#/c/507693/

So that might be why it doesn't seem to make a difference if you set it 
in nova.conf - because the nova code isn't actually using it.


You could try backporting that patch into your pike deployment, set 
region_name to RegionOne and see if it makes a difference (although I 
thought RegionOne was the default if not specified?).


--

Thanks,

Matt



Re: [Openstack-operators] New project creation fails because of a Nova check in a multi-region cloud

2018-05-10 Thread Jean-Philippe Méthot



>> 
>>> I currently operate a multi-region cloud split between 2 geographic
>>> locations. I have updated it to Pike not too long ago, but I've been
>>> running into a peculiar issue. Ever since the Pike release, Nova now
>>> asks Keystone if a new project exists in Keystone before configuring
>>> the project’s quotas. However, there doesn’t seem to be any region
>>> restriction regarding which endpoint Nova will query Keystone on. So,
>>> right now, if I create a new project in region one, Nova will query
>>> Keystone in region two. Because my keystone databases are not synched
>>> in real time between each region, the region two Keystone will tell
>>> it that the new project doesn't exist, while it exists in region one
>>> Keystone.
> Are both keystone nodes completely separate? Do they share any information?

I share the DB information between both. In our use case, we very rarely make 
changes to keystone (password change, user creation, project creation) and 
there is a limited number of people who even have access to it, so I can get 
away with having my main DB in region 1 and hosting an exact copy in region 2. 
The original idea was to have a mysql slave in region 2, but that failed and we 
decided to go with manually replicating the keystone DB whenever we would make 
changes. This means I have the same users and projects in both regions, which 
is exactly what I want right now for my specific use case. Of course, that also 
means I only do operations in keystone in Region 1 and never from Region 2 to 
prevent discrepancies.
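
(Purely as an illustration of that manual copy, assuming MySQL and a database
named "keystone" -- the hostname here is made up:

mysqldump --single-transaction keystone | \
    mysql -h region2-db.example.com keystone

run from the region 1 database host after each change.)
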
>>> 
>>> Thinking that this could be a configuration error, I tried setting
>>> the region_name in keystone_authtoken, but that didn’t change much of
>>> anything. Right now I am thinking this may be a bug. Could someone
>>> confirm that this is indeed a bug and not a configuration error?
>>> 
>>> To circumvent this issue, I am considering either modifying the
>>> database by hand or trying to implement realtime replication between
>>> both Keystone databases. Would there be another solution? (beside
>>> modifying the code for the Nova check)
> A variant of this just came up as a proposal for the Forum in a couple
> weeks [0]. A separate proposal was also discussed during this week's
> keystone meeting [1], which brought up an interesting solution. We
> should be seeing a specification soon that covers the proposal in
> greater detail and includes use cases. Either way, both sound like they
> may be relevant to you.
> 
> [0] https://etherpad.openstack.org/p/YVR-edge-keystone-brainstorming 
> 
> [1]
> http://eavesdrop.openstack.org/meetings/keystone/2018/keystone.2018-05-08-16.00.log.html#l-156
>  
> 

This is interesting. Unfortunately I will not be in Vancouver, but I will keep 
an eye on it in the future. I will need to find a way to solve the current 
issue at hand shortly though.

>> 
>> This is the specific code you're talking about:
>> 
>> https://github.com/openstack/nova/blob/stable/pike/nova/api/openstack/identity.py#L35
>> 
>> 
>> I don't see region_name as a config option for talking to keystone in
>> Pike:
>> 
>> https://docs.openstack.org/nova/pike/configuration/config.html#keystone
>> 
>> But it is in Queens:
>> 
>> https://docs.openstack.org/nova/queens/configuration/config.html#keystone
>> 
>> That was added in this change:
>> 
>> https://review.openstack.org/#/c/507693/
>> 
>> But I think what you're saying is, since you have multiple regions,
>> the project could be in any of them at any given time until they
>> synchronize, so configuring nova for a specific region probably isn't
>> going to help in this case, right?
>> 
>> Isn't this somehow resolved with keystone federation? Granted, I'm not
>> at all a keystone person, but I'd think this isn't a unique problem.
> Without knowing a whole lot about the current setup, I'm inclined to say
> it is. Keystone-to-keystone federation was developed for cases like
> this, and it's something we've been trying to encourage instead of
> building replication tooling outside of the database or over an API. The
> main concerns with taking a manual replication approach are that it could
> negatively impact overall performance and that keystone already assumes
> it will be in control of ID generation in most cases (replicating a
> project in RegionOne into RegionTwo will yield a different project ID,
> even though it is possible for both to have the same name).
> Additionally, there are some things that keystone doesn't expose over
> the API that would need to be replicated, like revocation events (I
> mentioned this in the etherpad linked above).

To answer the questions of both posts:

1. I was talking about the region-name parameter underneath keystone_authtoken. 
That is in the pike doc you linked, but I am unaware if this is only used for 
token generation or not. Anyhow, it doesn't seem to have any impact on the 
issue at hand.

Re: [Openstack-operators] Octavia on ocata centos 7

2018-05-10 Thread Ignazio Cassano
Many thanks for your help.
Ignazio

On Thu 10 May 2018, 21:05, iain MacDonnell  wrote:

>
>
> On 05/10/2018 10:45 AM, Ignazio Cassano wrote:
> > I am moving from lbaas v2 based on haproxy driver to octavia on centos 7
> > ocata.
> [snip]
> > On the octavia server all services are active, amphora images are
> > installed, but when I try to create a load balancer:
> >
> > neutron lbaas-loadbalancer-create --name lb1 private-subnet
> >
> > it tries to connect to 127.0.0.1:5000
>
> Google found:
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1434904 =>
> https://bugzilla.redhat.com/show_bug.cgi?id=1433728
>
> Seems that you may be missing the service_auth section from
> neutron_lbaas.conf and/or octavia.conf?
>
> I've been through the frustration of trying to get Octavia working. The
> docs are a bit iffy, and it's ... "still maturing" (from my observation).
>
> I think I did have it working with neutron_lbaasv2 at one point. My
> neutron_lbaas.conf included:
>
> [service_auth]
> auth_url = http://mykeystonehost:35357/v3
> admin_user = neutron
> admin_tenant_name = service
> admin_password = n0ttell1nU
> admin_user_domain = default
> admin_project_domain = default
> region = myregion
>
> and octavia.conf:
>
> [service_auth]
> memcached_servers = mymemcachedhost:11211
> auth_url = http://mykeystonehost:35357
> auth_type = password
> project_domain_name = default
> project_name = service
> user_domain_name = default
> username = octavia
> password = n0ttell1nU
>
>
> Not sure how correct those are, but IIRC it did basically work.
>
> I've since moved to pure Octavia on Queens, where there is no
> neutron_lbaas.
>
> GL!
>
>  ~iain
>
>
>
>


Re: [Openstack-operators] Octavia on ocata centos 7

2018-05-10 Thread iain MacDonnell



On 05/10/2018 10:45 AM, Ignazio Cassano wrote:
I am moving from lbaas v2 based on haproxy driver to octavia on centos 7 
ocata.

[snip]
On the octavia server all services are active, amphora images are 
installed, but when I try to create a load balancer:


neutron lbaas-loadbalancer-create --name lb1 private-subnet

it tries to connect to 127.0.0.1:5000 


Google found:

https://bugzilla.redhat.com/show_bug.cgi?id=1434904 => 
https://bugzilla.redhat.com/show_bug.cgi?id=1433728


Seems that you may be missing the service_auth section from 
neutron_lbaas.conf and/or octavia.conf?


I've been through the frustration of trying to get Octavia working. The 
docs are a bit iffy, and it's ... "still maturing" (from my observation).


I think I did have it working with neutron_lbaasv2 at one point. My 
neutron_lbaas.conf included:


[service_auth]
auth_url = http://mykeystonehost:35357/v3
admin_user = neutron
admin_tenant_name = service
admin_password = n0ttell1nU
admin_user_domain = default
admin_project_domain = default
region = myregion

and octavia.conf:

[service_auth]
memcached_servers = mymemcachedhost:11211
auth_url = http://mykeystonehost:35357
auth_type = password
project_domain_name = default
project_name = service
user_domain_name = default
username = octavia
password = n0ttell1nU


Not sure how correct those are, but IIRC it did basically work.

I've since moved to pure Octavia on Queens, where there is no neutron_lbaas.

GL!

~iain






[Openstack-operators] Octavia on ocata centos 7

2018-05-10 Thread Ignazio Cassano
Hi everyone,
I am moving from lbaas v2 based on haproxy driver to octavia on centos 7
ocata.

I installed a new host with octavia following the documentation.
I removed all old load balancers, stopped lbaas agent and configured
neutron following this link:

https://docs.openstack.org/octavia/queens/contributor/guides/dev-quick-start.html


On the octavia server all services are active, amphora images are
installed, but when I try to create a load balancer:

neutron lbaas-loadbalancer-create --name lb1 private-subnet

it tries to connect to 127.0.0.1:5000

In both octavia.conf and neutron.conf, the keystone section is correctly
configured to reach the controller address.

The old LBaaS v2 setup based on the haproxy driver worked fine before I
changed the configuration, but it was not possible to protect LBaaS
addresses with security groups (this is a very old problem) because
security groups are applied only to VM ports.

Since the Octavia load balancer is based on a VM derived from the amphora
image, I'd like to use it to improve my security.

Any suggestions for my Octavia configuration, or alternatives to improve
security on LBaaS?

Thanks and Regards

Ignazio


Re: [Openstack-operators] New project creation fails because of a Nova check in a multi-region cloud

2018-05-10 Thread Lance Bragstad


On 05/10/2018 08:52 AM, Matt Riedemann wrote:
> On 5/9/2018 8:11 PM, Jean-Philippe Méthot wrote:
>> I currently operate a multi-region cloud split between 2 geographic
>> locations. I have updated it to Pike not too long ago, but I've been
>> running into a peculiar issue. Ever since the Pike release, Nova now
>> asks Keystone if a new project exists in Keystone before configuring
>> the project’s quotas. However, there doesn’t seem to be any region
>> restriction regarding which endpoint Nova will query Keystone on. So,
>> right now, if I create a new project in region one, Nova will query
>> Keystone in region two. Because my keystone databases are not synched
>> in real time between each region, the region two Keystone will tell
>> it that the new project doesn't exist, while it exists in region one
>> Keystone.
Are both keystone nodes completely separate? Do they share any information?
>>
>> Thinking that this could be a configuration error, I tried setting
>> the region_name in keystone_authtoken, but that didn’t change much of
>> anything. Right now I am thinking this may be a bug. Could someone
>> confirm that this is indeed a bug and not a configuration error?
>>
>> To circumvent this issue, I am considering either modifying the
>> database by hand or trying to implement realtime replication between
>> both Keystone databases. Would there be another solution? (beside
>> modifying the code for the Nova check)
A variant of this just came up as a proposal for the Forum in a couple
weeks [0]. A separate proposal was also discussed during this week's
keystone meeting [1], which brought up an interesting solution. We
should be seeing a specification soon that covers the proposal in
greater detail and includes use cases. Either way, both sound like they
may be relevant to you.

[0] https://etherpad.openstack.org/p/YVR-edge-keystone-brainstorming
[1]
http://eavesdrop.openstack.org/meetings/keystone/2018/keystone.2018-05-08-16.00.log.html#l-156
>
> This is the specific code you're talking about:
>
> https://github.com/openstack/nova/blob/stable/pike/nova/api/openstack/identity.py#L35
>
>
> I don't see region_name as a config option for talking to keystone in
> Pike:
>
> https://docs.openstack.org/nova/pike/configuration/config.html#keystone
>
> But it is in Queens:
>
> https://docs.openstack.org/nova/queens/configuration/config.html#keystone
>
> That was added in this change:
>
> https://review.openstack.org/#/c/507693/
>
> But I think what you're saying is, since you have multiple regions,
> the project could be in any of them at any given time until they
> synchronize, so configuring nova for a specific region probably isn't
> going to help in this case, right?
>
> Isn't this somehow resolved with keystone federation? Granted, I'm not
> at all a keystone person, but I'd think this isn't a unique problem.
Without knowing a whole lot about the current setup, I'm inclined to say
it is. Keystone-to-keystone federation was developed for cases like
this, and it's something we've been trying to encourage instead of
building replication tooling outside of the database or over an API. The
main concerns with taking a manual replication approach are that it could
negatively impact overall performance and that keystone already assumes
it will be in control of ID generation in most cases (replicating a
project in RegionOne into RegionTwo will yield a different project ID,
even though it is possible for both to have the same name).
Additionally, there are some things that keystone doesn't expose over
the API that would need to be replicated, like revocation events (I
mentioned this in the etherpad linked above).






Re: [Openstack-operators] Need feedback for nova aborting cold migration function

2018-05-10 Thread Matt Riedemann

On 5/9/2018 9:33 PM, saga...@nttdata.co.jp wrote:

We always do maintenance work at midnight during a limited time slot to 
minimize the impact on our users.


Also, why are you doing maintenance with cold migration? Why not do live 
migration for your maintenance (which already supports the abort function)?


--

Thanks,

Matt



[Openstack-operators] [forum] Etherpad for "Ops/Devs: One community" session

2018-05-10 Thread Thierry Carrez
Hi!

I have created an etherpad for the "Ops/Devs: One community" Forum
session that will happen in Vancouver on Monday at 4:20pm.

https://etherpad.openstack.org/p/YVR-ops-devs-one-community

If you are interested in continuing to break up the community silos and
making everyone "contributors" with various backgrounds but a single
objective, please add to it and join the session!

-- 
Thierry Carrez (ttx)



Re: [Openstack-operators] New project creation fails because of a Nova check in a multi-region cloud

2018-05-10 Thread Matt Riedemann

On 5/9/2018 8:11 PM, Jean-Philippe Méthot wrote:
I currently operate a multi-region cloud split between 2 geographic 
locations. I have updated it to Pike not too long ago, but I've been 
running into a peculiar issue. Ever since the Pike release, Nova now 
asks Keystone if a new project exists in Keystone before configuring the 
project’s quotas. However, there doesn’t seem to be any region 
restriction regarding which endpoint Nova will query Keystone on. So, 
right now, if I create a new project in region one, Nova will query 
Keystone in region two. Because my keystone databases are not synched in 
real time between each region, the region two Keystone will tell it that 
the new project doesn't exist, while it exists in region one Keystone.


Thinking that this could be a configuration error, I tried setting the 
region_name in keystone_authtoken, but that didn’t change much of 
anything. Right now I am thinking this may be a bug. Could someone 
confirm that this is indeed a bug and not a configuration error?


To circumvent this issue, I am considering either modifying the database 
by hand or trying to implement realtime replication between both 
Keystone databases. Would there be another solution? (beside modifying 
the code for the Nova check)


This is the specific code you're talking about:

https://github.com/openstack/nova/blob/stable/pike/nova/api/openstack/identity.py#L35
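
(Roughly speaking, that check boils down to an authenticated GET against
keystone's project resource, along these lines -- the endpoint URL is
illustrative:

curl -s -o /dev/null -w "%{http_code}" \
    -H "X-Auth-Token: $TOKEN" \
    http://keystone.example.com:5000/v3/projects/$PROJECT_ID

where 200 means the project exists and a 404 makes nova reject the quota
operation.)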

I don't see region_name as a config option for talking to keystone in Pike:

https://docs.openstack.org/nova/pike/configuration/config.html#keystone

But it is in Queens:

https://docs.openstack.org/nova/queens/configuration/config.html#keystone

That was added in this change:

https://review.openstack.org/#/c/507693/

But I think what you're saying is, since you have multiple regions, the 
project could be in any of them at any given time until they synchronize, 
so configuring nova for a specific region probably isn't going to help 
in this case, right?


Isn't this somehow resolved with keystone federation? Granted, I'm not 
at all a keystone person, but I'd think this isn't a unique problem.


--

Thanks,

Matt



Re: [Openstack-operators] Need feedback for nova aborting cold migration function

2018-05-10 Thread Takashi Natsume

Flint and Yukinori, Thank you for your replies!

On 2018/05/10 11:33, saga...@nttdata.co.jp wrote:

Hi Takashi, and guys,

We are operating large telco enterprise cloud.

We always do maintenance work at midnight during a limited time slot to 
minimize the impact on our users.

Operation planning for cold migration is difficult because the migration time 
varies drastically, as it also depends on the load on the storage servers at 
that point in time. If a cold migration task stalls for any unknown reason, 
operators may decide to cancel it manually. This requires several manual steps 
to recover from such a situation, such as killing the copy process, then 
reset-state, stop, and start of the VM. If we had the ability to cancel a cold 
migration, we could resume our service safely even when the migration does not 
complete within the stipulated maintenance time window.
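
(A rough sketch of that manual recovery sequence, with an illustrative
instance UUID; the exact copy process to kill depends on the deployment.)

# on the source compute node, kill the stalled disk copy process
pkill -f "<pattern matching the copy process>"
# then recover the instance through the API
nova reset-state --active $INSTANCE_UUID
nova stop $INSTANCE_UUID
nova start $INSTANCE_UUID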

As of today, we can solve the above issue by following a manual procedure to 
recover instances from cold migration failure, but we still need to follow 
these steps every time. We could build our own tool to automate this process, 
but we would need to maintain it ourselves, as this feature is not supported 
by any OpenStack distribution.

If Nova supported a function to cancel cold migration, it would definitely 
help us bring instances back from cold migration failures, thus improving 
service availability for our end users. Secondly, we would not need to worry 
about maintaining a procedure manual or a proprietary tool ourselves, which 
would be a huge win for us.

We are definitely interested in this function and we would love to see it in 
the coming release.

Thank you for your hard work.

--
Yukinori Sagara 
Platform Engineering Department, NTT DATA Corp.


Hi everyone,

I'm going to add a function to abort cold migration [1] in nova.
I would like to ask operators' feedback on this.

Cold migration is an administrator operation by default.
If an administrator performs a cold migration and it stalls,
users cannot perform their own operations (e.g. starting the VM).

In that case, if administrators can abort the cold migration by using
this function, users can operate their VMs again.

If any of the following describes you, please reply to this mail:

* Those who need this function
* Those who will use this function if it is implemented
* Those who think that it is better to have this function
* Those who are interested in this function

[1] https://review.openstack.org/#/c/334732/

Regards,
Takashi Natsume
NTT Software Innovation Center
E-mail: natsume.takashi at lab.ntt.co.jp


Regards,
Takashi Natsume
NTT Software Innovation Center
E-mail: natsume.taka...@lab.ntt.co.jp




Re: [Openstack-operators] [openstack-dev][heat][all] Heat now migrated to StoryBoard!!

2018-05-10 Thread Rico Lin
Hi all,
As we keep adding more info to the migration guideline [1], you may want to
take another look.
We do hope it will make things easier for you. If not, please find me on IRC
or by mail.

[1] https://etherpad.openstack.org/p/Heat-StoryBoard-Migration-Info

Here's a quick hint: your bug id is exactly your story id.
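
For example (bug number illustrative): Launchpad bug 1234567 becomes
https://storyboard.openstack.org/#!/story/1234567.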

2018-05-07 18:27 GMT+08:00 Rico Lin :

> Hi all,
>
> I have updated more information in the guideline [1].
> Please take a look at [1] to see what has been updated.
> We will likely keep updating that etherpad as new questions or issues come up.
>
> We will keep trying to make this process as painless for you as possible,
> so please bear with us for now, and sorry for any inconvenience.
>
> [1] https://etherpad.openstack.org/p/Heat-StoryBoard-Migration-Info
>
>
> 2018-05-05 12:15 GMT+08:00 Rico Lin :
>
>> looping heat-dashboard team
>>
>> 2018-05-05 12:02 GMT+08:00 Rico Lin :
>>
>>> Dear all Heat members and friends
>>>
>>> As you might be aware, OpenStack projects are scheduled to migrate ([5])
>>> from Launchpad to StoryBoard [1].
>>> For those who would like to know where to file a bug or blueprint, here is
>>> a heads-up for you.
>>>
>>> *What's StoryBoard?*
>>> StoryBoard is a cross-project task tracker. It contains a number of
>>> ``projects``, and each project contains a number of ``stories``, which you
>>> can think of as issues or blueprints. Each story contains one or more
>>> ``tasks`` (tasks break a story down into the pieces needed to
>>> resolve/implement it). To learn more about StoryBoard or how to write a
>>> good story, see [6].
>>>
>>> *How to file a bug?*
>>> This is actually simple: use your current Ubuntu One id to log in to
>>> StoryBoard. Then find the corresponding project in [2] and create a story
>>> in it with a description of your issue. We should try to create tasks that
>>> can be referenced by patches in Gerrit.
>>>
>>> *How to work on a spec (blueprint)?*
>>> File a story like you used to file a blueprint, and create tasks for your
>>> plan. You might also want to create a task for adding a spec (in the
>>> heat-specs repo) if your blueprint needs a document to explain it.
>>> I have left the current blueprint page open, so if you would like to create
>>> a story from a BP, you can still get the information. From now on we will
>>> work with a task-driven workflow, so BPs act no differently from a bug in
>>> StoryBoard (which is a story with many tasks).
>>>
>>> *Where should I put my story?*
>>> We migrated all heat sub-projects to StoryBoard to try to keep the impact
>>> on whatever you're doing as small as possible. However, if you plan to
>>> create a new story, *please create it under the heat project [4]* and tag
>>> it with what it might affect (like python-heatclient, heat-dashboard,
>>> heat-agents). We hope to keep users' stories in one place so that all
>>> stories get better attention and project maintainers don't need to search
>>> separate places to find them.
>>>
>>> *How to connect from Gerrit to StoryBoard?*
>>> We usually use the following keys to reference Launchpad:
>>> Closes-Bug: ###
>>> Partial-Bug: ###
>>> Related-Bug: ###
>>>
>>> Now in StoryBoard, you can use the following keys:
>>> Task: ##
>>> Story: ##
>>> You can find more info in [3].
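>>>
>>> For example, a commit message footer referencing both would look like
>>> this (story/task numbers are illustrative):
>>>
>>> Story: 2001234
>>> Task: 5678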
>>>
>>> *What do I need to do for my existing bugs/BPs?*
>>> Your bugs were automatically migrated to StoryBoard; however, the
>>> references in your patches were not, so you need to change your commit
>>> messages to replace the old Launchpad links with new links to StoryBoard.
>>>
>>> *Do we still need Launchpad after all this migration is done?*
>>> As planned, we won't need Launchpad for heat anymore once we are done
>>> migrating; filing new bugs/BPs in Launchpad will be forbidden. We will also
>>> try to provide as much information as possible. Hopefully, we can make
>>> everyone happy. For bugs newly created during/after the migration, don't
>>> worry: we will disallow creating further new bugs/BPs and do a second
>>> migration pass so yours won't be missed.
>>>
>>> [1] https://storyboard.openstack.org/
>>> [2] https://storyboard.openstack.org/#!/project_group/82
>>> [3] https://docs.openstack.org/infra/manual/developers.html#development-workflow
>>> [4] https://storyboard.openstack.org/#!/project/989
>>> [5] https://docs.openstack.org/infra/storyboard/migration.html
>>> [6] https://docs.openstack.org/infra/storyboard/gui/tasks_stories_tags.html#what-is-a-story
>>>
>>>
>>>
>>> --
>>> May The Force of OpenStack Be With You,
>>>
>>> *Rico Lin* irc: ricolin
>>>
>>>
>>
>>
>> --
>> May The Force of OpenStack Be With You,
>>
>> *Rico Lin* irc: ricolin
>>
>>
>
>
> --
> May The Force of OpenStack Be With You,
>
> *Rico Lin* irc: ricolin
>
>


-- 
May The Force of OpenStack Be With You,

*Rico Lin* irc: ricolin

Re: [Openstack-operators] octavia worker on ocata

2018-05-10 Thread Ignazio Cassano
I am sorry,
I forgot to set the topic attribute in the oslo_messaging section.
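
For anyone hitting the same error, the fix amounts to something like this in
octavia.conf (the topic value here is illustrative; check your
distribution's default):

[oslo_messaging]
topic = octavia_prov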

Regards
Ignazio

2018-05-10 11:58 GMT+02:00 Ignazio Cassano :

> Hello everyone,
> I've just installed octavia on ocata.
> All octavia services are running except worker.
> It reports the following error in worker.log:
>
> 2018-05-10 11:33:27.404 121193 ERROR oslo_service.service InvalidTarget: A
> server's target must have topic and server names specified: <Target
> server=podto2-octavia>
> 2018-05-10 11:33:27.404 121193 ERROR oslo_service.service
>
> Could anyone help me ?
> Regards
> Ignazio
>


[Openstack-operators] octavia worker on ocata

2018-05-10 Thread Ignazio Cassano
Hello everyone,
I've just installed octavia on ocata.
All octavia services are running except worker.
It reports the following error in worker.log:

2018-05-10 11:33:27.404 121193 ERROR oslo_service.service InvalidTarget: A
server's target must have topic and server names specified: <Target
server=podto2-octavia>
2018-05-10 11:33:27.404 121193 ERROR oslo_service.service

Could anyone help me ?
Regards
Ignazio