Re: [openstack-dev] [all][sdk][heat] Integrating OpenStack and k8s with a service broker

2018-06-08 Thread Rico Lin
Zane Bitter wrote on Saturday, June 9, 2018 at 9:20 AM:
>
> IIUC you're talking about a Heat resource that calls out to a service
> broker using the Open Service Broker API? (Basically acting like the
> Kubernetes Service Catalog.) That would be cool, as it would allow us to
> orchestrate services written for Kubernetes/CloudFoundry using Heat.
> Although probably not as easy as it sounds at first glance ;)
At first glance, I thought our new service would also wrap the API up with
Ansible playbooks: a playbook to create a resource, and another playbook to
drive the Service Broker API. So we could directly use that playbook
instead of calling the Service Broker APIs. No? :)
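For illustration only, a rough sketch of driving such a playbook directly
from Python with ansible-runner (the playbook name, working directory and
variables below are hypothetical placeholders):

    import ansible_runner

    # Run a hypothetical resource-creation playbook directly instead of
    # calling the Service Broker API.
    result = ansible_runner.run(
        private_data_dir='/tmp/osb-demo',
        playbook='provision-server.yml',
        extravars={'flavor': 'm1.small', 'image': 'cirros'},
    )
    print(result.status, result.rc)   # e.g. 'successful', 0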

I think we can start by trying to build playbooks before we start planning
crazy ideas :)
>
> It wouldn't rely on _this_ set of playbook bundles though, because this
> one is only going to expose OpenStack resources, which are already
> exposed in Heat. (Unless you're suggesting we replace all of the current
> resource plugins in Heat with Ansible playbooks via the service broker?
> In which case... that's not gonna happen ;)
Right, we should use OS::Heat::Stack to expose resources from other
OpenStack clouds, not this.
>
> So Heat could adopt this at any time to add support for resources
> exposed by _other_ service brokers, such as the AWS/Azure/GCE service
> brokers or other playbooks exposed through Automation Broker.
>

I like the idea of adding support for resources exposed by other service
brokers.

--
May The Force of OpenStack Be With You,

Rico Lin 林冠宇
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][sdk][heat] Integrating OpenStack and k8s with a service broker

2018-06-08 Thread Zane Bitter

On 08/06/18 02:40, Rico Lin wrote:

Thanks, Zane, for putting this up.
It's a great service for exposing infrastructure to applications, and a 
potential piece of cross-community work as well.

 >
 > Would you be interested in working on a new project to implement this
 > integration? Reply to this thread and let's collect a list of volunteers
 > to form the initial core review team.
 >
Glad to help

 > I'd prefer to go with the pure-Ansible autogenerated way so we can have
 > support for everything, but looking at the GCP[5]/Azure[4]/AWS[3]
 > brokers they have 10, 11 and 17 services respectively, so arguably we
 > could get a comparable number of features exposed without investing
 > crazy amounts of time if we had to write templates explicitly.
 >
If we're going to create another project to provide this service, I 
believe pure Ansible will indeed be the better option.


TBH I don't think we can know for sure until we've tried building a few 
playbooks by hand and figured out whether they're similar enough that we 
can autogenerate them all, or if they need so much hand-tuning that it 
isn't feasible. But I'm a big fan of autogeneration if it works.


Once the service gets stable, it's actually quite easy (at first glance) for 
Heat to adopt this (just create a service broker with our new service 
while creating a resource, I believe?).


IIUC you're talking about a Heat resource that calls out to a service 
broker using the Open Service Broker API? (Basically acting like the 
Kubernetes Service Catalog.) That would be cool, as it would allow us to 
orchestrate services written for Kubernetes/CloudFoundry using Heat. 
Although probably not as easy as it sounds at first glance ;)
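
(For the sake of illustration, the create handler of such a resource could
boil down to an Open Service Broker provisioning call roughly like the
sketch below; the broker URL, credentials and service/plan IDs are made-up
placeholders, not anything from an existing broker.)

    import uuid

    import requests

    BROKER = 'https://broker.example.com'        # hypothetical broker
    instance_id = str(uuid.uuid4())

    resp = requests.put(
        '%s/v2/service_instances/%s' % (BROKER, instance_id),
        params={'accepts_incomplete': 'true'},   # allow async provisioning
        headers={'X-Broker-API-Version': '2.13'},
        auth=('broker-user', 'broker-password'), # placeholder credentials
        json={
            'service_id': 'example-service-id',  # from the broker catalog
            'plan_id': 'example-plan-id',
            'organization_guid': 'example-org',
            'space_guid': 'example-space',
            'parameters': {'size': 'small'},
        },
    )
    resp.raise_for_status()                      # 200/201/202 on success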


It wouldn't rely on _this_ set of playbook bundles though, because this 
one is only going to expose OpenStack resources, which are already 
exposed in Heat. (Unless you're suggesting we replace all of the current 
resource plugins in Heat with Ansible playbooks via the service broker? 
In which case... that's not gonna happen ;)


So Heat could adopt this at any time to add support for resources 
exposed by _other_ service brokers, such as the AWS/Azure/GCE service 
brokers or other playbooks exposed through Automation Broker.


It sounds like the use case for the service broker might be when an 
application requests a single resource exposed by the broker, and the 
resource dependencies will be relatively simple. We should just keep it 
simple: don't start thinking about who created that application and how, 
and keep the application out of the dependency graph (I mean, if the user 
wants to manage the full set of dependencies, they can consider using Heat 
with the service broker once we've integrated it).


--
May The Force of OpenStack Be With You,

Rico Lin


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [vitrage] matching webhook vs alarm list

2018-06-08 Thread Eric K
Hi, I'm building an integration with the Vitrage webhook and looking for
some clarification on which ID to use for matching a webhook notification
to the specific alarm from the alarm list. In the sample alarm list
response, there is an 'id' field and a 'vitrage_id' field [1], whereas
in the sample webhook notification payload, there is a 'vitrage_id'
field [2]. I'd assume we can match by 'vitrage_id', but the samples
have very different formats for 'vitrage_id', so I just want
to confirm. Thank you!

[1] https://docs.openstack.org/vitrage/latest/contributor/vitrage-api.html#id22
[2]
{
  "notification": "vitrage.alarm.activate",
  "payload": {
    "vitrage_id": "2def31e9-6d9f-4c16-b007-893caa806cd4",
    "resource": {
      "vitrage_id": "437f1f4c-ccce-40a4-ac62-1c2f1fd9f6ac",
      "name": "app-1-server-1-jz6qvznkmnif",
      "update_timestamp": "2018-01-22 10:00:34.327142+00:00",
      "vitrage_category": "RESOURCE",
      "vitrage_operational_state": "OK",
      "vitrage_type": "nova.instance",
      "project_id": "8f007e5ba0944e84baa6f2a4f2b5d03a",
      "id": "9b7d93b9-94ec-41e1-9cec-f28d4f8d702c"
    },
    "update_timestamp": "2018-01-22T10:00:34Z",
    "vitrage_category": "ALARM",
    "state": "Active",
    "vitrage_type": "vitrage",
    "vitrage_operational_severity": "WARNING",
    "name": "Instance memory performance degraded"
  }
}
https://docs.openstack.org/vitrage/latest/contributor/notifier-webhook-plugin.html
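
(If matching on 'vitrage_id' is indeed correct, the correlation would be
roughly the following; the alarm list entries here are invented for
illustration.)

    # Webhook payload from [2], trimmed to the relevant fields.
    webhook = {
        "notification": "vitrage.alarm.activate",
        "payload": {
            "vitrage_id": "2def31e9-6d9f-4c16-b007-893caa806cd4",
            "name": "Instance memory performance degraded",
        },
    }

    # Hypothetical result of the alarm list API.
    alarms = [
        {"vitrage_id": "2def31e9-6d9f-4c16-b007-893caa806cd4",
         "name": "Instance memory performance degraded"},
        {"vitrage_id": "437f1f4c-ccce-40a4-ac62-1c2f1fd9f6ac",
         "name": "some other alarm"},
    ]

    wanted = webhook["payload"]["vitrage_id"]
    match = next((a for a in alarms if a["vitrage_id"] == wanted), None)
    print(match)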

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [vitrage] update_timestamp precision

2018-06-08 Thread Eric K
Hi, I'm building an integration with the Vitrage webhook and looking for
some clarification on the timestamp precision to expect.

In the sample webhook payload found in the docs, the resource and the alarm
show different timestamp precisions:
https://docs.openstack.org/vitrage/latest/contributor/notifier-webhook-plugin.html


Thank you!



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TC] Stein Goal Selection

2018-06-08 Thread Kendall Nelson
I've been slowly working on building out and adding to the documentation
about how to do things, but I can make that my top priority so that you all
have a little more guidance. I'll try to get some patches out in the next
week or so.

Storyboard seems complicated, but I think most of the mental hoops come
from the fact that you have the flexibility to manage and organize work
however you'd like, "you" being both an individual and the project. Also,
each project is so different (some use bps and specs, some ignore bps
entirely, others use milestones, some just care about which bugs are
new...) that we didn't want to force a lot of required fields and whatnot
on users. I can see how the flexibility can be a bit daunting, though.
Hopefully the docs I write will help clarify things.

Also, that video does talk a little bit about usage at the end actually.
Adam goes into different ways of using worklists and boards to organize
things. Keep an eye out for my patches though :)

-Kendall (diablo_rojo)

On Fri, Jun 8, 2018 at 12:06 PM Matt Riedemann  wrote:

> On 6/8/2018 1:03 PM, Jay S Bryant wrote:
> > Helps if I include the link to the video:
> >
> https://www.openstack.org/videos/boston-2017/storyboard-101-survival-guide-to-the-great-migration
> >
> >
> > Hope that helps.
>
> Yeah I linked to that in my original email - that's about the migration,
> not usage. I want to see stuff like what's the normal workflow for a new
> user to storyboard, reporting bugs, linking those to gerrit changes,
> multiple tasks, how to search for stuff (search was failing me big time
> yesterday), etc.
>
> Also, I can't even add a 2nd task to an existing story without getting a
> 500 transaction error from the DB, so that seems like a major scaling
> issue if we're all going to be migrating.
>
> --
>
> Thanks,
>
> Matt
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Reminder: UC Meeting Monday 1400UTC

2018-06-08 Thread Melvin Hillsman
Hey everyone,

Please see https://wiki.openstack.org/wiki/Governance/Foundation/UserCommittee
for UC meeting info and add additional agenda items if needed.

-- 
Kind regards,

Melvin Hillsman
mrhills...@gmail.com
mobile: (832) 264-2646
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TC] Stein Goal Selection

2018-06-08 Thread Matt Riedemann

On 6/8/2018 1:03 PM, Jay S Bryant wrote:
Helps if I include the link to the video: 
https://www.openstack.org/videos/boston-2017/storyboard-101-survival-guide-to-the-great-migration 



Hope that helps.


Yeah I linked to that in my original email - that's about the migration, 
not usage. I want to see stuff like what's the normal workflow for a new 
user to storyboard, reporting bugs, linking those to gerrit changes, 
multiple tasks, how to search for stuff (search was failing me big time 
yesterday), etc.


Also, I can't even add a 2nd task to an existing story without getting a 
500 transaction error from the DB, so that seems like a major scaling 
issue if we're all going to be migrating.


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TC] Stein Goal Selection

2018-06-08 Thread Jay S Bryant



On 6/7/2018 5:48 PM, Matt Riedemann wrote:

On 6/7/2018 3:25 PM, Kendall Nelson wrote:
I know it doesn't fit the shiny user-facing docket that was discussed 
at the Forum, but I do think it's time we make migration official in 
some capacity, as a release goal or some other way. Having migrated 
Ironic and having TripleO on the schedule for migration (as requested 
during the last goal discussion), in addition to having migrated Heat, 
Barbican and several others in the last few months, we have reached 
the point where I think migration of the rest of the projects is 
attainable by the end of Stein.


Thoughts?


I haven't used it much, but it would be really nice if someone could 
record a modern 'how to storyboard' video for just basic usage/flows 
since most people are used to launchpad by now so dealing with an 
entirely new task tracker is not trivial (or at least, not something I 
want to spend a lot of time figuring out).


I found:

https://www.youtube.com/watch?v=b2vJ9G5pNb4

https://www.youtube.com/watch?v=n_PaKuN4Skk

But those are a bit old.

Helps if I include the link to the video: 
https://www.openstack.org/videos/boston-2017/storyboard-101-survival-guide-to-the-great-migration


Hope that helps.

Jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [placement] Upgrade concerns with nested Resource Providers

2018-06-08 Thread Eric Fried
There is now a blueprint [1] and draft spec [2].  Reviews welcomed.

[1] https://blueprints.launchpad.net/nova/+spec/reshape-provider-tree
[2] https://review.openstack.org/#/c/572583/

On 06/04/2018 06:00 PM, Eric Fried wrote:
> There has been much discussion.  We've gotten to a point of an initial
> proposal and are ready for more (hopefully smaller, hopefully
> conclusive) discussion.
> 
> To that end, there will be a HANGOUT tomorrow (TUESDAY, JUNE 5TH) at
> 1500 UTC.  Be in #openstack-placement to get the link to join.
> 
> The strawpeople outlined below and discussed in the referenced etherpad
> have been consolidated/distilled into a new etherpad [1] around which
> the hangout discussion will be centered.
> 
> [1] https://etherpad.openstack.org/p/placement-making-the-(up)grade
> 
> Thanks,
> efried
> 
> On 06/01/2018 01:12 PM, Jay Pipes wrote:
>> On 05/31/2018 02:26 PM, Eric Fried wrote:
 1. Make everything perform the pivot on compute node start (which can be
     re-used by a CLI tool for the offline case)
 2. Make everything default to non-nested inventory at first, and provide
     a way to migrate a compute node and its instances one at a time (in
     place) to roll through.
>>>
>>> I agree that it sure would be nice to do ^ rather than requiring the
>>> "slide puzzle" thing.
>>>
>>> But how would this be accomplished, in light of the current "separation
>>> of responsibilities" drawn at the virt driver interface, whereby the
>>> virt driver isn't supposed to talk to placement directly, or know
>>> anything about allocations?
>> FWIW, I don't have a problem with the virt driver "knowing about
>> allocations". What I have a problem with is the virt driver *claiming
>> resources for an instance*.
>>
>> That's what the whole placement claims resources things was all about,
>> and I'm not interested in stepping back to the days of long racy claim
>> operations by having the compute nodes be responsible for claiming
>> resources.
>>
>> That said, once the consumer generation microversion lands [1], it
>> should be possible to *safely* modify an allocation set for a consumer
>> (instance) and move allocation records for an instance from one provider
>> to another.
>>
>> [1] https://review.openstack.org/#/c/565604/
>>
>>> Here's a first pass:
>>>
>>> The virt driver, via the return value from update_provider_tree, tells
>>> the resource tracker that "inventory of resource class A on provider B
>>> have moved to provider C" for all applicable AxBxC.  E.g.
>>>
>>> [ { 'from_resource_provider': <uuid>,
>>>     'moved_resources': [VGPU: 4],
>>>     'to_resource_provider': <uuid>
>>>   },
>>>   { 'from_resource_provider': <uuid>,
>>>     'moved_resources': [VGPU: 4],
>>>     'to_resource_provider': <uuid>
>>>   },
>>>   { 'from_resource_provider': <uuid>,
>>>     'moved_resources': [
>>>         SRIOV_NET_VF: 2,
>>>         NET_BANDWIDTH_EGRESS_KILOBITS_PER_SECOND: 1000,
>>>         NET_BANDWIDTH_INGRESS_KILOBITS_PER_SECOND: 1000,
>>>     ],
>>>     'to_resource_provider': <uuid>
>>>   }
>>> ]
>>>
>>> As today, the resource tracker takes the updated provider tree and
>>> invokes [1] the report client method update_from_provider_tree [2] to
>>> flush the changes to placement.  But now update_from_provider_tree also
>>> accepts the return value from update_provider_tree and, for each "move":
>>>
>>> - Creates provider C (as described in the provider_tree) if it doesn't
>>> already exist.
>>> - Creates/updates provider C's inventory as described in the
>>> provider_tree (without yet updating provider B's inventory).  This ought
>>> to create the inventory of resource class A on provider C.
>>
>> Unfortunately, right here you'll introduce a race condition. As soon as
>> this operation completes, the scheduler will have the ability to throw
>> new instances on provider C and consume the inventory from it that you
>> intend to give to the existing instance that is consuming from provider B.
>>
>>> - Discovers allocations of rc A on rp B and POSTs to move them to rp C*.
>>
>> For each consumer of resources on rp B, right?
>>
>>> - Updates provider B's inventory.
>>
>> Again, this is problematic because the scheduler will have already begun
>> to place new instances on B's inventory, which could very well result in
>> incorrect resource accounting on the node.
>>
>> We basically need to have one giant new REST API call that accepts the
>> list of "move instructions" and performs all of the instructions in a
>> single transaction. :(
>>
>>> (*There's a hole here: if we're splitting a glommed-together inventory
>>> across multiple new child providers, as the VGPUs in the example, we
>>> don't know which allocations to put where.  The virt driver should know
>>> which instances own which specific inventory units, and would be able to
>>> report that info within the data structure.  That's getting kinda close
>>> to the virt driver mucking with allocations, but maybe it fits well
>>> enough into this model to be 

Re: [openstack-dev] [nova] increasing the number of allowed volumes attached per instance > 26

2018-06-08 Thread Gerald McBrearty
Dan Smith  wrote on 06/08/2018 08:46:01 AM:

> From: Dan Smith 
> To: melanie witt 
> Cc: "OpenStack Development Mailing List \(not for usage questions\)"
> , 
openstack-operat...@lists.openstack.org
> Date: 06/08/2018 08:48 AM
> Subject: Re: [openstack-dev] [nova] increasing the number of allowed
> volumes attached per instance > 26
> 
> > Some ideas that have been discussed so far include:
> 
> FYI, these are already in my order of preference.
> 
> > A) Selecting a new, higher maximum that still yields reasonable
> > performance on a single compute host (64 or 128, for example). Pros:
> > helps prevent the potential for poor performance on a compute host
> > from attaching too many volumes. Cons: doesn't let anyone opt-in to a
> > higher maximum if their environment can handle it.
> 
> I prefer this because I think it can be done per virt driver, for
> whatever actually makes sense there. If powervm can handle 500 volumes
> in a meaningful way on one instance, then that's cool. I think libvirt's
> limit should likely be 64ish.
> 

As long as this can be done on a per virt driver basis, as Dan says,
I think I also would prefer this option.

Actually, the meaningful number is much higher than 500 for powervm.
I'm thinking the powervm limit could likely be 4096ish. On powervm we have
an OS where the meaningful limit is 4096 volumes, but routinely most
operators would have between 1000-2000.

-Gerald

> > B) Creating a config option to let operators choose how many volumes
> > allowed to attach to a single instance. Pros: lets operators opt-in to
> > a maximum that works in their environment. Cons: it's not discoverable
> > for those calling the API.
> 
> This is a fine compromise, IMHO, as it lets operators tune it per
> compute node based on the virt driver and the hardware. If one compute
> is using nothing but iSCSI over a single 10g link, then they may need to
> clamp that down to something more sane.
> 
> Like the per virt driver restriction above, it's not discoverable via
> the API, but if it varies based on compute node and other factors in a
> single deployment, then making it discoverable isn't going to be very
> easy anyway.
> 
> > C) Create a configurable API limit for maximum number of volumes to
> > attach to a single instance that is either a quota or similar to a
> > quota. Pros: lets operators opt-in to a maximum that works in their
> > environment. Cons: it's yet another quota?
> 
> Do we have any other quota limits that are per-instance like this would
> be? If not, then this would likely be weird, but if so, then this would
> also be an option, IMHO. However, it's too much work for what is really
> not a hugely important problem, IMHO, and both of the above are
> lighter-weight ways to solve this and move on.
> 
> --Dan
> 
> 
__
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TC] Stein Goal Selection

2018-06-08 Thread Jay S Bryant



On 6/7/2018 5:48 PM, Matt Riedemann wrote:

On 6/7/2018 3:25 PM, Kendall Nelson wrote:
I know it doesn't fit the shiny user-facing docket that was discussed 
at the Forum, but I do think it's time we make migration official in 
some capacity, as a release goal or some other way. Having migrated 
Ironic and having TripleO on the schedule for migration (as requested 
during the last goal discussion), in addition to having migrated Heat, 
Barbican and several others in the last few months, we have reached 
the point where I think migration of the rest of the projects is 
attainable by the end of Stein.


Thoughts?


I haven't used it much, but it would be really nice if someone could 
record a modern 'how to storyboard' video for just basic usage/flows 
since most people are used to launchpad by now so dealing with an 
entirely new task tracker is not trivial (or at least, not something I 
want to spend a lot of time figuring out).


I found:

https://www.youtube.com/watch?v=b2vJ9G5pNb4

https://www.youtube.com/watch?v=n_PaKuN4Skk

But those are a bit old.


Matt,

The following presentation was done in Boston.  If I remember correctly 
they covered some of the basics on how to use Storyboard. [1]


I feel your pain on the migration.  I have used it a bit and it is 
kind of like a combination of Launchpad and Trello.  I think once we 
start using it the learning curve won't be so bad.


Jay


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] increasing the number of allowed volumes attached per instance > 26

2018-06-08 Thread Dan Smith
> Some ideas that have been discussed so far include:

FYI, these are already in my order of preference.

> A) Selecting a new, higher maximum that still yields reasonable
> performance on a single compute host (64 or 128, for example). Pros:
> helps prevent the potential for poor performance on a compute host
> from attaching too many volumes. Cons: doesn't let anyone opt-in to a
> higher maximum if their environment can handle it.

I prefer this because I think it can be done per virt driver, for
whatever actually makes sense there. If powervm can handle 500 volumes
in a meaningful way on one instance, then that's cool. I think libvirt's
limit should likely be 64ish.

> B) Creating a config option to let operators choose how many volumes
> allowed to attach to a single instance. Pros: lets operators opt-in to
> a maximum that works in their environment. Cons: it's not discoverable
> for those calling the API.

This is a fine compromise, IMHO, as it lets operators tune it per
compute node based on the virt driver and the hardware. If one compute
is using nothing but iSCSI over a single 10g link, then they may need to
clamp that down to something more sane.

Like the per virt driver restriction above, it's not discoverable via
the API, but if it varies based on compute node and other factors in a
single deployment, then making it discoverable isn't going to be very
easy anyway.
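
(As a sketch only: option B could look something like the oslo.config
option below, registered per compute node; the option name, group and
default are hypothetical, not an agreed Nova setting.)

    from oslo_config import cfg

    volume_opts = [
        cfg.IntOpt('max_volumes_per_instance',
                   default=26,
                   min=1,
                   help='Maximum number of volumes allowed to attach to a '
                        'single instance on this compute node.'),
    ]

    CONF = cfg.CONF
    CONF.register_opts(volume_opts, group='compute')

    # Operators would then tune it per compute node in nova.conf, e.g.:
    #   [compute]
    #   max_volumes_per_instance = 64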

> C) Create a configurable API limit for maximum number of volumes to
> attach to a single instance that is either a quota or similar to a
> quota. Pros: lets operators opt-in to a maximum that works in their
> environment. Cons: it's yet another quota?

Do we have any other quota limits that are per-instance like this would
be? If not, then this would likely be weird, but if so, then this would
also be an option, IMHO. However, it's too much work for what is really
not a hugely important problem, IMHO, and both of the above are
lighter-weight ways to solve this and move on.

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Edge-computing] [edge][glance][mixmatch]: Wiki of the possible architectures for image synchronisation

2018-06-08 Thread Erno Kuvaja
Hi,

Answering inline.

Best,
Erno "jokke" Kuvaja

On Thu, Jun 7, 2018 at 11:49 AM, Csatari, Gergely (Nokia -
HU/Budapest)  wrote:
> Hi,
>
>
>
> I did some work on the figures and realised that I have some questions
> related to the alternative options:
>
>
>
> Multiple backends option:
>
> What is the API between Glance and the Glance backends?
glance_store library
> How is it possible to implement location aware synchronisation (synchronise
> images only to those cloud instances where they are needed)?
This needs a bit of hooking. We need to update the locations in Glance
once the replication has happened.
> Is it possible to have different OpenStack versions in the different cloud
> instances?
In my understanding it's not supported to mix versions within an
OpenStack cloud, apart from during an upgrade.
> Can a cloud instance use the locally synchronised images in case of a
> network connection break?
That depends a lot on the implementation. If there is a local glance
node with a replicated db and store, yes.
> Is it possible to implement this without storing database credentials on the
> edge cloud instances?
Again, that depends on the deployment. You definitely cannot have both
access during a network outage and access without db credentials. If one
needs local access to images without db credentials, there is always the
possibility of a local Ceph back-end with a remote glance-api node. In
this case Nova can talk directly to the local Ceph back-end and
communicate with the centralized glance-api that has the credentials to
the db. The problem with losing the network in this scenario is that
Nova will have no idea whether the user has rights to use the image, and
it will not know the path to that image's data.
>
>
>
> Independent synchronisation service:
>
> If I understood [1] correctly, mixmatch can help Nova attach a remote
> volume, but it will not help in synchronizing the images. Is this true?
>
>
>
>
>
> As I promised in the Edge Compute Group call I plan to organize an IRC
> review meeting to check the wiki. Please indicate your availability in [2].
>
>
>
> [1]: https://mixmatch.readthedocs.io/en/latest/
>
> [2]: https://doodle.com/poll/bddg65vyh4qwxpk5
>
>
>
> Br,
>
> Gerg0
>
>
>
> From: Csatari, Gergely (Nokia - HU/Budapest)
> Sent: Wednesday, May 23, 2018 8:59 PM
> To: OpenStack Development Mailing List (not for usage questions)
> ; edge-comput...@lists.openstack.org
> Subject: [edge][glance]: Wiki of the possible architectures for image
> synchronisation
>
>
>
> Hi,
>
>
>
> Here I send the wiki page [1] where I summarize what I understood from the
> Forum session about image synchronisation in edge environment [2], [3].
>
>
>
> Please check and correct/comment.
>
>
>
> Thanks,
>
> Gerg0
>
>
>
>
>
> [1]: https://wiki.openstack.org/wiki/Image_handling_in_edge_environment
>
> [2]: https://etherpad.openstack.org/p/yvr-edge-cloud-images
>
> [3]:
> https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21768/image-handling-in-an-edge-cloud-infrastructure
>
>
> ___
> Edge-computing mailing list
> edge-comput...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/edge-computing
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Organizational diversity tag

2018-06-08 Thread Rico Lin
IMO, the goal is to encourage good behavior, not to get in the way of those
who can't reach that goal.

A tag is a good way to encourage this, but it is also not fair to those
projects that barely have enough core members to review (think about the
projects with fewer than four active cores). I'm wondering if anyone has
ideas on how we can reach that goal (a tag can be one way; IMO it just
needs to provide a fair condition for all).

How about we set a policy and documentation to encourage people to join the
core reviewer team (this can join forces with the Enterprise guideline we
planned at the Forum) if they wish to bring diversity to a project?

On the second idea, I think the TC (or people empowered by the TC) should
provide (or guide projects to provide) a health check report for projects.
TCs have been looking for liaisons with projects ([1]). This is definitely
a good report, as feedback from projects to the TC (and also a good way to
understand what each project has been doing and whether that project needs
any help), and a way to provide a guideline for projects to understand how
they can do better. A guideline means both -1 and +1 (anyone who has run a
project long enough to be a core/PTL should at least understand that a -1
only means that, since this project is under the TC's guidance, we are just
trying to help). Therefore a -1 is important.

As an alternative, we can also try to target the problem when it occurs,
but I personally wonder whether a single core reviewer on a team would dare
to speak out in that case.

I think this is a hard issue, but we have to pick one action, try it, and
see. That's better than keeping things the way they are and ignoring the
problem.


[1] https://wiki.openstack.org/wiki/Technical_Committee_Tracker#Liaisons

Michael Johnson wrote on Thursday, June 7, 2018 at 2:48 AM:

> Octavia also has an informal rule about two cores from the same
> company merging patches. I support this because it makes sure we have
> a diverse perspective on the patches. Specifically it has worked well
> for us as all of the cores have different cloud designs, so it catches
> anything that would limit/conflict with the different OpenStack
> topologies.
>
> That said, we don't hard enforce this or police it, it is just an
> informal policy to make sure we get input from the wider team.
> Currently we only have one company with two cores.
>
> That said, my issue with the current diversity calculations is they
> tend to be skewed by the PTL role. People have a tendency to defer to
> the PTL to review/comment/merge patches, so if the PTL shares a
> company with another core the diversity numbers get skewed heavily
> towards that company.
>
> Michael
>
> On Wed, Jun 6, 2018 at 5:06 AM,   wrote:
> >> -Original Message-
> >> From: Doug Hellmann 
> >> Sent: Monday, June 4, 2018 5:52 PM
> >> To: openstack-dev 
> >> Subject: Re: [openstack-dev] [tc] Organizational diversity tag
> >>
> >> Excerpts from Zane Bitter's message of 2018-06-04 17:41:10 -0400:
> >> > On 02/06/18 13:23, Doug Hellmann wrote:
> >> > > Excerpts from Zane Bitter's message of 2018-06-01 15:19:46 -0400:
> >> > >> On 01/06/18 12:18, Doug Hellmann wrote:
> >> > >
> >> > > [snip]
> >> > Apparently enough people see it the way you described that this is
> >> > probably not something we want to actively spread to other projects at
> >> > the moment.
> >>
> >> I am still curious to know which teams have the policy. If it is more
> >> widespread than I realized, maybe it's reasonable to extend it and use
> it as
> >> the basis for a health check after all.
> >>
> >
> > A while back, Trove had this policy. When Rackspace, HP, and Tesora had
> core reviewers, (at various times, eBay, IBM and Red Hat also had cores),
> the agreement was that multiple cores from any one company would not merge
> a change unless it was an emergency. It was not formally written down (to
> my knowledge).
> >
> > It worked well, and ensured that the operators didn't get surprised by
> some unexpected thing that took down their service.
> >
> > -amrith
> >
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


-- 
May The Force of OpenStack Be With You,

Rico Lin
irc: ricolin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Edge-computing] [edge][glance][mixmatch]: Wiki of the possible architectures for image synchronisation

2018-06-08 Thread Waines, Greg
Responses in-lined below,
Greg.

From: "Csatari, Gergely (Nokia - HU/Budapest)" 
Date: Friday, June 8, 2018 at 3:39 AM
To: Greg Waines , 
"openstack-dev@lists.openstack.org" , 
"edge-comput...@lists.openstack.org" 
Subject: RE: [Edge-computing] [edge][glance][mixmatch]: Wiki of the possible 
architectures for image synchronisation

Hi,

Going inline.

From: Waines, Greg [mailto:greg.wai...@windriver.com]
Sent: Thursday, June 7, 2018 2:24 PM


I had some additional questions/comments on the Image Synchronization Options ( 
https://wiki.openstack.org/wiki/Image_handling_in_edge_environment ):


One Glance with multiple backends

  *   In this scenario, are all Edge Clouds simply configured with the one 
central glance for its GLANCE ENDPOINT ?
 *   i.e. GLANCE is a typical shared service in a multi-region environment ?

[G0]: In my understanding yes.


  *   If so,
how does this OPTION support the requirement for Edge Cloud Operation when 
disconnected from Central Location ?

[G0]: This is an open question for me also.


Several Glances with an independent synchronization service(PUSH)

  *   I refer to this as the PUSH model
  *   I don’t believe you have to ( or necessarily should) rely on the backend 
to do the synchronization of the images
 *   i.e. the ‘Synch Service’ could do this strictly through Glance REST 
APIs
(making it independent of the particular Glance backend ... and allowing the 
Glance Backends at Central and Edge sites to actually be different)
[G0]: Okay, I can update the wiki to reflect this. Should we keep the 
“synchronization by the backend” option as another alternative?
[Greg] Yeah we should keep it as an alternative.

  *   I think the ‘Synch Service’ MUST be able to support ‘selective/multicast’ 
distribution of Images from Central to Edge for Image Synchronization
 *   i.e. you don’t want Central Site pushing ALL images to ALL Edge Sites 
... especially for the small Edge Sites
[G0]: Yes, the question is how to define these synchronization policies.
[Greg] Agreed ... we’ve had some very high-level discussions with end users, 
but haven’t put together a proposal yet.

  *   Not sure ... but I didn’t think this was the model being used in mixmatch 
... thought mixmatch was more the PULL model (below)

[G0]: Yes, this is more or less my understanding. I remove the mixmatch 
reference from this chapter.

One Glance and multiple Glance API Servers   (PULL)

  *   I refer to this as the PULL model
  *   This is the current model supported in StarlingX’s Distributed Cloud 
sub-project
 *   We run glance-api on all Edge Clouds ... that talk to glance-registry 
on the Central Cloud, and
 *   We have glance-api setup for caching such that only the first access 
to an particular image incurs the latency of the image transfer from Central to 
Edge
[G0]: Do you do image caching in Glance API or do you rely on the image cache 
in Nova? In the Forum session there were some discussions about this and I 
think the conclusion was that using the image cache of Nova is enough.
[Greg] We enabled image caching in the Glance API.
 I believe that Nova Image Caching caches at the compute node ... 
this would work ok for all-in-one edge clouds or small edge clouds.
 But glance-api caching caches at the edge cloud level, so it works 
better for large edge clouds with lots of compute nodes.

  *   this PULL model effectively implements the location aware synchronization 
you talk about below (i.e. synchronise images only to those cloud instances 
where they are needed)?


In StarlingX Distributed Cloud,
We plan on supporting both the PUSH and PULL model ... suspect there are use 
cases for both.

[G0]: This means that you need an architecture supporting both.
Just for my curiosity what is the use case for the pull model once you have the 
push model in place?
[Greg] The PULL model certainly results in the most efficient distribution of 
images ... basically images are distributed ONLY to edge clouds that explicitly 
use the image.
Also if the use case is NOT concerned about incurring the latency of the image 
transfer from Central to Edge on the FIRST use of image then the PULL model 
could be preferred ... TBD.

Here is the updated wiki: 
https://wiki.openstack.org/wiki/Image_handling_in_edge_environment
[Greg] Looks good.

Greg.


Thanks,
Gerg0




From: "Csatari, Gergely (Nokia - HU/Budapest)" 
mailto:gergely.csat...@nokia.com>>
Date: Thursday, June 7, 2018 at 6:49 AM
To: 
"openstack-dev@lists.openstack.org" 
mailto:openstack-dev@lists.openstack.org>>, 
"edge-comput...@lists.openstack.org" 
mailto:edge-comput...@lists.openstack.org>>
Subject: Re: [Edge-computing] [edge][glance][mixmatch]: Wiki of the possible 
architectures for image synchronisation

Hi,

I did some work on the figures and realised that I have some questions related 
to the alternative options:


Re: [openstack-dev] [nova] increasing the number of allowed volumes attached per instance > 26

2018-06-08 Thread Kashyap Chamarthy
On Thu, Jun 07, 2018 at 01:07:48PM -0500, Matt Riedemann wrote:
> On 6/7/2018 12:56 PM, melanie witt wrote:
> > Recently, we've received interest about increasing the maximum number of
> > allowed volumes to attach to a single instance > 26. The limit of 26 is
> > because of a historical limitation in libvirt (if I remember correctly)
> > and is no longer limited at the libvirt level in the present day. So,
> > we're looking at providing a way to attach more than 26 volumes to a
> > single instance and we want your feedback.
> 
> The 26 volumes thing is a libvirt driver restriction.

The original limitation of 26 disks was because at that time there was
no 'virtio-scsi'.  

(With 'virtio-scsi', each controller allows up to 256 targets, and
each target can use any LUN (Logical Unit Number) from 0 to 16383
(inclusive).  Therefore, the maximum allowable number of disks on a single
'virtio-scsi' controller is 256 * 16384 == 4194304.)  Source: [1].
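
(Spelled out, that arithmetic is simply:)

    targets_per_controller = 256
    luns_per_target = 16384          # LUNs 0..16383 inclusive
    print(targets_per_controller * luns_per_target)   # 4194304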

[...]

> > Some ideas that have been discussed so far include:
> > 
> > A) Selecting a new, higher maximum that still yields reasonable
> > performance on a single compute host (64 or 128, for example). Pros:
> > helps prevent the potential for poor performance on a compute host from
> > attaching too many volumes. Cons: doesn't let anyone opt-in to a higher
> > maximum if their environment can handle it.

Option (A) can still be considered: We can limit it to 256 disks.  Why?

FWIW, I did some digging here:

The upstream libguestfs project, after some thorough testing, arrived at
a limit of 256 disks, and suggests the same for Nova.  And if anyone
wants to increase that limit, the proposer should come up with a fully
worked-through test plan. :-) (Try doing any meaningful I/O to that many
disks at once, and see how well that works out.)

What's more, the libguestfs upstream tests 256 disks, and even _that_
fails sometimes:

https://bugzilla.redhat.com/show_bug.cgi?id=1478201 -- "kernel runs
out of memory with 256 virtio-scsi disks"

The above bug is fixed now in kernel-4.17.0-0.rc3.git1.2. (And also
required a corresponding fix in QEMU[2], which is available from version
v2.11.0 onwards.)

[...]


[1] https://lists.nongnu.org/archive/html/qemu-devel/2017-04/msg02823.html
-- virtio-scsi limits
[2] https://git.qemu.org/?p=qemu.git;a=commit;h=5c0919d 

-- 
/kashyap

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][osc] Documenting compute API microversion gaps in OSC

2018-06-08 Thread Sylvain Bauza
On Fri, Jun 8, 2018 at 3:35 AM, Matt Riedemann  wrote:

> I've started an etherpad [1] to identify the compute API microversion gaps
> in python-openstackclient.
>
> It's a small start right now so I would appreciate some help on this, even
> just a few people looking at a couple of these per day would get it done
> quickly.
>
> Not all compute API microversions will require explicit changes to OSC,
> for example 2.3 [2] just adds some more fields to some API responses which
> might automatically get dumped in "show" commands. We just need to verify
> that the fields that come back in the response are actually shown by the
> CLI and then mark it in the etherpad.
>
> Once we identify the gaps, we can start talking about actually closing
> those gaps and deprecating the nova CLI, which could be part of a community
> wide goal - but there are other things going on in OSC right now (major
> refactor to use the SDK, core reviewer needs) so we'll have to figure out
> when the time is right.
>
> [1] https://etherpad.openstack.org/p/compute-api-microversion-gap-in-osc
> [2] https://docs.openstack.org/nova/latest/reference/api-microve
> rsion-history.html#maximum-in-kilo
>
>
Good idea, Matt. I think we could maybe discuss this with the First Contact
SIG, because it looks to me like some developers could help us with that,
and it doesn't require being a Nova expert.

I'll also try to see how I can help on this.
-Sylvain
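
(For anyone picking up an item from the etherpad, a rough sketch of one way
to do the per-microversion check Matt describes, using plain requests; the
endpoint, token and server ID are placeholders.)

    import requests

    COMPUTE = 'https://compute.example.com/v2.1'   # placeholder endpoint
    SERVER_ID = '<server-uuid>'
    TOKEN = '<keystone-token>'

    resp = requests.get(
        '%s/servers/%s' % (COMPUTE, SERVER_ID),
        headers={'X-Auth-Token': TOKEN,
                 'OpenStack-API-Version': 'compute 2.3'},
    )
    api_fields = set(resp.json()['server'])
    print(sorted(api_fields))
    # ...then compare against the rows printed by:
    #   openstack --os-compute-api-version 2.3 server show <server-uuid>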

-- 
>
> Thanks,
>
> Matt
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TC] Stein Goal Selection

2018-06-08 Thread Rico Lin
Sean McGinnis wrote on Tuesday, June 5, 2018 at 2:07 AM:
>
> Python 3 First
> ==
>
> One of the things brought up in the session was picking things that bring
> excitement and are obvious benefits to deployers and users of OpenStack
> services. While this one is maybe not as immediately obvious, I think this
> is something that will end up helping deployers and also falls into the
tech
> debt reduction category that will help us move quicker long term.
>
> Python 2 is going away soon, so I think we need something to help compel
folks
> to work on making sure we are ready to transition. This will also be a
good
> point to help switch the mindset over to Python 3 being the default used
> everywhere, with our Python 2 compatibility being just to continue legacy
> support.
>
+1 on the Python 3 first goal.
I think it's great if we can start investigating how projects have been
doing with py3.5+, and having a check job for py3.6 would be a nice start
for this goal. If possible, our goal should match what most users are
facing. I mention 3.6 because Ubuntu Artful and Fedora 26 use it by
default (see [1] for more info).

[1] http://lists.openstack.org/pipermail/openstack-dev/2018-June/131193.html


--
May The Force of OpenStack Be With You,
Rico Lin
irc: ricolin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TC] Stein Goal Selection

2018-06-08 Thread Rico Lin
Matt Riedemann wrote on Friday, June 8, 2018 at 6:49 AM:

> I haven't used it much, but it would be really nice if someone could
> record a modern 'how to storyboard' video for just basic usage/flows
> since most people are used to launchpad by now so dealing with an
> entirely new task tracker is not trivial (or at least, not something I
> want to spend a lot of time figuring out).
>
> I found:
>
> https://www.youtube.com/watch?v=b2vJ9G5pNb4
>
> https://www.youtube.com/watch?v=n_PaKuN4Skk
>
> But those are a bit old.
>
I created an Etherpad to collect questions on migrating from Launchpad to
StoryBoard for Heat (most of the information is general). Hope this helps:
https://etherpad.openstack.org/p/Heat-StoryBoard-Migration-Info
> --
>
> Thanks,
>
> Matt
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
May The Force of OpenStack Be With You,
Rico Lin
irc: ricolin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TC] Stein Goal Selection

2018-06-08 Thread Rico Lin
Kendall Nelson wrote on Friday, June 8, 2018 at 4:26 AM:
>
> I think that these two goals definitely fit the criteria we discussed in
Vancouver during the S Release Goal Forum Session. I know Storyboard
Migration was also mentioned after I had to dip out to another session so I
wanted to follow up on that.
>
+1. Migrating to StoryBoard does seem like a good way to go.
Heat just moved to StoryBoard, so there is not much long-term experience to
share yet, but it does look like a good way to target the piece we have
been missing: a workflow to connect users, ops, and developers (within
Launchpad, we only care about bugs, and what generated that bug? Well... we
don't care). With story- and task-oriented tracking, things can change (to
me this is shiny).

As for the migration experience, the migration is quick, so unless there is
a project that really, really can only survive with Launchpad, I think
there is no blocker for this goal.

Also, it's quite convenient to find the story for your old bug, since
your story id is your bug id.
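
(A small illustration of that, assuming the public StoryBoard REST API
serves stories at /api/v1/stories/<id>; the bug number is just an example.)

    import requests

    bug_id = 1775759   # example Launchpad bug number
    resp = requests.get(
        'https://storyboard.openstack.org/api/v1/stories/%d' % bug_id)
    if resp.ok:
        story = resp.json()
        print(story.get('id'), story.get('title'))
    else:
        print('no story with that id')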

Since it might be difficult for all projects to migrate to it directly, IMO
we should at least have a potential goal for the T release (or a long-term
goal for Stein?). Or we can directly set this as a Stein goal as well. Why?
Because the very first new Story ID actually started from 2,000,000 (and as
I mentioned, after migrating, your story id is exactly your bug id). So once
we generate a bug with ID 2,000,000, things will become interesting (and
hard to migrate). The current one is 1775759, so one or two years, I guess?

To interpret `might be difficult` above: the overall experience is great,
but some small things should be improved:

   - I can't tell whether the current story has already been reported.
   There is no way to filter stories and check for a conflict if there is
   one.
   - Things get slow if we try to use a Board in StoryBoard to filter a
   great number of stories (like when I need to see all `High Priority`
   tagged stories).
   - It needs better documentation. In Heat we created an Etherpad to
   describe and collect questions on how people can better adopt
   StoryBoard. It would be great if teams could get this information
   directly.

Overall, I think this is a nice goal, and it's actually painless to migrate.


--
May The Force of OpenStack Be With You,
Rico Lin
irc: ricolin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Edge-computing] [edge][glance][mixmatch]: Wiki of the possible architectures for image synchronisation

2018-06-08 Thread Csatari, Gergely (Nokia - HU/Budapest)
Hi,

Going inline.

From: Waines, Greg [mailto:greg.wai...@windriver.com]
Sent: Thursday, June 7, 2018 2:24 PM

I had some additional questions/comments on the Image Synchronization Options ( 
https://wiki.openstack.org/wiki/Image_handling_in_edge_environment ):


One Glance with multiple backends

  *   In this scenario, are all Edge Clouds simply configured with the one 
central glance for its GLANCE ENDPOINT ?
 *   i.e. GLANCE is a typical shared service in a multi-region environment ?

[G0]: In my understanding yes.


  *   If so,
how does this OPTION support the requirement for Edge Cloud Operation when 
disconnected from Central Location ?

[G0]: This is an open question for me also.


Several Glances with an independent synchronization service(PUSH)

  *   I refer to this as the PUSH model
  *   I don’t believe you have to ( or necessarily should) rely on the backend 
to do the synchronization of the images
 *   i.e. the ‘Synch Service’ could do this strictly through Glance REST 
APIs
(making it independent of the particular Glance backend ... and allowing the 
Glance Backends at Central and Edge sites to actually be different)
[G0]: Okay, I can update the wiki to reflect this. Should we keep the 
“synchronization by the backend” option as another alternative?

  *   I think the ‘Synch Service’ MUST be able to support ‘selective/multicast’ 
distribution of Images from Central to Edge for Image Synchronization
 *   i.e. you don’t want Central Site pushing ALL images to ALL Edge Sites 
... especially for the small Edge Sites
[G0]: Yes, the question is how to define these synchronization policies.

  *   Not sure ... but I didn’t think this was the model being used in mixmatch 
... thought mixmatch was more the PULL model (below)

[G0]: Yes, this is more or less my understanding. I remove the mixmatch 
reference from this chapter.

One Glance and multiple Glance API Servers   (PULL)

  *   I refer to this as the PULL model
  *   This is the current model supported in StarlingX’s Distributed Cloud 
sub-project
 *   We run glance-api on all Edge Clouds ... that talk to glance-registry 
on the Central Cloud, and
 *   We have glance-api setup for caching such that only the first access 
to an particular image incurs the latency of the image transfer from Central to 
Edge
[G0]: Do you do image caching in Glance API or do you rely on the image cache 
in Nova? In the Forum session there were some discussions about this and I 
think the conclusion was that using the image cache of Nova is enough.

  *   this PULL model effectively implements the location aware synchronization 
you talk about below (i.e. synchronise images only to those cloud instances 
where they are needed)?


In StarlingX Distributed Cloud,
We plan on supporting both the PUSH and PULL model ... suspect there are use 
cases for both.

[G0]: This means that you need an architecture supporting both.
Just for my curiosity what is the use case for the pull model once you have the 
push model in place?

Here is the updated wiki: 
https://wiki.openstack.org/wiki/Image_handling_in_edge_environment

Thanks,
Gerg0




From: "Csatari, Gergely (Nokia - HU/Budapest)" 
mailto:gergely.csat...@nokia.com>>
Date: Thursday, June 7, 2018 at 6:49 AM
To: 
"openstack-dev@lists.openstack.org" 
mailto:openstack-dev@lists.openstack.org>>, 
"edge-comput...@lists.openstack.org" 
mailto:edge-comput...@lists.openstack.org>>
Subject: Re: [Edge-computing] [edge][glance][mixmatch]: Wiki of the possible 
architectures for image synchronisation

Hi,

I did some work on the figures and realised that I have some questions related 
to the alternative options:

Multiple backends option:

  *   What is the API between Glance and the Glance backends?
  *   How is it possible to implement location aware synchronisation 
(synchronise images only to those cloud instances where they are needed)?
  *   Is it possible to have different OpenStack versions in the different 
cloud instances?
  *   Can a cloud instance use the locally synchronised images in case of a 
network connection break?
  *   Is it possible to implement this without storing database credentials on 
the edge cloud instances?

Independent synchronisation service:

  *   If I understood [1] correctly, 
mixmatch can help Nova attach a remote volume, but it will not help in 
synchronizing the images. Is this true?


As I promised in the Edge Compute Group call I plan to organize an IRC review 
meeting to check the wiki. Please indicate your availability in 
[2].

[1]: https://mixmatch.readthedocs.io/en/latest/
[2]: https://doodle.com/poll/bddg65vyh4qwxpk5

Br,
Gerg0

From: Csatari, Gergely (Nokia - HU/Budapest)
Sent: Wednesday, May 23, 2018 8:59 PM
To: OpenStack Development Mailing List (not for usage questions) 

Re: [openstack-dev] [all][sdk] Integrating OpenStack and k8s with a service broker

2018-06-08 Thread Rico Lin
Thanks, Zane, for putting this up.
It's a great service for exposing infrastructure to applications, and a
potential piece of cross-community work as well.
>
> Would you be interested in working on a new project to implement this
> integration? Reply to this thread and let's collect a list of volunteers
> to form the initial core review team.
>
Glad to help

> I'd prefer to go with the pure-Ansible autogenerated way so we can have
> support for everything, but looking at the GCP[5]/Azure[4]/AWS[3]
> brokers they have 10, 11 and 17 services respectively, so arguably we
> could get a comparable number of features exposed without investing
> crazy amounts of time if we had to write templates explicitly.
>
If we're going to create another project to provide this service, I believe
pure Ansible will indeed be the better option.

Once the service gets stable, it's actually quite easy (at first glance) for
Heat to adopt this (just create a service broker with our new service while
creating a resource, I believe?).
It sounds like the use case for the service broker might be when an
application requests a single resource exposed by the broker, and the
resource dependencies will be relatively simple. We should just keep it
simple: don't start thinking about who created that application and how,
and keep the application out of the dependency graph (I mean, if the user
wants to manage the full set of dependencies, they can consider using Heat
with the service broker once we've integrated it).

--
May The Force of OpenStack Be With You,

Rico Lin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev