Hi Bob

2014/1/9 Robert Kukura <rkuk...@redhat.com>:
> On 01/09/2014 02:34 PM, Nachi Ueno wrote:
>> Hi Doug
>>
>> 2014/1/9 Doug Hellmann <doug.hellm...@dreamhost.com>:
>>>
>>> On Thu, Jan 9, 2014 at 1:53 PM, Nachi Ueno <na...@ntti3.com> wrote:
>>>>
>>>> Hi folks
>>>>
>>>> Thank you for your input.
>>>>
>>>> The key difference from an external configuration system (Chef, Puppet,
>>>> etc.) is integration with OpenStack services.
>>>> There are cases where a process needs to know the config values on other hosts.
>>>> If we had a centralized config storage API, we could solve this issue.
>>>>
>>>> One example of such a case is the neutron + nova VIF parameter
>>>> configuration regarding security groups.
>>>> The workflow is something like this:
>>>>
>>>> nova asks the neutron server for VIF configuration information.
>>>> The neutron server asks for configuration from the neutron l2-agent on
>>>> the same host as nova-compute.
>>>
>>> That extra round trip does sound like a potential performance bottleneck,
>>> but sharing the configuration data directly is not the right solution. If
>>> the configuration setting names are shared, they become part of the
>>> integration API between the two services. Nova should ask neutron how to
>>> connect the VIF, and it shouldn't care how neutron decides to answer that
>>> question. The configuration setting is an implementation detail of neutron
>>> that shouldn't be exposed directly to nova.
>>
>> I agree for the nova - neutron interface.
>> However, the neutron server and neutron l2-agent configurations depend
>> on each other.
>>
>>> Running a configuration service also introduces what could be a single point
>>> of failure for all of the other distributed services in OpenStack. An
>>> out-of-band tool like chef or puppet doesn't result in the same sort of
>>> situation, because the tool does not have to be online in order for the
>>> cloud to be online.
>>
>> We can choose the same implementation
>> (copy the information into a local cache, etc.)
>>
>> Thank you for your input; it helped me organize my thoughts.
>> My proposal can be split into two bps.
>>
>> [BP1] Conf API for other processes
>> Provide a standard way to read the config values of another process on
>> the same host or on another host.
>>
>> - API example:
>> conf.host('host1').firewall_driver
>>
>> - Conf-file-based implementation:
>> The config for each host would be placed here:
>> /etc/project/conf.d/{hostname}/agent.conf
>>
>> [BP2] Multiple backends for storing config files
>>
>> Currently, we have only file-based configuration.
>> In this bp, we extend support for config storage:
>> - KVS
>> - SQL
>> - Chef - Ohai
>
> I'm not opposed to making oslo.config support pluggable back ends, but I
> don't think BP2 could be depended upon to satisfy a requirement for a
> global view of arbitrary config information, since this wouldn't be
> available if a file-based backend were selected.
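To make the BP1 proposal above concrete, here is a minimal sketch of what the `conf.host('host1').firewall_driver` lookup could look like over the per-host file layout `/etc/project/conf.d/{hostname}/agent.conf`. All class names here (`Conf`, `HostConf`) are illustrative assumptions, not part of oslo.config.

```python
# Hypothetical sketch of the BP1 API: conf.host('host1').firewall_driver,
# backed by per-host files under /etc/project/conf.d/{hostname}/agent.conf.
# Names are illustrative only; this is not existing oslo.config API.
import configparser
import os


class HostConf:
    """Read-only view of one host's agent.conf."""

    def __init__(self, path):
        self._parser = configparser.ConfigParser()
        self._parser.read(path)

    def __getattr__(self, name):
        # Expose [DEFAULT] options as attributes: conf.firewall_driver
        try:
            return self._parser.get('DEFAULT', name)
        except configparser.NoOptionError:
            raise AttributeError(name)


class Conf:
    def __init__(self, conf_dir='/etc/project/conf.d'):
        self._conf_dir = conf_dir

    def host(self, hostname):
        return HostConf(os.path.join(self._conf_dir, hostname, 'agent.conf'))
```

Usage would then read another host's option without any RPC round trip, as long as something (Chef, Puppet, rsync) keeps the per-host files in place: `Conf().host('host1').firewall_driver`.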
We can do it even with a file-based backend. Chef or Puppet would copy
some configuration to both the server side and the agent side, and the
server would read the agent configuration stored on the server.

> As far as the neutron server getting info it needs about running L2
> agents, this is currently done via the agents_db RPC, where each agent
> periodically sends certain info to the server and the server stores it
> in the DB for subsequent use. The same mechanism is also used for L3 and
> DHCP agents, and probably for *aaS agents. Some agent config information
> is included, as well as some stats, etc. This mechanism does the job,
> but could be generalized and improved a bit. But I think this flow of
> information is really for specialized purposes - only a small subset of
> the config info is passed, and other info is passed that doesn't come
> from config.

I agree here. We need a generic framework for:

- static config shared between server and agents
- dynamic resource information and updates
- stats and liveness updates

Today, we are re-inventing these frameworks in different processes.

> My only real concern with using this current mechanism is that some of
> the information (stats and liveness) is very dynamic, while other
> information (config) is relatively static. It's a bit wasteful to send
> all of it every couple of seconds, but at least liveness (heartbeat)
> info does need to be sent frequently. BP1 sounds like it could address
> the static part, but I'm still not sure config file info is the only
> relatively static info that might need to be shared. I think neutron can
> stick with its agents_db RPC, DB, and API extension for now, and improve
> it as needed.

I got it. It looks like the community tends not to like this idea, so
it's not good timing to do this in a generic way. Let's work on this in
neutron for now.

Doug, Jeremy, Jay, Greg: thank you for your inputs! I'll obsolete this bp.
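For readers unfamiliar with the agents_db flow described above, here is a rough sketch of the shape of that mechanism: each agent periodically reports a payload mixing static config and dynamic stats, and the server keeps the latest report per host. The names and payload shape are assumptions for illustration, not Neutron's actual agents_db schema or RPC API.

```python
# Illustrative sketch of an agents_db-style reporting flow: agents push a
# periodic state report; the server stores the most recent one per host.
# Class and field names are hypothetical, not Neutron's real schema.
import time


class AgentStateServer:
    """Server side: keep the latest report from each agent."""

    def __init__(self):
        self._agents = {}  # host -> last report

    def report_state(self, host, report):
        # Stamp the report so liveness can be judged from its age.
        report['heartbeat_timestamp'] = time.time()
        self._agents[host] = report

    def get_agent_config(self, host):
        # The relatively static part that, as noted above, is somewhat
        # wasteful to resend with every heartbeat.
        return self._agents[host].get('configurations', {})


def build_report(firewall_driver, num_devices):
    """Agent side: what the periodic task would send over RPC."""
    return {
        'agent_type': 'L2 agent',
        'configurations': {'firewall_driver': firewall_driver},  # static
        'stats': {'devices': num_devices},                       # dynamic
    }
```

The split between `configurations` and `stats` in the payload mirrors the concern in the thread: the static part could be sent once (or on change), while only the heartbeat and stats need the frequent cadence.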
Nachi

> -Bob
>
>> Best
>> Nachi
>>
>>> Doug
>>>
>>>> host1
>>>> neutron server
>>>> nova-api
>>>>
>>>> host2
>>>> neutron l2-agent
>>>> nova-compute
>>>>
>>>> In this case, a process needs to know the config values on the other
>>>> host.
>>>>
>>>> Replying to some questions:
>>>>
>>>>> Adding a config server would dramatically change the way that
>>>>> configuration management tools would interface with OpenStack
>>>>> services. [Jay]
>>>>
>>>> Since this bp just adds a "new mode", we can still use existing config
>>>> files.
>>>>
>>>>> why not help to make Chef or Puppet better and cover the more
>>>>> OpenStack use-cases rather than add yet another competing system
>>>>> [Doug, Morgan]
>>>>
>>>> I believe this is not a competing system.
>>>> The key point is that we should have a standard API to access such
>>>> services.
>>>> As Oleg suggested, we can use an SQL server, a KV store, Chef, or
>>>> Puppet as the backend system.
>>>>
>>>> Best
>>>> Nachi
>>>>
>>>> 2014/1/9 Morgan Fainberg <m...@metacloud.com>:
>>>>> I agree with Doug's question, but also would extend the train of
>>>>> thought to ask why not help to make Chef or Puppet better and cover
>>>>> the more OpenStack use-cases rather than add yet another competing
>>>>> system?
>>>>>
>>>>> Cheers,
>>>>> Morgan
>>>>>
>>>>> On January 9, 2014 at 10:24:06, Doug Hellmann
>>>>> (doug.hellm...@dreamhost.com) wrote:
>>>>>
>>>>> What capabilities would this new service give us that existing,
>>>>> proven, configuration management tools like chef and puppet don't
>>>>> have?
>>>>>
>>>>> On Thu, Jan 9, 2014 at 12:52 PM, Nachi Ueno <na...@ntti3.com> wrote:
>>>>>>
>>>>>> Hi Flavio
>>>>>>
>>>>>> Thank you for your input.
>>>>>> I agree with you. oslo.config isn't the right place for server-side
>>>>>> code.
>>>>>>
>>>>>> How about oslo.configserver?
>>>>>> For authentication, we can reuse keystone auth and oslo.rpc.
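The pluggable-backend idea in BP2 (and Oleg's suggestion above of SQL, KV store, or Chef as the storage) could be sketched as one small interface with interchangeable implementations. This is a hypothetical shape, not a real oslo.config extension point; the class names are made up for illustration.

```python
# Minimal sketch of BP2's pluggable config storage: one read interface,
# with the existing file-based behaviour as just another backend next to
# a key-value-store one. Names are hypothetical, not oslo.config API.
import abc
import configparser


class ConfigBackend(abc.ABC):
    @abc.abstractmethod
    def get(self, key, default=None):
        """Return the value for key, or default if absent."""


class FileBackend(ConfigBackend):
    """Today's behaviour: read options from an ini-style file."""

    def __init__(self, path):
        self._parser = configparser.ConfigParser()
        self._parser.read(path)

    def get(self, key, default=None):
        return self._parser['DEFAULT'].get(key, default)


class KVSBackend(ConfigBackend):
    """Stand-in for a real key-value store client (etcd, Redis, ...)."""

    def __init__(self, store):
        self._store = store  # any mapping-like client

    def get(self, key, default=None):
        return self._store.get(key, default)
```

With such an interface, cfg.CONF-style code would not need to care which backend was configured, which is the point Nachi makes about keeping existing config files usable as just one "mode".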
>>>>>>
>>>>>> Best
>>>>>> Nachi
>>>>>>
>>>>>> 2014/1/9 Flavio Percoco <fla...@redhat.com>:
>>>>>>> On 08/01/14 17:13 -0800, Nachi Ueno wrote:
>>>>>>>>
>>>>>>>> Hi folks
>>>>>>>>
>>>>>>>> OpenStack processes tend to have many config options, and many
>>>>>>>> hosts. It is a pain to manage these tons of config options.
>>>>>>>> Centralizing this management helps operation.
>>>>>>>>
>>>>>>>> We can use tools like Chef or Puppet; however, sometimes each
>>>>>>>> process depends on another process's configuration.
>>>>>>>> For example, nova depends on neutron configuration, etc.
>>>>>>>>
>>>>>>>> My idea is to have a config server in oslo.config, and let
>>>>>>>> cfg.CONF get config from the server.
>>>>>>>> This approach has several benefits:
>>>>>>>>
>>>>>>>> - We can get centralized management without modifications to each
>>>>>>>> project (nova, neutron, etc.)
>>>>>>>> - We can provide a horizon UI for configuration
>>>>>>>>
>>>>>>>> This is the bp for this proposal:
>>>>>>>> https://blueprints.launchpad.net/oslo/+spec/oslo-config-centralized
>>>>>>>>
>>>>>>>> I'd appreciate any comments on this.
>>>>>>>
>>>>>>> I've thought about this as well. I like the overall idea of having a
>>>>>>> config server. However, I don't like the idea of having it within
>>>>>>> oslo.config. I'd prefer oslo.config to remain a library.
>>>>>>>
>>>>>>> Also, I think it would be more complex than just having a server
>>>>>>> that provides the configs. It'll need authentication like all other
>>>>>>> services in OpenStack and perhaps even support for encryption.
>>>>>>>
>>>>>>> I like the idea of a config registry but, as mentioned above, IMHO
>>>>>>> it should live under its own project.
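The single-point-of-failure objection raised earlier in the thread, and the "copy information into a local cache" reply to it, can be sketched as a thin wrapper around whatever client would talk to the config service: lookups go to the service when it is reachable and fall back to the last successfully fetched value when it is not. The wrapper and the fetch callable are both hypothetical.

```python
# Sketch of the local-cache answer to the SPOF concern: cfg.CONF-style
# lookups prefer the config service but serve cached values when the
# service is down, so the cloud can stay online. The fetch callable is a
# placeholder for a real config-service client; nothing here is real API.
class CachedConfig:
    def __init__(self, fetch):
        self._fetch = fetch  # callable: key -> value; may raise on outage
        self._cache = {}

    def get(self, key):
        try:
            value = self._fetch(key)
        except Exception:
            # Config service unreachable: serve the last known value.
            return self._cache[key]
        self._cache[key] = value
        return value
```

This mirrors how out-of-band tools avoid the problem: the authoritative store can be offline without taking the consumers down, at the cost of possibly stale values during the outage.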
>>>>>>>
>>>>>>> That's all I've got for now,
>>>>>>> FF
>>>>>>>
>>>>>>> --
>>>>>>> @flaper87
>>>>>>> Flavio Percoco
>>>>>>>
>>>>>>> _______________________________________________
>>>>>>> OpenStack-dev mailing list
>>>>>>> OpenStack-dev@lists.openstack.org
>>>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev