Re: [openstack-dev] [tripleo] Proposing Bob Fournier as core reviewer

2018-10-22 Thread Saravanan KR
+1

Regards,
Saravanan KR
On Fri, Oct 19, 2018 at 5:53 PM Juan Antonio Osorio Robles
 wrote:
>
> Hello!
>
>
> I would like to propose Bob Fournier (bfournie) as a core reviewer in
> TripleO. His patches and reviews have spanned quite a wide range of our
> project, his reviews show great insight and quality, and I think he would
> be a great addition to the core team.
>
> What do you folks think?
>
>
> Best Regards
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tripleo] Automating role generation

2018-09-25 Thread Saravanan KR
Mutually exclusive services should be decided based on the environment
files used rather than by the list of the services on a role.
For example, the ComputeSriov role can be deployed with ml2-ovs, ml2-odl,
or ml2-ovn based on the environment file used in the deploy command.
Looking forward to it.
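To illustrate the idea, here is a rough sketch of deciding the ML2 backend from the `-e` files passed to the deploy command rather than from a role's service list. The environment file names and the OVS default below are assumptions for illustration, not the actual tripleo-heat-templates paths:

```python
# Hypothetical mapping from environment file names to ML2 backends.
ML2_BACKEND_ENVS = {
    "neutron-ovs.yaml": "ml2-ovs",
    "neutron-opendaylight.yaml": "ml2-odl",
    "neutron-ovn.yaml": "ml2-ovn",
}

def detect_ml2_backend(env_files):
    """Return the single ML2 backend implied by the -e files, or raise."""
    backends = {ML2_BACKEND_ENVS[f] for f in env_files if f in ML2_BACKEND_ENVS}
    if len(backends) > 1:
        # Mutually exclusive backends were enabled together: fail early.
        raise ValueError("mutually exclusive ML2 backends enabled: %s"
                         % ", ".join(sorted(backends)))
    # Assume ml2-ovs as the default when no backend env file is given.
    return backends.pop() if backends else "ml2-ovs"

print(detect_ml2_backend(["network-isolation.yaml", "neutron-ovn.yaml"]))
# -> ml2-ovn
```

With this approach the same ComputeSriov role definition works for any backend; the conflict check happens once, over the whole environment list.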

Regards,
Saravanan KR
On Sat, Sep 22, 2018 at 4:52 AM Janki Chhatbar  wrote:
>
> Hi All
>
> As per the discussion at PTG, I have filed a BP [1]. I will push a spec 
> sometime around mid-October.
>
> [1]. https://blueprints.launchpad.net/tripleo/+spec/automatic-role-generation
>
> On Tue, Sep 4, 2018 at 2:56 PM Steven Hardy  wrote:
>>
>> On Tue, Sep 4, 2018 at 9:48 AM, Jiří Stránský  wrote:
>> > On 4.9.2018 08:13, Janki Chhatbar wrote:
>> >>
>> >> Hi
>> >>
>> >> I am looking to automate role file generation in TripleO. The idea is
>> >> basically for an operator to create a simple yaml file (operator.yaml,
>> >> say)
>> >> listing services that are needed and then TripleO to generate
>> >> Controller.yaml enabling only those services that are mentioned.
>> >>
>> >> For example:
>> >> operator.yaml
>> >> services:
>> >>  Glance
>> >>  OpenDaylight
>> >>  Neutron ovs agent
>> >
>> >
>> > I'm not sure it's worth introducing a new file format as such, if the
>> > purpose is essentially to expand e.g. "Glance" into
>> > "OS::TripleO::Services::GlanceApi" and
>> > "OS::TripleO::Services::GlanceRegistry"? It would be another layer of
>> > indirection (additional mental work for the operator who wants to 
>> > understand
>> > how things work), while the layer doesn't make too much difference in
>> > preparation of the role. At least that's my subjective view.
>> >
>> >>
>> >> Then TripleO should
>> >> 1. Fail because ODL and OVS agent are either-or services
>> >
>> >
>> > +1 i think having something like this would be useful.
>> >
>> >> 2. After operator.yaml is modified to remove Neutron ovs agent, it should
>> >> generate Controller.yaml with below content
>> >>
>> >> ServicesDefault:
>> >> - OS::TripleO::Services::GlanceApi
>> >> - OS::TripleO::Services::GlanceRegistry
>> >> - OS::TripleO::Services::OpenDaylightApi
>> >> - OS::TripleO::Services::OpenDaylightOvs
>> >>
>> >> Currently, the operator has to manually edit the role file (especially
>> >> when deployed with ODL) and I have seen many instances of failing
>> >> deployments due to variations of OVS, OVN and ODL services being
>> >> enabled when they are actually exclusive.
>> >
>> >
>> > Having validations on the service list would be helpful IMO, e.g. "these
>> > services must not be in one deployment together", "these services must not
>> > be in one role together", "these services must be together", "we recommend
>> > this service to be in every role" (i'm thinking TripleOPackages, Ntp, ...)
>> > etc. But as mentioned above, i think it would be better if we worked
>> > directly with the "OS::TripleO::Services..." values rather than a new layer
>> > of proxy-values.
>> >
>> > Additional random related thoughts:
>> >
>> > * Operator should still be able to disobey what the validation suggests, if
>> > they decide so.
>> >
>> > * Would be nice to have the info about particular services (e.g what can't
>> > be together) specified declaratively somewhere (TripleO's favorite thing in
>> > the world -- YAML?).
>> >
>> > * We could start with just one type of validation, e.g. the mutual
>> > exclusivity rule for ODL vs. OVS, but would be nice to have the solution
>> > easily expandable for new rule types.
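The kind of declarative rule file suggested above could look roughly like this sketch. The rule types and service pairings are illustrative, not an existing TripleO schema (and in TripleO the rules would presumably live in YAML rather than Python data):

```python
# Illustrative declarative rules: "exclusive" services must not share a
# role; "recommended" services should appear in every role.
RULES = [
    {"type": "exclusive",
     "services": ["OS::TripleO::Services::OpenDaylightOvs",
                  "OS::TripleO::Services::ComputeNeutronOvsAgent"]},
    {"type": "recommended",
     "services": ["OS::TripleO::Services::Ntp"]},
]

def validate_role(role_services, rules=RULES):
    """Return (errors, warnings) for one role's ServicesDefault list."""
    errors, warnings = [], []
    enabled = set(role_services)
    for rule in rules:
        hits = enabled & set(rule["services"])
        if rule["type"] == "exclusive" and len(hits) > 1:
            errors.append("exclusive services together: %s" % sorted(hits))
        elif rule["type"] == "recommended" and not hits:
            warnings.append("recommended service missing: %s"
                            % rule["services"])
    return errors, warnings
```

Keeping errors separate from warnings matches the point above that the operator should be able to disobey what the validation suggests: warnings inform, only errors block.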
>>
>> This is similar to how the UI uses the capabilities-map.yaml, so
>> perhaps we can use that as the place to describe service dependencies
>> and conflicts?
>>
>> https://github.com/openstack/tripleo-heat-templates/blob/master/capabilities-map.yaml
>>
>> Currently this isn't used at all for the CLI, but I can imagine some
>> kind of wizard interface being useful, e.g you could say enable
>> "Glance" group and it'd automatically pull in all glance dependencies?
>>
>> Anothe

Re: [openstack-dev] [tripleo] VFs not configured in SR-IOV role

2018-09-07 Thread Saravanan KR
Not sure which version you are using, but the service
"OS::TripleO::Services::NeutronSriovHostConfig" is responsible for
setting up VFs. Check if this service is enabled in the deployment.
One of the missing places is being fixed -
https://review.openstack.org/#/c/597985/

Regards,
Saravanan KR
On Tue, Sep 4, 2018 at 8:58 PM Samuel Monderer
 wrote:
>
> Hi,
>
> Attached is the template used to deploy an overcloud with the SR-IOV role.
> The deployment completed successfully but the VFs aren't configured on the
> host.
> Could anyone have a look at what I missed?
>
> Thanks
> Samuel
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] What is the proper way to use NetConfigDataLookup?

2018-07-08 Thread Saravanan KR
Are you using the first-boot script [1] mapped to NodeUserData? If
yes, you could check the logs/errors of the first-boot script
at /var/log/cloud-init-output.log on the overcloud nodes.

Regards,
Saravanan KR

[1] 
https://github.com/openstack/tripleo-heat-templates/blob/e64c10b9c13188f37e6f122475fe02280eaa6686/firstboot/os-net-config-mappings.yaml
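For reference, the matching that the first-boot script performs can be sketched roughly as follows. This is a simplification: the real os-net-config-mappings.yaml logic may differ in details (e.g. it can also match on DMI data), and `build_mapping` is a hypothetical helper, not a function from the script:

```python
# Pick the NetConfigDataLookup entry whose listed MACs match one of this
# node's local interfaces, and emit the os-net-config interface_mapping
# that would be written to /etc/os-net-config/mapping.yaml.
def build_mapping(lookup, local_macs):
    local = {m.lower() for m in local_macs}
    for node, nics in lookup.items():
        if any(mac.lower() in local for mac in nics.values()):
            return {"interface_mapping": dict(nics)}
    return None  # no entry matched this node; no mapping.yaml is written

lookup = {"control1": {"nic1": "5c:f3:fc:36:dd:68"},
          "compute2": {"nic1": "00:0a:f7:73:3c:c0"}}
print(build_mapping(lookup, ["00:0A:F7:73:3C:C0"]))
# -> {'interface_mapping': {'nic1': '00:0a:f7:73:3c:c0'}}
```

If no mapping.yaml appears on the node, the first check is therefore whether any of the configured MACs actually belong to that node's interfaces.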
On Fri, Jul 6, 2018 at 9:53 PM Mark Hamzy  wrote:
>
> What is the proper way to use NetConfigDataLookup?  I tried the following:
>
> (undercloud) [stack@oscloud5 ~]$ cat << '__EOF__' > 
> ~/templates/mapping-info.yaml
> parameter_defaults:
>   NetConfigDataLookup:
>   control1:
> nic1: '5c:f3:fc:36:dd:68'
> nic2: '5c:f3:fc:36:dd:6c'
> nic3: '6c:ae:8b:29:27:fa' # 9.114.219.34
> nic4: '6c:ae:8b:29:27:fb' # 9.114.118.???
> nic5: '6c:ae:8b:29:27:fc'
> nic6: '6c:ae:8b:29:27:fd'
>   compute1:
> nic1: '6c:ae:8b:25:34:ea' # 9.114.219.44
> nic2: '6c:ae:8b:25:34:eb'
> nic3: '6c:ae:8b:25:34:ec' # 9.114.118.???
> nic4: '6c:ae:8b:25:34:ed'
>   compute2:
> nic1: '00:0a:f7:73:3c:c0'
> nic2: '00:0a:f7:73:3c:c1'
> nic3: '00:0a:f7:73:3c:c2' # 9.114.118.156
> nic4: '00:0a:f7:73:3c:c3' # 9.114.112.???
> nic5: '00:0a:f7:73:73:f4'
> nic6: '00:0a:f7:73:73:f5'
> nic7: '00:0a:f7:73:73:f6' # 9.114.219.134
> nic8: '00:0a:f7:73:73:f7'
> __EOF__
> (undercloud) [stack@oscloud5 ~]$ openstack overcloud deploy --templates -e 
> ~/templates/node-info.yaml -e ~/templates/mapping-info.yaml -e 
> ~/templates/overcloud_images.yaml -e 
> ~/templates/environments/network-environment.yaml -e 
> ~/templates/environments/network-isolation.yaml -e 
> ~/templates/environments/config-debug.yaml --disable-validations --ntp-server 
> pool.ntp.org --control-scale 1 --compute-scale
>
> But I did not see a /etc/os-net-config/mapping.yaml get created.
>
> Also is this configuration used when the system boots IronicPythonAgent to 
> provision the disk?
>
> --
> Mark
>
> You must be the change you wish to see in the world. -- Mahatma Gandhi
> Never let the future disturb you. You will meet it, if you have to, with the 
> same weapons of reason which today arm you against the present. -- Marcus 
> Aurelius
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Proposing Alan Bishop tripleo core on storage bits

2018-06-13 Thread Saravanan KR
+1

Regards,
Saravanan KR

On Wed, Jun 13, 2018 at 9:20 PM, Emilien Macchi  wrote:
> Alan Bishop has been highly involved in the Storage backends integration in
> TripleO and Puppet modules, always here to update with new features, fix
> (nasty and untestable third-party backends) bugs and manage all the
> backports for stable releases:
> https://review.openstack.org/#/q/owner:%22Alan+Bishop+%253Cabishop%2540redhat.com%253E%22
>
> He's also well knowledgeable of how TripleO works and how containers are
> integrated, I would like to propose him as core on TripleO projects for
> patches related to storage things (Cinder, Glance, Swift, Manila, and
> backends).
>
> Please vote -1/+1,
> Thanks!
> --
> Emilien Macchi
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] Using derive parameters workflow for FixedIPs

2018-05-24 Thread Saravanan KR
As discussed on IRC, here is the outline:

* The derive parameters workflow could be used for deriving the FixedIPs
parameters as well (started as part of the review
https://review.openstack.org/#/c/569818/)
* The above derivation should be done for all deployments, so invoking
derive parameters should be moved outside the "-p" option check
* Invoking the NFV and HCI formulas should still be based on the user's
option. Either add a condition using the existing workflow_parameter of
the feature [or] introduce a workflow_parameter to control the user
preference
* In the derive params workflow, we need to separate out whether we
need introspection data or not. Based on user preference and feature
presence, add checks to see if introspection data is required. If we
don't do this, introspection will become mandatory for all deployments.
* Merging of parameters will be the same as it is today, with preference
given to user-provided parameters
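The last merge step above can be sketched trivially; parameter names and values here are examples only:

```python
# Derived values fill in defaults, but anything the user set explicitly
# wins -- the precedence rule described in the outline.
def merge_params(derived, user_provided):
    merged = dict(derived)
    merged.update(user_provided)  # user-provided parameters take precedence
    return merged

derived = {"NovaReservedHostMemory": 4096, "OvsDpdkCoreList": "0,1"}
user = {"NovaReservedHostMemory": 2048}
print(merge_params(derived, user))
# -> {'NovaReservedHostMemory': 2048, 'OvsDpdkCoreList': '0,1'}
```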

Future Enhancement

* Instead of using plan-environment.yaml, write the derived parameters
to a separate environment file, and add it to the environments list of
plan-environment.yaml to allow heat merging to work
https://review.openstack.org/#/c/448209

Regards,
Saravanan KR

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] cannot configure host kernel-args for pci passthrough with first-boot

2018-05-21 Thread Saravanan KR
Could you check the log in the /var/log/cloud-init-output.log file to
see which first-boot scripts were executed on the node?
Add "set -x" in the kernel-args.sh file to get better logs.

Regards,
Saravanan KR

On Tue, May 22, 2018 at 12:49 AM, Samuel Monderer
<smonde...@vasonanetworks.com> wrote:
> Hi,
>
> I'm trying to build a new OS environment with RHOSP 11 with a compute node
> that has a GPU card.
> I've added a new role and a firstboot template to configure the kernel args
> to allow pci-passthrough.
> For some reason the firstboot is not working (can't see the changes on the
> compute node)
> Attached are the templates I used to deploy the environment.
>
> I used the same configuration I used for a compute role with sr-iov and it
> worked there.
> Could someone tell me what I missed?
>
> Regards,
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo][nova][neutron] changing the default qemu group in tripleo

2018-02-11 Thread Saravanan KR
Hello,

With OvS 2.8, the USER and GROUP in which ovs will run have been changed
to openvswitch:openvswitch (for regular ovs builds) and
openvswitch:hugetlbfs (for DPDK-enabled ovs builds). Since the fedora
family always has DPDK-enabled builds, all TripleO deployments
will have OvS running as openvswitch:hugetlbfs.

For DPDK, qemu should also run with the same group "hugetlbfs" so that
the vhost sockets can be shared between qemu and openvswitch. So we
are making the change to set "group" in /etc/libvirt/qemu.conf to
"hugetlbfs" for DPDK deployments. And it is all working fine.
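For illustration, the single-line change under discussion could be scripted like this. This is purely a hedged sketch — real TripleO deployments apply the setting through puppet/hiera, and `set_qemu_group` is a hypothetical helper:

```python
import re

# Set the libvirt "group" option in qemu.conf text so qemu and ovs-dpdk
# can share vhost sockets. qemu.conf uses 'key = "value"' lines, with the
# defaults shipped commented out (e.g. '#group = "root"').
def set_qemu_group(conf_text, group="hugetlbfs"):
    line = 'group = "%s"' % group
    # Replace an existing (possibly commented-out) group line, else append.
    new, n = re.subn(r'(?m)^#?\s*group\s*=.*$', line, conf_text, count=1)
    return new if n else conf_text.rstrip("\n") + "\n" + line + "\n"

sample = 'user = "qemu"\n#group = "root"\n'
print(set_qemu_group(sample))
```

The key point is that the change is a single option; everything else in qemu.conf keeps its package default.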

Now the question is - should we make qemu run with same group for all
the nodes of the deployment [or] only the nodes which have DPDK
enabled?

It is possible for the DPDK nodes to host non-DPDK VMs (like SR-IOV or
regular tenant VMs), so all VMs will be running with the "qemu:hugetlbfs"
user and group. To avoid the conflicts of running different groups on
different roles of a TripleO deployment, I prefer to update the qemu
group to "hugetlbfs" for all the nodes of all roles if DPDK is
enabled in the deployment.

Let us know if you see any issues with this approach.

Regards,
Saravanan KR

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] configuring qemu.conf using puppet or ansible

2017-11-28 Thread Saravanan KR
On Fri, Nov 24, 2017 at 10:09 PM, Alex Schultz <aschu...@redhat.com> wrote:
> On Fri, Nov 24, 2017 at 5:03 AM, Saravanan KR <skram...@redhat.com> wrote:
>> Hello,
>>
>> For dpdk in ovs2.8, the default permission of vhost user ports has
>> been changed from root:root to openvswitch:hugetlbfs. The vhost user
>> ports are shared between ovs and libvirt (qemu). More details in BZ
>> [1].
>>
>> The "group" option in /etc/libvirt/qemu.conf [2] needs to be set to
>> "hugetlbfs" for the vhost ports to be shared between ovs and libvirt.
>> In order to configure qemu.conf, I could think of multiple options:
>>
>> * By using the puppet-libvirt[3] module; but this module alters a lot
>> of configuration in qemu.conf as it tries to rewrite the complete
>> qemu.conf file. We might end up with a different version of the conf
>> file altogether, as we might override the package defaults depending
>> on the package version used.
>>
>
> We currently do not use puppet-libvirt and qemu settings are managed
> via puppet-nova with augeas[0][1].

Thanks Alex for this pointer.
>
>> * The other possibility is to configure the qemu.conf file directly
>> using the "ini_setting" module like [4].
>> * Considering the move towards ansible, I would prefer that we add
>> ansible-based configuration along with docker-puppet for any new
>> modules going forward. But I am not sure of the direction.
>>
>
> So you could use ansible provided that the existing settings are not
> managed via another puppet module. The problem with mixing both puppet
> and ansible is ensuring that only one owns the thing being touched.
> Since we use augeas in puppet-nova, this should not conflict with the
> usage of ini_setting with ansible.  Unfortunately libvirt is not
> currently managed as a standalone service so perhaps it's time to
> evaluate how we configure it since multiple services (nova/ovs) need
> to factor into it's configuration.
>
I was under the assumption that a new puppet module needed to be
included for it, which made me drift towards ansible. Since we are
configuring qemu.conf via puppet-nova (hiera data), I don't want to
create an intermediary step with ansible ini_setting, as the final
goal would be to create ansible-role-k8s-novalibvirt, which will
configure qemu.conf. I will stick with the existing puppet approach,
unless I find a solid reason to switch.

Regards,
Saravanan KR

> Thanks,
> -Alex
>
> [0] 
> https://github.com/openstack/puppet-nova/blob/30f9d47ec43519599f63f8a6f8da43b7dcb86242/manifests/compute/libvirt/qemu.pp
> [1] 
> https://github.com/openstack/puppet-nova/blob/9b98e3b0dee5f103c9fa32b37ff1a29df4296957/manifests/migration/qemu.pp
>
>> Prefer the feedback before proceeding with an approach.
>>
>> Regards,
>> Saravanan KR
>>
>> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1515269
>> [2]  https://github.com/libvirt/libvirt/blob/master/src/qemu/qemu.conf#L412
>> [3] https://github.com/thias/puppet-libvirt
>> [4] https://review.openstack.org/#/c/522796/1/manifests/profile/base/dpdk.pp
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] configuring qemu.conf using puppet or ansible

2017-11-24 Thread Saravanan KR
Hello,

For dpdk in ovs2.8, the default permission of vhost user ports has
been changed from root:root to openvswitch:hugetlbfs. The vhost user
ports are shared between ovs and libvirt (qemu). More details in BZ
[1].

The "group" option in /etc/libvirt/qemu.conf [2] needs to be set to
"hugetlbfs" for the vhost ports to be shared between ovs and libvirt.
In order to configure qemu.conf, I could think of multiple options:

* By using the puppet-libvirt[3] module; but this module alters a lot
of configuration in qemu.conf as it tries to rewrite the complete
qemu.conf file. We might end up with a different version of the conf
file altogether, as we might override the package defaults depending
on the package version used.

* The other possibility is to configure the qemu.conf file directly
using the "ini_setting" module like [4].

* Considering the move towards ansible, I would prefer that we add
ansible-based configuration along with docker-puppet for any new
modules going forward. But I am not sure of the direction.

I would prefer feedback before proceeding with an approach.

Regards,
Saravanan KR

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1515269
[2]  https://github.com/libvirt/libvirt/blob/master/src/qemu/qemu.conf#L412
[3] https://github.com/thias/puppet-libvirt
[4] https://review.openstack.org/#/c/522796/1/manifests/profile/base/dpdk.pp

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Request for input: scaling the number of Ceph clusters deployed in the overcloud

2017-11-21 Thread Saravanan KR
We had a similar kind of requirement to differentiate parameters
between overcloud compute nodes: for example, a cluster with both Dell
and HP machines has different hardware layouts, and DPDK requires the
specific CPU information of a hardware layout to function effectively.

We addressed it by using different roles and role-specific[1]
parameters. There are 2 roles for compute: ComputeOvsDpdkHP and
ComputeOvsDpdkDell. Using role-specific parameters, the parameters
are targeted to the specific role of a service. The dpdk service files
[2] use this format to merge the parameters.

Regards,
Saravanan KR

[1] 
https://docs.openstack.org/tripleo-docs/latest/install/advanced_deployment/role_specific_parameters.html
[2] 
https://github.com/openstack/tripleo-heat-templates/blob/master/puppet/services/neutron-ovs-dpdk-agent.yaml#L59
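The role-specific resolution described above can be sketched as follows. The parameter values are made up, and the helper is hypothetical — the real merging happens inside the heat service templates referenced in [2]:

```python
# Resolve the effective parameters for one role: role-specific values
# override the global default for the role they are scoped to.
def resolve_for_role(global_params, role_specific, role_name):
    resolved = dict(global_params)
    resolved.update(role_specific.get(role_name, {}))
    return resolved

global_params = {"OvsPmdCoreList": "2,3"}
role_specific = {
    "ComputeOvsDpdkHP":   {"OvsPmdCoreList": "2,3,22,23"},
    "ComputeOvsDpdkDell": {"OvsPmdCoreList": "4,5,24,25"},
}
print(resolve_for_role(global_params, role_specific, "ComputeOvsDpdkDell"))
# -> {'OvsPmdCoreList': '4,5,24,25'}
```

This is why the approach scales to mixed-hardware clusters: the same service template is reused, and only the per-role parameter map differs.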

On Tue, Nov 21, 2017 at 4:46 PM, Giulio Fidente <gfide...@redhat.com> wrote:
> Hi,
>
> we're currently exploring ways to deploy multiple Ceph clusters in the
> overcloud.
>
> Given Ceph is now managed by a ceph-ansible playbook, we can "easily"
> deploy multiple Ceph clusters running multiple times the playbook with
> different parameters and inventory.
>
>
> The initial idea to make this consumable in TripleO has been to have
> jinja add a prefix to the Ceph service names and its parameters, and let
> the user build custom roles (deploying on each a different instance of
> the Ceph service) to distribute the Ceph services as needed on any
> arbitrary role.
>
> The benefits of the above approach are that daemons of different Ceph
> clusters can be colocated on the same node and that operators continue
> to customize any Ceph parameter using heat environment files as they
> used to (they just add the jinja prefix to the parameter name).
>
> The cons are that we'd need to scale (hence use jinja) also for other
> services, like Cinder or Nova because the Ceph parameters can be
> consumed by those too.
>
>
> An alternate proposal has been to tag the roles, bound the Ceph cluster
> to a tag to build the inventory and use role-specific settings so that
> instances of the Ceph services deployed on a role would get different
> parameters based on the role they run on.
>
> The most important benefit that I can see of the above approach is that
> it is a lot less intrusive as it does not require jinja processing of
> the templates but I think I do not understand fully how the
> implementation would look like so I was curious if there are examples in
> tree of anything similar?
>
> I would also like to know if other people are interested in this same
> functionality so that we can come up with a more generalized solution?
>
> Last but not least, I would like to hear more input, ideas and feedback
> to see if there are more ways of doing this!
>
> Thanks for the feedback
> --
> Giulio Fidente
> GPG KEY: 08D733BA
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tripleo] Composable role OVS-DPDK compute node with single NIC

2017-11-21 Thread Saravanan KR
On Wed, Nov 22, 2017 at 2:29 AM, Ben Nemec <openst...@nemebean.com> wrote:
> Thanks.  Unfortunately I don't see anything obviously wrong with that, but
> I'm not a DPDK expert either.  Hopefully one of our networking gurus can
> chime in and comment on whether this should work.
>
> On 11/21/2017 02:01 PM, Samuel Monderer wrote:
>>
>> http://paste.openstack.org/show/626557/
>>
>> On Tue, Nov 21, 2017 at 8:22 PM Ben Nemec <openst...@nemebean.com> wrote:
>>
>> Your configuration lost all of its indentation, which makes it
>> extremely
>> difficult to read.  Can you try sending it a different way, maybe
>> paste.openstack.org?
>>
>>
>> On 11/16/2017 02:43 AM, Samuel Monderer wrote:
>>  > Hi,
>>  >
>>  > I managed to deploy a compute node with ovs-dpdk using two NICs.
>> The
>>  > first for the provisioning network and control plane, the other
>> NIC is
>>  > used tenant network over ovs-dpdk.
>>  >
>>  > I then tried to use only a single nic for provisioning and
>> ovs-dpdk.
This is not a recommended way of using DPDK.

>>  > I used the nic configuration below for the compute nodes running
>>  > ovs-dpdk but encountered two problems.
>>  > First the tenant network wasn't working (I wasn't able to get DHCP
>>  > running, and even when I configured it manually it wasn't able to
>>  > reach the router)
I am assuming that your dpdk ports are active (verified by the
ovs-vsctl show command). The only usecases that we have validated are
using DPDK as a provider network and DPDK as a tenant network [1],
which have the tenant and vlan configured directly on the bridge (this
configuration was recommended by the ovs team).

>>  > Second, the default route on the control plane is not set even though
>>  > it is configured in /etc/sysconfig/network-scripts/route-br-ex
>>  >
I am not sure if there is any difference between system and netdev
bridges in setting the routes, but you could validate it by adding logs
to the ifup scripts and tracing why the route is not applied.

Regards,
Saravanan KR

[1] http://paste.openstack.org/show/627030/

>>  > Samuel
>>  >
>>  > OsNetConfigImpl:
>>  >   type: OS::Heat::StructuredConfig
>>  >   properties:
>>  >     group: os-apply-config
>>  >     config:
>>  >       os_net_config:
>>  >         network_config:
>>  >           -
>>  >             type: ovs_user_bridge
>>  >             name: {get_input: bridge_name}
>>  >             use_dhcp: false
>>  >             dns_servers: {get_param: DnsServers}
>>  >             addresses:
>>  >               -
>>  >                 ip_netmask:
>>  >                   list_join:
>>  >                     - '/'
>>  >                     - - {get_param: ControlPlaneIp}
>>  >                       - {get_param: ControlPlaneSubnetCidr}
>>  >             routes:
>>  >               -
>>  >                 ip_netmask: 169.254.169.254/32
>>  >                 next_hop: {get_param: EC2MetadataIp}
>>  >               -
>>  >                 default: true
>>  >                 next_hop: {get_param: ControlPlaneDefaultRoute}
>>  >             members:
>>  >               -
>>  >                 type: ovs_dpdk_port
>>  >                 name: dpdk0
>>  >                 members:
>>  >                   -
>>  >                     type: interface
>>  >                     name: nic1
>>  >               -
>>  >                 type: vlan
>>  >                 vlan_id: {get_param: InternalApiNetworkVlanID}
>>  >                 addresses:
>>  >                   -
>>  >                     ip_netmask: {get_param: InternalApiIpSubnet}
>>  >               -
>>  >                 type: vlan
>>  >                 vlan_id: {get_param: TenantNetworkVlanID}
>>  >                 addresses:
>>  >                   -
>>  >                     ip_netmask: {get_param: TenantIpSubnet}
>>  >
>>  >
>>  >
>>
>> __
>>  > OpenStack Development Mailing List (not for usage questions)
>>  > Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>  > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>  >
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] Tagging Parameters

2017-11-16 Thread Saravanan KR
Hi,

A new attribute "tags" has been added to the template parameters in
heat [1]. By adding this, we can categorize the parameters based on
features, which can be used for validations. Currently, I am working
on a patch [2] which will add the "role_specific" tag to the
parameters that are accepted as role-specific. The next step is to
add a validation for role-specific parameters.

Regards,
Saravanan KR

[1] https://review.openstack.org/#/c/506133/
[2] https://review.openstack.org/#/c/517231/
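The planned validation could look roughly like this. The parameter schema below is a simplification of what the heat "tags" attribute exposes, and the helper name is an illustration, not code from the patches:

```python
# Flag parameters that are used role-specifically but are not tagged
# "role_specific" in their template definition.
def check_role_specific(parameters, role_specific_usage):
    allowed = {name for name, defn in parameters.items()
               if "role_specific" in defn.get("tags", [])}
    return sorted(set(role_specific_usage) - allowed)

params = {
    "OvsPmdCoreList": {"type": "string", "tags": ["role_specific"]},
    "NtpServer": {"type": "string"},  # no role_specific tag
}
print(check_role_specific(params, ["OvsPmdCoreList", "NtpServer"]))
# -> ['NtpServer']
```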

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Next steps for pre-deployment workflows (e.g derive parameters)

2017-11-14 Thread Saravanan KR
As discussed in IRC, I have collated all the important discussions
into the etherpad (the gdoc was not publicly shareable).

https://etherpad.openstack.org/p/tripleo-derive-parameters-v2

Let's continue the discussion on the etherpad to finalize.

Regards,
Saravanan KR

On Thu, Nov 9, 2017 at 11:05 AM, Saravanan KR <skram...@redhat.com> wrote:
> On Thu, Nov 9, 2017 at 2:57 AM, Jiri Tomasek <jtoma...@redhat.com> wrote:
>>
>>
>> On Wed, Nov 8, 2017 at 6:09 AM, Steven Hardy <sha...@redhat.com> wrote:
>>>
>>> Hi all,
>>>
>>> Today I had a productive hallway discussion with jtomasek and
>>> stevebaker re $subject, so I wanted to elaborate here for the benefit
>>> of those folks not present.  Hopefully we can get feedback on the
>>> ideas and see if it makes sense to continue and work on some patches:
>>>
>>> The problem under discussion is how do we run pre-deployment workflows
>>> (such as those integrated recently to calculate derived parameters,
>>> and in future perhaps also those which download container images etc),
>>> and in particular how do we make these discoverable via the UI
>>> (including any input parameters).
>>>
>>> The idea we came up with has two parts:
>>>
>>> 1. Add a new optional section to roles_data for services that require
>>> pre-deploy workflows
>>>
>>> E.g something like this:
>>>
>>>  pre_deploy_workflows:
>>> - derive_params:
>>>   workflow_name:
>>> tripleo.derive_params_formulas.v1.dpdk_derive_params
>>>   inputs:
>>>   ...
>>>
>>> This would allow us to associate a specific mistral workflow with a
>>> given service template, and also work around the fact that currently
>>> mistral inputs don't have any schema (only key/value input) as we
>>> could encode the required type and any constraints in the inputs block
>>> (clearly this could be removed in future should typed parameters
>>> become available in mistral).
>>>
>>> 2. Add a new workflow that calculates the enabled services and returns
>>> all pre_deploy_workflows
>>>
>>> This would take all enabled environments, then use heat to validate
>>> the configuration and return the merged resource registry (which will
>>> require https://review.openstack.org/#/c/509760/), then we would
>>> iterate over all enabled services in the registry and extract a given
>>> roles_data key (e.g pre_deploy_workflows)
>>>
>>> The result of the workflow would be a list of all pre_deploy_workflows
>>> for all enabled services, which the UI could then use to run the
>>> workflows as part of the pre-deploy process.
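The second step in this proposal could be sketched as follows. The data layout mirrors the roles_data snippet above, but the merged resource registry and the per-service outputs are faked here for illustration:

```python
# Faked per-service data: what each service template would expose in its
# (proposed) pre_deploy_workflows section.
service_outputs = {
    "OS::TripleO::Services::ComputeNeutronOvsDpdk": {
        "pre_deploy_workflows": [
            {"derive_params": {
                "workflow_name":
                    "tripleo.derive_params_formulas.v1.dpdk_derive_params",
                "inputs": {}}}]},
    "OS::TripleO::Services::GlanceApi": {},  # no pre-deploy workflow
}

def collect_pre_deploy_workflows(merged_registry):
    """Walk the enabled services and gather every pre_deploy_workflows entry."""
    found = []
    for service in merged_registry:  # enabled services from the registry
        found.extend(service_outputs.get(service, {})
                     .get("pre_deploy_workflows", []))
    return found

wfs = collect_pre_deploy_workflows(list(service_outputs))
print([list(w)[0] for w in wfs])
# -> ['derive_params']
```

The UI (or CLI) would then run each collected workflow, using the inputs block as a stand-in schema until mistral grows typed inputs.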
>>
>>
>> As I think about this more, we may find out that matching a service to a
>> workflow is not enough, as a workflow may require several services
>> (together defining a feature). So maybe doing it in a separate file would
>> help. E.g.
>> pre-deploy-workflows.yaml
>> - name: my.workflow
>>   services: a, b, c, d
>>
>> Maybe there is a better way, maybe this is not even needed. I am not sure.
>> What do you think?
>
> Currently, HCI derive parameters workflow is invoked if a role has
> both NovaCompute and CephOSD services enabled.
>
>>
>>
>> What I really like about this proposal is that it provides a standard way to
>> configure deployment features and provides clear means to add additional
>> such configurations.
>>
>> The resulting deployment configuration steps in GUI would look following:
>>
>> 1/ Hardware (reg. nodes, introspect etc)
>>
>> 2/ High level deployment configuration (basically selecting additional
>> environment files)
>>
>> 3/ Roles management (Roles selection, roles -> nodes assignment, roles
>> configuration - setting roles_data properties)
>>
>> 4/ Network configuration -  network configuration wizard: (I'll describe
>> this in separate email)
>>
>> 5/ Deployment Features configuration (This proposal) - a list of features to
>> configure, the list is nicely generated from information provided in
>> previous steps, user has all the information to configure those features at
>> hand and can go through these step by step.
>
> Agreed on the UI workflow.
>
> For DPDK and SR-IOV, there are common host specific parameters to be
> derived. It has been added as a separate host-specific parameters
> workflow. And both DPDK and SR-IOV workflow execution should follow
> host-spe

Re: [openstack-dev] [tripleo] Proposing John Fulton core on TripleO

2017-11-08 Thread Saravanan KR
+1

Regards,
Saravanan KR

On Thu, Nov 9, 2017 at 11:40 AM, Emilien Macchi <emil...@redhat.com> wrote:
> Of course +1, thanks for your hard work! Stay awesome.
> ---
> Emilien Macchi
>
> On Nov 9, 2017 4:58 PM, "Marios Andreou" <mandr...@redhat.com> wrote:
>>
>>
>>
>> On Thu, Nov 9, 2017 at 12:24 AM, Giulio Fidente <gfide...@redhat.com>
>> wrote:
>>>
>>> Hi,
>>>
>>> I would like to propose John Fulton core on TripleO.
>>>
>>> I think John did awesome work during the Pike cycle around the
>>> integration of ceph-ansible as a replacement for puppet-ceph, for the
>>> deployment of Ceph in containers.
>>>
>>> I think John has a good understanding of many different parts of TripleO
>>> given that the ceph-ansible integration has been a complicated effort
>>> involving changes in heat/tht/mistral workflows/ci and last but not
>>> least, docs and he is more recently getting busier with reviews outside
>>> his main comfort zone.
>>>
>>> I am sure John would be a great addition to the team and I welcome him
>>> first to tune into radioparadise with the rest of us when joining
>>> #tripleo
>>>
>>> Feedback is welcomed!
>>
>>
>> +1
>>
>>>
>>> --
>>> Giulio Fidente
>>> GPG KEY: 08D733BA
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Next steps for pre-deployment workflows (e.g derive parameters)

2017-11-08 Thread Saravanan KR
On Thu, Nov 9, 2017 at 2:57 AM, Jiri Tomasek <jtoma...@redhat.com> wrote:
>
>
> On Wed, Nov 8, 2017 at 6:09 AM, Steven Hardy <sha...@redhat.com> wrote:
>>
>> Hi all,
>>
>> Today I had a productive hallway discussion with jtomasek and
>> stevebaker re $subject, so I wanted to elaborate here for the benefit
>> of those folks not present.  Hopefully we can get feedback on the
>> ideas and see if it makes sense to continue and work on some patches:
>>
>> The problem under discussion is how do we run pre-deployment workflows
>> (such as those integrated recently to calculate derived parameters,
>> and in future perhaps also those which download container images etc),
>> and in particular how do we make these discoverable via the UI
>> (including any input parameters).
>>
>> The idea we came up with has two parts:
>>
>> 1. Add a new optional section to roles_data for services that require
>> pre-deploy workflows
>>
>> E.g something like this:
>>
>>  pre_deploy_workflows:
>> - derive_params:
>>   workflow_name:
>> tripleo.derive_params_formulas.v1.dpdk_derive_params
>>   inputs:
>>   ...
>>
>> This would allow us to associate a specific mistral workflow with a
>> given service template, and also work around the fact that currently
>> mistral inputs don't have any schema (only key/value input) as we
>> could encode the required type and any constraints in the inputs block
>> (clearly this could be removed in future should typed parameters
>> become available in mistral).
>>
>> 2. Add a new workflow that calculates the enabled services and returns
>> all pre_deploy_workflows
>>
>> This would take all enabled environments, then use heat to validate
>> the configuration and return the merged resource registry (which will
>> require https://review.openstack.org/#/c/509760/), then we would
>> iterate over all enabled services in the registry and extract a given
>> roles_data key (e.g pre_deploy_workflows)
>>
>> The result of the workflow would be a list of all pre_deploy_workflows
>> for all enabled services, which the UI could then use to run the
>> workflows as part of the pre-deploy process.
>
>
> As I think about this more, we may find out that matching a service to
> workflow is not enough as workflow may require several services (together
> defining a feature) So maybe doing it in separate file would help. E.g.
>
> pre-deploy-workflows.yaml
> - name: my.workflow
>   services: a, b, c, d
>
> Maybe there is a better way, maybe this is not even needed. I am not sure.
> What do you think?

Currently, the HCI derive parameters workflow is invoked if a role has
both the NovaCompute and CephOSD services enabled.

>
>
> What I really like about this proposal is that it provides a standard way to
> configure deployment features and provides clear means to add additional
> such configurations.
>
> The resulting deployment configuration steps in GUI would look following:
>
> 1/ Hardware (reg. nodes, introspect etc)
>
> 2/ High level deployment configuration (basically selecting additional
> environment files)
>
> 3/ Roles management (Roles selection, roles -> nodes assignment, roles
> configuration - setting roles_data properties)
>
> 4/ Network configuration -  network configuration wizard: (I'll describe
> this in separate email)
>
> 5/ Deployment Features configuration (This proposal) - a list of features to
> configure, the list is nicely generated from information provided in
> previous steps, user has all the information to configure those features at
> hand and can go through these step by step.

Agreed on the UI workflow.

For DPDK and SR-IOV, there are common host specific parameters to be
derived. It has been added as a separate host-specific parameters
workflow. And both DPDK and SR-IOV workflow execution should follow
host-specific workflow.
In the case of DPDK and HCI in the same role, the DPDK workflow is
expected to be executed before HCI, and the service configuration
should provide this order to the UI.
I am not able to see how this information will be provided to the user
and processed in the UI. Do you have a UI wireframe for this
workflow?

>
> 6/ Advanced deployment config - a view providing a way to review
> Environment/Roles/Services parameters, search and tweak them if needed.
>
> 7/ Deploy.
>
> I believe these steps should cover anything we should need to do for
> deployment configuration.
>
> -- Jirka
>
>
>>
>>
>> If this makes sense I can go ahead and push some patches so we can
>> iterate on the i

Re: [openstack-dev] [TripleO] Next steps for pre-deployment workflows (e.g derive parameters)

2017-11-08 Thread Saravanan KR
Thanks Steven for the update.

Current CLI flow:
--
* User needs to add the -p parameter to the overcloud deploy command with
the workflows to be invoked [1]
* Plan will be updated to the swift container
* Derived parameters workflow is initiated
- For each role
* Get the introspection data of the first node assigned to the role
* Find the list of features based on the services or parameters
* If dpdk is present, run the dpdk formulas workflow
* If sriov is present, run the sriov formulas workflow (under development)
* If sriov or dpdk is present, run the host formulas workflow
* If hci is present, run the hci formulas workflow

Here the order of formulas workflow invocation is important. For
example, in a Compute-DPDK-HCI role, the HCI formulas should exclude
the CPUs allocated for DPDK PMD threads while calculating the cpu
allocation ratio.
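For illustration, the per-role feature detection and ordered dispatch described above could be sketched roughly like this. The service and workflow names here are made up for readability; the real logic lives in the tripleo.derive_params Mistral workflows, so treat this only as a sketch of the ordering rules from this thread:

```python
# Rough sketch of per-role feature detection and ordered workflow
# dispatch. Service/workflow names are illustrative, not the actual
# Mistral identifiers.

def detect_features(role_services):
    """Return the set of derive-params features enabled for a role."""
    features = set()
    if "neutron-ovs-dpdk-agent" in role_services:
        features.add("dpdk")
    if "neutron-sriov-agent" in role_services:
        features.add("sriov")
    # HCI: compute and Ceph OSD services on the same role
    if {"nova-compute", "ceph-osd"} <= set(role_services):
        features.add("hci")
    return features

def workflows_for_role(role_services):
    """Build the ordered workflow list for one role.

    Ordering matters: host params follow DPDK/SR-IOV, and HCI runs
    last so it can exclude CPUs already allocated to DPDK PMD threads.
    """
    features = detect_features(role_services)
    workflows = []
    if "dpdk" in features:
        workflows.append("dpdk_derive_params")
    if "sriov" in features:
        workflows.append("sriov_derive_params")
    if features & {"dpdk", "sriov"}:
        workflows.append("host_derive_params")
    if "hci" in features:
        workflows.append("hci_derive_params")
    return workflows

print(workflows_for_role(
    ["nova-compute", "ceph-osd", "neutron-ovs-dpdk-agent"]))
```

Running it for a Compute-DPDK-HCI style role shows the DPDK and host workflows ordered before the HCI one, matching the constraint above.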

I am trying to understand the proposed changes. Is it for assisting
the UI only, or does it change the existing CLI flow too? If the idea
is to invoke the individual formulas workflows, that will not be
possible with the existing implementation; it would need to be
reworked. We would need to introduce ordering for the formulas
workflows and direct fetching and merging of derived parameters in the
plan.

As per an earlier discussion with jtomasek, to invoke the (existing)
derived parameters workflow for a plan, the UI requires the following
information:
* Whether derived parameters should be invoked for this deployment
(based on roles and enabled services)
* If yes, the list of parameters, their types, and their default values
(and choices if present)

Did I miss anything?

Regards,
Saravanan KR

[1] 
https://github.com/openstack/tripleo-heat-templates/blob/master/plan-samples/plan-environment-derived-params.yaml

On Wed, Nov 8, 2017 at 2:39 PM, Bogdan Dobrelya <bdobr...@redhat.com> wrote:
> On 11/8/17 6:09 AM, Steven Hardy wrote:
>>
>> Hi all,
>>
>> Today I had a productive hallway discussion with jtomasek and
>> stevebaker re $subject, so I wanted to elaborate here for the benefit
>> of those folks not present.  Hopefully we can get feedback on the
>> ideas and see if it makes sense to continue and work on some patches:
>>
>> The problem under discussion is how do we run pre-deployment workflows
>> (such as those integrated recently to calculate derived parameters,
>> and in future perhaps also those which download container images etc),
>> and in particular how do we make these discoverable via the UI
>> (including any input parameters).
>>
>> The idea we came up with has two parts:
>>
>> 1. Add a new optional section to roles_data for services that require
>> pre-deploy workflows
>>
>> E.g something like this:
>>
>>   pre_deploy_workflows:
>>  - derive_params:
>>workflow_name:
>> tripleo.derive_params_formulas.v1.dpdk_derive_params
>>inputs:
>>...
>>
>> This would allow us to associate a specific mistral workflow with a
>> given service template, and also work around the fact that currently
>> mistral inputs don't have any schema (only key/value input) as we
>> could encode the required type and any constraints in the inputs block
>> (clearly this could be removed in future should typed parameters
>> become available in mistral).
>>
>> 2. Add a new workflow that calculates the enabled services and returns
>> all pre_deploy_workflows
>>
>> This would take all enabled environments, then use heat to validate
>> the configuration and return the merged resource registry (which will
>> require https://review.openstack.org/#/c/509760/), then we would
>> iterate over all enabled services in the registry and extract a given
>> roles_data key (e.g pre_deploy_workflows)
>>
>> The result of the workflow would be a list of all pre_deploy_workflows
>> for all enabled services, which the UI could then use to run the
>> workflows as part of the pre-deploy process.
>>
>> If this makes sense I can go ahead and push some patches so we can
>> iterate on the implementation?
>
>
> I apologise for a generic/non-techy comment: it would be nice to keep
> required workflows near the services' definition templates, to keep it as
> much self-contained as possible. IIUC, that's covered by #1.
> For future steps, I'd like to see all of the "bulk processing" to sit in
> those templates as well.
>
>>
>> Thanks,
>>
>> Steve
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.opensta

Re: [openstack-dev] [tripleo][networking] Organizing the networking squad

2017-10-16 Thread Saravanan KR
Thanks for initiating this, Brent. Yes, I would participate in the networking squad.

Regards,
Saravanan KR

On Mon, Oct 16, 2017 at 6:43 PM, Brent Eagles <beag...@redhat.com> wrote:
> Hi,
>
> On Tue, Oct 10, 2017 at 10:55 AM, Brent Eagles <beag...@redhat.com> wrote:
>>
>> Hi all,
>>
>> The list of TripleO squads includes a "networking squad". In previous
>> cycles, coordinating outside of IRC and email conversations seemed
>> unnecessary as there were only a few contributors and a small number of
>> initiatives. However, with future container related work, increased usage of
>> ansible, ongoing efforts like routed networks and NFV, os-net-config related
>> issues, and the increasing number of backends and networking related
>> services being added to TripleO, the world of TripleO networking seems
>> increasingly busy. I propose that we start organizing properly with periodic
>> sync-ups and coordinating efforts via etherpad (or similar) as well as
>> reporting into the weekly tripleo meeting.
>>
>> Cheers,
>>
>> Brent
>
>
> This was initially not directed at anyone in particular but I've added
> possible interested parties to this thread in case it gets lost in the
> noise! Please reply if you are interested in participating in the networking
> squad. Proposed first orders of business are:
>
>  - establish the squad's scope
>  - agree on whether we need a scheduled sync up meeting and if so, sort out
> a meeting time
>  - outline initial areas of interest and concern and action items
>
> Cheers,
>
> Brent
>
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Configure SR-IOV VFs in tripleo

2017-10-03 Thread Saravanan KR
On Tue, Sep 26, 2017 at 3:37 PM, Moshe Levi <mosh...@mellanox.com> wrote:
> Hi  all,
>
>
>
> While working on the tripleo-ovs-hw-offload work, I encountered the
> following issue with SR-IOV.
>
>
>
> I added -e ~/heat-templates/environments/neutron-sriov.yaml -e
> ~/heat-templates/environments/host-config-and-reboot.yaml to the
> overcloud-deploy.sh.
>
> The compute nodes are configured with the intel_iommu=on kernel option and
> the computes are rebooted as expected,
>
> then tripleo::host::sriov will create /etc/sysconfig/allocate_vfs to
> configure the SR-IOV VFs. It seems to require an additional reboot for the
> SR-IOV VFs to be created. Is that the expected behavior? Am I doing something
> wrong?

The file allocate_vfs is required for subsequent reboots, but during
the deployment the VFs are created by puppet-tripleo [1]. No
additional reboot is required for creating the VFs.
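For reference, runtime VF allocation on Linux boils down to writing the desired VF count into the PF's sriov_numvfs sysfs file, which is why no extra reboot is needed once the iommu kernel args are in place. A minimal sketch of that mechanism (device name and count are example values; the real work needs root and an SR-IOV capable NIC, so this defaults to a dry run):

```python
# Minimal sketch of runtime VF allocation via the standard sysfs knob.
# Device name and VF count are illustrative; writing for real requires
# root on a host with an SR-IOV capable NIC.
import os

def sriov_numvfs_path(device):
    """sysfs path controlling the VF count for a physical function."""
    return "/sys/class/net/%s/device/sriov_numvfs" % device

def allocate_vfs(device, numvfs, dry_run=True):
    """Allocate VFs; in dry-run mode just return the equivalent command."""
    path = sriov_numvfs_path(device)
    if dry_run or not os.path.exists(path):
        return "echo %d > %s" % (numvfs, path)
    with open(path, "w") as f:
        f.write(str(numvfs))
    return path

print(allocate_vfs("p1p1", 8))
```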

Regards,
Saravanan KR

[1] 
https://github.com/openstack/puppet-tripleo/blob/master/manifests/host/sriov.pp#L19

>
>
>
>
>
>
>
>
>
> [1]
> https://github.com/openstack/puppet-tripleo/blob/80e646ff779a0f8e201daec0c927809224ed5fdb/manifests/host/sriov.pp
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] mismatch of user/group id between ovs (baremetal) and libvirt (container)

2017-08-23 Thread Saravanan KR
On Wed, Aug 23, 2017 at 4:28 AM, Oliver Walsh <owa...@redhat.com> wrote:
> Hi,
>
>>   sed -i 's/Group=qemu/Group=42427/'
>> /usr/lib/systemd/system/ovs-vswitchd.service
>
> Can't this be overridden via /etc/systemd/system/ovs-vswitchd.service?
>
Yes. I just provided the changes done on an existing deployment to
make it work. I will incorporate it into the templates.

>> This change basically runs ovs with group id of kolla's qemu user
>> (42427). For the solution, my opinion is that we don't require host's
>> qemu (107) user in a containerized deployment. I am planning to ensure
>> that kolla's user id (42427) is updated to the host via the host prep
>> tasks. Let me know if there are any other aspects to be considered.
>>
>
> Might be worth considering overriding the qemu uid/gid to 107 in the
> kolla config file instead.

Definitely it will be good for upgrade to keep the same uid/gid so
that existing DPDK based VMs continue to work after upgrade.

But are there any other specific reasons for sticking with the same
qemu uid/gid in kolla containers? Do you foresee any particular cases
related to it?

Regards,
Saravanan KR

>
> Regards,
> Ollie
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] mismatch of user/group id between ovs (baremetal) and libvirt (container)

2017-08-22 Thread Saravanan KR
Hello,

I am working on integrating DPDK with the containerized environment.
Note that DPDK is not yet containerized in tripleo, but this exercise
is to deploy a DPDK workload with the containerized services. I just
want to provide an update regarding an issue.

Currently, OpenvSwitch is running as a baremetal service whereas
libvirt is containerized. When a VM is created with a DPDK network, a
vhost-user socket file will be created by qemu in server mode and ovs
will connect to it in client mode. The socket file will be created on
the host at "/var/lib/vhost_sockets" by the libvirt container, which
is running with qemu user ids 42427:42427 [1]. Whereas
OpenvSwitch, running on baremetal, patched [2] to run with
"Group=qemu", will run with group id 107.

There is a permission mismatch between kolla's qemu user id
(42427) and the host machine's qemu user id (107), because of which
vhost-user socket creation fails. Until we get ovs containerized, we
probably need to patch ovs to run under kolla's qemu group id
(42427).

With following changes, I am able to get it working.
  chown 42427:42427 /var/lib/vhost_sockets
  sed -i 's/Group=qemu/Group=42427/'
/usr/lib/systemd/system/ovs-vswitchd.service
  systemctl daemon-reload
  systemctl restart openvswitch

This change basically runs ovs with group id of kolla's qemu user
(42427). For the solution, my opinion is that we don't require host's
qemu (107) user in a containerized deployment. I am planning to ensure
that kolla's user id (42427) is updated to the host via the host prep
tasks. Let me know if there are any other aspects to be considered.
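To make the failure mode concrete, here is a small sketch of the group-permission check that is effectively failing here. The gids are the ones from this thread; the helper itself is just an illustration of POSIX group-write semantics, not actual ovs or kolla code:

```python
# Sketch of why vhost-user socket creation fails: the socket directory
# is owned by kolla's qemu gid (42427), while ovs-vswitchd runs with
# the host qemu gid (107), so the group-write bit never applies to it.

def group_can_write(dir_gid, dir_mode, process_gids):
    """True if a process with the given gids can write to a directory
    via its group permission bits (dir_mode is e.g. 0o775)."""
    return bool(dir_mode & 0o020) and dir_gid in process_gids

KOLLA_QEMU_GID = 42427   # qemu gid inside kolla containers
HOST_QEMU_GID = 107      # qemu gid on the baremetal host

# /var/lib/vhost_sockets owned by kolla's qemu, ovs running as host qemu:
print(group_can_write(KOLLA_QEMU_GID, 0o775, {HOST_QEMU_GID}))
# After patching ovs-vswitchd to run with Group=42427:
print(group_can_write(KOLLA_QEMU_GID, 0o775, {KOLLA_QEMU_GID}))
```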

Regards,
Saravanan KR

[1] 
https://github.com/openstack/kolla/blob/187b1f08f586327e5c47a0bed3760a575daa1287/kolla/common/config.py#L750
[2] 
https://github.com/openstack/tripleo-heat-templates/blob/master/extraconfig/pre_network/host_config_and_reboot.yaml#L227

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][tripleo-heat-template] how to get interface name from network name in service profile

2017-08-01 Thread Saravanan KR
On Wed, Aug 2, 2017 at 5:21 AM, Zenghui Shi <z...@redhat.com> wrote:
>
> On Wed, 2 Aug 2017 at 02:34 Ben Nemec <openst...@nemebean.com> wrote:
>>
>>
>>
>> On 07/25/2017 09:53 PM, Zenghui Shi wrote:
>> > Hi,
>> >
>> > Could anyone shed some light on how to get the physical interface name
>> > (e.g eth0) from network name (e.g PublicNetwork, ExternalNetwork) in
>> > tripleo-heat-template service profile ?
>> >
>> > for example:
>> >
>> > I want to add a service profile under puppet/services/time/ptp.pp where
>> > it uses 'PtpInterface' as a parameter to get physical interface name
>> > (please refer to below piece of code), but I'd like to expose a more
>> > user friendly parameter like NetworkName(e.g. provision network,
>> > external network etc) instead of 'PtpInterface' and retrieve the actual
>> > physical interface name from the NetworkName where the physical
>> > interface is connected to, is there any possible way to do this ?
>>
>> I don't think there is.  In many cases the templates don't even know the
>> name of the physical device on which the network will be running.  A
>> simple example would be when a user uses the nicX abstraction to specify
>> interfaces in their net-iso templates.  That doesn't get mapped to an
>> actual interface name until os-net-config runs, and the results of that
>> run are not available to the templates.
>
>
> Thanks Ben!
>
> I'm also thinking if it makes sense to have a way in template or target
> nodes to re-use the results of os-net-config for services which are bonded
> to certain interfaces, or re-implement the os-net-config logic in template
> to get the physical interface name. The latter would duplicate the work
> of os-net-config.

This patch [1] was started by Dan Sneddon to provide the nic
number-to-name mapping via an extra "--interfaces" option to the
os-net-config command. Maybe this could be reused to get the mapping.
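For anyone curious how such a mapping can work, here is a deliberately simplified sketch of resolving the nicX abstraction to physical names. The real os-net-config logic also filters on link state and orders embedded NICs first; this version only sorts names, so it is an approximation, not the actual algorithm:

```python
# Simplified sketch of mapping the nicX abstraction to physical
# interface names. Real os-net-config also checks interface state and
# prefers embedded (em*/eno*) NICs; here we just sort the names.

def nic_mapping(active_interfaces):
    ordered = sorted(active_interfaces)
    return {"nic%d" % (i + 1): name for i, name in enumerate(ordered)}

print(nic_mapping(["eth1", "eth0", "eth2"]))
```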

Regards,
Saravanan KR

[1] https://review.openstack.org/#/c/383516/

>
> Cheers!
> Zenghui
>>
>>
>> >
>> > 
>> > parameters:
>> > [...]
>> >PtpInterface:  #  ---> change this parameter to PtpNetwork
>> >  default: eth0
>> >  description: PTP interfaces name.
>> >  type: string
>> >
>> > resources:
>> >RoleParametersValue
>> >  type: OS::Heat::Value
>> >  properties:
>> >type: json
>> >value: # ---> add logic to get real interface name
>> > from PtpNetwork
>> >  map_replace:
>> >- map_replace:
>> >  - tripleo::profile::base::time::ptp::ptp4l_interface:
>> > PtpInterface
>> >  - values: {get_param: [RoleParameters]}
>> >- values:
>> >PtpInterface: {get_param: PtpInterface}
>> >
>> > outputs:
>> >role_data:
>> >  description: Role ptp using commposable services.
>> >  value:
>> >service_name: ptp
>> >config_settings:
>> >  map_merge:
>> >- get_attr: [RoleParametersValue, value]
>> > [...]
>> > 
>> >
>> > Thanks!
>> > zenghui
>> >
>> >
>> >
>> > __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Proposing Saravanan KR core

2017-07-31 Thread Saravanan KR
Thank you for your support.

Regards,
Saravanan KR

On Mon, Jul 31, 2017 at 7:19 PM, Emilien Macchi <emil...@redhat.com> wrote:
> On Fri, Jul 21, 2017 at 8:01 AM, Emilien Macchi <emil...@redhat.com> wrote:
>> Saravanan KR has shown a high level of expertise in some areas of
>> TripleO, and also increased his involvement over the last months:
>> - Major contributor in DPDK integration
>> - Derived parameter works
>> - and a lot of other things like improving UX and enabling new
>> features to improve performances and networking configurations.
>>
>> I would like to propose Saravanan part of TripleO core and we expect
>> his particular focus on t-h-t, os-net-config and tripleoclient for now
>> but we hope to extend it later.
>>
>> As usual, we'll vote :-)
>
> Votes were positive, it's done now.
> Saravanan has now +2 and we expect him to use it on THT, os-net-config
> and tripleoclient for now.
>
> Thanks,
> --
> Emilien Macchi
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Deprecated Parameters Warning

2017-07-17 Thread Saravanan KR
Thanks Emilien.

Now, the warning message for using a deprecated parameter is
available in the CLI. A sample message from CI [1] looks like below:

2017-07-14 19:45:09 | WARNING: Following parameters are deprecated and
still defined. Deprecated parameters will be removed soon!
2017-07-14 19:45:09 |   NeutronL3HA

The next step is to add a warning (or rather an error) message if a
deployment contains a parameter which is not part of the plan
(including custom templates). I will work on it.
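That check could look roughly like the sketch below. The parameter names are examples from this thread, and the real implementation would be a Mistral workflow over the plan templates rather than a standalone function:

```python
# Rough sketch of the two checks discussed above: warn on parameters
# the plan marks deprecated, and flag user-supplied parameters the
# plan does not define at all. Names are illustrative.

def check_parameters(user_params, plan_params, deprecated_params):
    warnings, errors = [], []
    for name in user_params:
        if name in deprecated_params:
            warnings.append(
                "WARNING: parameter %s is deprecated and will be "
                "removed soon" % name)
        elif name not in plan_params:
            errors.append(
                "ERROR: parameter %s is not defined in the plan" % name)
    return warnings, errors

w, e = check_parameters(
    user_params={"NeutronL3HA": True, "NovaComputeCount": 3, "Typo": 1},
    plan_params={"NeutronL3HA", "NovaComputeCount"},
    deprecated_params={"NeutronL3HA"})
print(w)
print(e)
```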

Regards,
Saravanan KR

[1] 
http://logs.openstack.org/77/479277/6/check-tripleo/gate-tripleo-ci-centos-7-ovb-1ctlr_1comp_1ceph-featureset024/fb07fd6/logs/undercloud/home/jenkins/overcloud_deploy.log.txt.gz#_2017-07-14_19_45_09

On Tue, Jun 6, 2017 at 9:47 PM, Emilien Macchi <emil...@redhat.com> wrote:
> On Tue, Jun 6, 2017 at 6:53 AM, Saravanan KR <skram...@redhat.com> wrote:
>> Hello,
>>
>> I am working on a patch [1] to list the deprecated parameters of the
>> current plan. It depends on a heat patch[2] which provides
>> parameter_group support for nested stacks. The change is to add a new
>> workflow to analyze the plan templates and find out the list of
>> deprecated parameters, identified by parameter_groups with label
>> "deprecated".
>>
>> This workflow can be used by CLI and UI to provide a warning to the
>> user about the deprecated parameters. This is only the listing,
>> changes are required in tripleoclient to invoke and and provide
>> warning. I am sending this mail to update the group, to bring
>> awareness on the parameter deprecation.
>
> I find this feature very helpful, specially with all the THT
> parameters that we have and that are moving quite fast over the
> cycles.
> Thanks for working on it!
>
>> Regards,
>> Saravanan KR
>>
>> [1] https://review.openstack.org/#/c/463949/
>> [2] https://review.openstack.org/#/c/463941/
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> --
> Emilien Macchi
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Deriving Parameters from Introspection Data

2017-07-17 Thread Saravanan KR
On Sun, Jul 16, 2017 at 6:10 AM, Don maillist <dlw.mail...@gmail.com> wrote:
> Looks interesting. Wish I had this or something like it now for Newton and
> OVS 2.6.1 which just dropped. Wondering why you don't include the grub
> command line?
The KernelArgs parameter, which holds the iommu and hugepage args, is
derived as part of this workflow and will be applied to grub. Are you
looking for any specific parameter?
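As an example of what ends up on the grub command line, here is a hedged sketch of assembling a KernelArgs value. The exact derivation (hugepage count, isolated CPU list) is computed from introspection data by the workflow; the flags and numbers below are illustrative only:

```python
# Illustrative sketch of assembling a KernelArgs value for the grub
# command line. The real derive-params workflow computes the hugepage
# count and CPU list from introspection data; these values are made up.

def build_kernel_args(hugepages_1g, isolated_cpus):
    args = [
        "intel_iommu=on",
        "iommu=pt",
        "default_hugepagesz=1GB",
        "hugepagesz=1G",
        "hugepages=%d" % hugepages_1g,
        "isolcpus=%s" % isolated_cpus,
    ]
    return " ".join(args)

print(build_kernel_args(64, "2-19,22-39"))
```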

>
> Do you have a stand alone utility?
Not as of now. But we are looking into the possibility of developing
a standalone utility tool for using it with Newton. I will post it
when we have it.

Regards,
Saravanan KR

>
> Best Regards,
> Don
>
> On Thu, Jul 6, 2017 at 4:10 AM, Saravanan KR <skram...@redhat.com> wrote:
>>
>> Hello,
>>
>> DPDK is integrated with TripleO deployment during the newton cycle.
>> Since then, we have been getting queries on how to decide the right
>> parameters for a deployment: which cpus to choose, how much memory
>> to allocate, and so on.
>>
>> In Pike, a new feature "derive parameters", has been brought in to
>> help operators to automatically derive the parameters from the
>> introspection data. I have created a 2 mins demo [1] to illustrate the
>> feature integrated with CLI. This demo is created by integrating the
>> in-progress patches. Let me know if you have any comments.
>>
>> The feature is almost at the last leg with the help from many folks.
>> Following are the list of patches pending:
>> https://review.openstack.org/#/c/480525/ (tripleoclient)
>> https://review.openstack.org/#/c/468989/ (tripleo-common)
>> https://review.openstack.org/#/c/471462/ (tripleo-common)
>>
>> Regards,
>> Saravanan KR
>>
>> [1] https://asciinema.org/a/127903
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Migration from Neutron ML2OVS to OVN

2017-07-13 Thread Saravanan KR
On Tue, Jul 11, 2017 at 11:40 PM, Ben Nemec <openst...@nemebean.com> wrote:
>
>
> On 07/11/2017 10:17 AM, Numan Siddique wrote:
>>
>> Hello Tripleo team,
>>
>> I have a few questions regarding migration from neutron ML2OVS to OVN. Below
>> are some of the requirements
>>
>>   - We want to migrate an existing deployment from the Neutron default ML2OVS
>> to OVN
>>   - We are targeting this for the tripleo Queens release.
>>   - The plan is to first upgrade the tripleo deployment from Pike to
>> Queens with no changes to neutron, i.e. with neutron ML2OVS. Once the upgrade
>> is done, we want to migrate to OVN.
>>   - The migration process will stop all the neutron agents, configure
>> neutron server to load OVN mechanism driver and start OVN services (with no
>> or very limited datapath downtime).
>>   - The migration would be handled by an ansible script. We have a PoC
>> ansible script which can be found here [1]
>>
>> And the questions are
>> -  (A broad question) - What is the right way to migrate and switch the
>> neutron plugin ? Can the stack upgrade handle the migration as well ?
This is going to be a broader problem, as it is also required to
migrate ML2OVS to ODL for NFV deployments, on pretty much the same
timeline. If I understand correctly, this migration involves stopping
the ML2OVS services (like neutron-ovs-agent) and starting the
corresponding new ML2 services (OVN or ODL), along with a few
parameter additions and removals.

>> - The migration procedure should be part of tripleo ? or can it be a
>> standalone ansible script ? (I presume it should be former).
Each service has upgrade steps which can be associated via ansible
steps. But this is not a service upgrade; it disables an existing
service and enables a new one. So I think it would need an explicit
disabled service [1] to stop the existing service, and then enable the
new service.

>> - If it should be part of the tripleo then what would be the command to do
>> it ? A update stack command with appropriate environment files for OVN ?
>> - In case the migration can be done  as a standalone script, how to handle
>> later updates/upgrades since tripleo wouldn't be aware of the migration ?
>
I would also discourage doing it standalone.

Another area which needs to be looked at is whether this should be
associated with the containers upgrade. Maybe OVN and ODL can be
migrated as containers only, instead of baremetal by default (just a
thought; it could have implications to be worked out and discussed).

Regards,
Saravanan KR

[1] 
https://github.com/openstack/tripleo-heat-templates/tree/master/puppet/services/disabled

>
> This last point seems like the crux of the discussion here.  Sure, you can
> do all kinds of things to your cloud using standalone bits, but if any of
> them affect things tripleo manages (which this would) then you're going to
> break on the next stack update.
>
> If there are things about the migration that a stack-update can't handle,
> then the migration process would need to be twofold: 1) Run the standalone
> bits to do the migration 2) Update the tripleo configuration to match the
> migrated config so stack-updates work.
>
> This is obviously a complex and error-prone process, so I'd strongly
> encourage doing it in a tripleo-native fashion instead if at all possible.
>
>>
>>
>> Request to provide your comments so that we can move in the right
>> direction.
>>
>> [1] - https://github.com/openstack/networking-ovn/tree/master/migration
>>
>> Thanks
>> Numan
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>



[openstack-dev] [tripleo] Deriving Parameters from Introspection Data

2017-07-06 Thread Saravanan KR
Hello,

DPDK was integrated with TripleO deployment during the Newton cycle.
Since then, we have been getting queries on how to decide the right
parameters for the deployment: which CPUs to choose, how much memory
to allocate, and so on.

In Pike, a new feature, "derive parameters", has been brought in to
help operators automatically derive the parameters from the
introspection data. I have created a 2-minute demo [1] to illustrate the
feature integrated with the CLI. This demo was created by integrating the
in-progress patches. Let me know if you have any comments.
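For illustration, here is a minimal sketch of the kind of derivation such a workflow performs: picking PMD cores on the NUMA node of the DPDK NIC, keeping hyper-thread siblings together, and pinning the remaining CPUs for instances. The introspection data layout and the one-physical-core default are assumptions for this sketch, not the exact logic of the patches above.

```python
# Assumed introspection layout: each entry lists a physical CPU, its NUMA
# node, and its hyper-thread sibling set (as ironic-inspector reports).

def derive_cpu_lists(numa_topology, dpdk_nic_numa_node, pmd_physical_cores=1):
    pmd, others = [], []
    for cpu in numa_topology["cpus"]:
        if cpu["numa_node"] == dpdk_nic_numa_node and pmd_physical_cores > 0:
            pmd.extend(cpu["thread_siblings"])  # keep HT siblings together
            pmd_physical_cores -= 1
        else:
            others.extend(cpu["thread_siblings"])
    return {
        "NeutronDpdkCoreList": ",".join(str(c) for c in sorted(pmd)),
        "NovaVcpuPinSet": ",".join(str(c) for c in sorted(others)),
    }

topology = {"cpus": [
    {"cpu": 0, "numa_node": 0, "thread_siblings": [0, 4]},
    {"cpu": 1, "numa_node": 0, "thread_siblings": [1, 5]},
    {"cpu": 2, "numa_node": 1, "thread_siblings": [2, 6]},
    {"cpu": 3, "numa_node": 1, "thread_siblings": [3, 7]},
]}
params = derive_cpu_lists(topology, dpdk_nic_numa_node=0)
print(params["NeutronDpdkCoreList"])  # 0,4
print(params["NovaVcpuPinSet"])       # 1,2,3,5,6,7
```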

The feature is almost at the final leg, with help from many folks.
Following is the list of pending patches:
https://review.openstack.org/#/c/480525/ (tripleoclient)
https://review.openstack.org/#/c/468989/ (tripleo-common)
https://review.openstack.org/#/c/471462/ (tripleo-common)

Regards,
Saravanan KR

[1] https://asciinema.org/a/127903



[openstack-dev] [tripleo] Deprecated Parameters Warning

2017-06-05 Thread Saravanan KR
Hello,

I am working on a patch [1] to list the deprecated parameters of the
current plan. It depends on a heat patch[2] which provides
parameter_group support for nested stacks. The change is to add a new
workflow to analyze the plan templates and find out the list of
deprecated parameters, identified by parameter_groups with label
"deprecated".

This workflow can be used by the CLI and UI to provide a warning to the
user about the deprecated parameters. This is only the listing;
changes are required in tripleoclient to invoke it and provide the
warning. I am sending this mail to update the group and bring
awareness of the parameter deprecation.
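The analysis the workflow performs can be sketched as a scan of a parsed template for parameter_groups whose label is "deprecated". The inline template below is a made-up example, not taken from tripleo-heat-templates.

```python
# Sketch of the deprecated-parameter scan over one parsed template dict.

def deprecated_parameters(template):
    deprecated = []
    for group in template.get("parameter_groups", []):
        if group.get("label") == "deprecated":
            deprecated.extend(group.get("parameters", []))
    return deprecated

template = {
    "parameters": {
        "OvsPmdCoreList": {"type": "string"},
        "NeutronDpdkCoreList": {"type": "string"},
    },
    "parameter_groups": [
        {"label": "deprecated",
         "description": "Kept only for backwards compatibility",
         "parameters": ["NeutronDpdkCoreList"]},
    ],
}
print(deprecated_parameters(template))  # ['NeutronDpdkCoreList']
```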

Regards,
Saravanan KR

[1] https://review.openstack.org/#/c/463949/
[2] https://review.openstack.org/#/c/463941/



[openstack-dev] [tripleo] DPDK Containerization

2017-05-09 Thread Saravanan KR
Hello,

We are analyzing the work items needed to containerize DPDK. There is
work in progress in the kolla image repo for adding DPDK support
[1]. By default, the kolla image will NOT have DPDK support; it is only
enabled at build time. This is because, for Ubuntu, the DPDK-enabled
openvswitch is a separate package, while for the RHEL family there is a
single openvswitch package which provides both regular and DPDK-enabled
openvswitch, and DPDK can be enabled at run time. We can maintain this
run-time enabling of DPDK for the containers too. So the openvswitch
containers will have the DPDK packages installed but disabled by
default; puppet configuration will enable it, if needed.

In order to achieve it, we need to add a template override file for
DPDK based on the kolla review [1]. This override should be applicable to
all the openvswitch container images. It can be achieved by adding a
property to the overcloud_containers.yaml list to provide specific
template override files, like:

  container_images:
  - imagename: tripleoupstream/centos-binary-aodh-api:latest
  - imagename: tripleoupstream/centos-binary-openvswitch-db-server:latest
overrides:
  - openvswitch-dpdk-override.j2

Let me know if this is something workable. I know we have to modify
the image builder logic, but let me know if the direction is correct.
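A rough sketch of how the image-builder logic could consume the proposed per-image "overrides" property; the entry format mirrors the snippet above and is only a proposal, not a final format.

```python
# Map each container image to its list of template override files,
# defaulting to no overrides when the property is absent.

def overrides_for(container_images):
    result = {}
    for entry in container_images:
        result[entry["imagename"]] = entry.get("overrides", [])
    return result

container_images = [
    {"imagename": "tripleoupstream/centos-binary-aodh-api:latest"},
    {"imagename": "tripleoupstream/centos-binary-openvswitch-db-server:latest",
     "overrides": ["openvswitch-dpdk-override.j2"]},
]
result = overrides_for(container_images)
print(result["tripleoupstream/centos-binary-aodh-api:latest"])
# []
print(result["tripleoupstream/centos-binary-openvswitch-db-server:latest"])
# ['openvswitch-dpdk-override.j2']
```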

Another important item to look at is that DPDK is enabled by
puppet-vswitch [2] by invoking the ovs-vsctl command. As I remember
the discussions, creating conf files out of the puppet execution,
copying them to the host, and mounting them into the actual container is
the approach we are using. I am still figuring out how the ovs-vsctl
command can be invoked using this model.

Containerization of openvswitch is being looked at as a separate effort by
another team, which will be the base requirement for DPDK to work.
But the openvswitch container needs to be started before running the
"NetworkDeployment", which runs os-net-config. This brings
another requirement: how to containerize os-net-config. It is tricky
as it is not associated with any puppet/service; it has to be brought up
individually/separately, with a dependency on openvswitch. I am
still getting familiar with the blocks around this.

I will be updating the analysis and the progress; any suggestions or
directions are more than welcome.

Regards,
Saravanan KR

[1] https://review.openstack.org/#/c/342354/
[2] 
https://github.com/openstack/puppet-vswitch/blob/master/manifests/dpdk.pp#L113



Re: [openstack-dev] [TripleO] How to Preview the Overcloud Stack?

2017-03-28 Thread Saravanan KR
It is possible to perform a stack validate after jinja2 processing.
There is already an existing action [1] to do it and get the results. But
the results are not easy to consume, so it is required to flatten the
heat resource tree and parameters. This has been done already in the UI,
and I have added a patch for doing it in tripleo-common [2]. The
output of this flattening will look like [3][4].
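As a toy sketch of the flattening idea: walk the nested resource tree and emit dotted resource paths. The tree layout here is simplified; the real output in the pastes above carries far more detail (parameters, descriptions, and so on).

```python
# Recursively flatten a nested Heat resource tree into "dotted path" ->
# resource type entries, so UIs and CLIs can consume it as a flat map.

def flatten_resources(tree, prefix=""):
    flat = {}
    for name, res in tree.items():
        path = f"{prefix}.{name}" if prefix else name
        flat[path] = res.get("type")
        flat.update(flatten_resources(res.get("resources", {}), path))
    return flat

tree = {
    "Controller": {
        "type": "OS::Heat::ResourceGroup",
        "resources": {
            "0": {"type": "OS::TripleO::Controller", "resources": {}},
        },
    },
}
flat = flatten_resources(tree)
print(sorted(flat))          # ['Controller', 'Controller.0']
print(flat["Controller.0"])  # OS::TripleO::Controller
```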

I am not sure if this is what you are expecting.

Regards,
Saravanan KR

[1] 
https://github.com/openstack/tripleo-common/blob/master/tripleo_common/actions/parameters.py#L71
[2] https://review.openstack.org/#/c/450021/
[3] http://paste.openstack.org/show/600292/
[4] http://paste.openstack.org/show/600293/

On Tue, Mar 28, 2017 at 2:13 AM, Dan Sneddon <dsned...@redhat.com> wrote:
> I've been trying to figure out a workflow for previewing the results of
> importing custom templates in an overcloud deployment (without actually
> deploying). For instance, I am overriding some parameters using custom
> templates, and I want to make sure those parameters will be expressed
> correctly when I deploy.
>
> I know about "heat stack-preview", but between the complexity of the
> overcloud stack and the jinja2 template processing, I can't figure out a
> way to preview the entire overcloud stack.
>
> Is this possible? If not, any hints on what would it take to write a
> script that would accomplish this?
>
> --
> Dan Sneddon |  Senior Principal Software Engineer
> dsned...@redhat.com |  redhat.com/openstack
> dsneddon:irc|  @dxs:twitter
>



Re: [openstack-dev] [tripleo] Sample Roles

2017-03-16 Thread Saravanan KR
Thanks Alex. This is really an important requirement. Today, product
documentation has such roles incorporated into it, which is not good.
This patch simplifies that.

A suggestion, though: instead of an external generation tool, is it not
possible to include the individual roles yaml files directly in a
parent yaml file? Why add one extra step? Maybe we can provide all
options in the list and mark each as enabled 0 or 1. Much simpler to use.

This external tool could be a mistral action to derive the final roles
list internally.
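The enabled-flag idea could look roughly like this; the parent-file structure and field names are hypothetical, purely to illustrate the "no extra step" generation.

```python
# Generate roles_data from a single parent list of sample roles, keeping
# only the entries an operator has marked as enabled.

def generate_roles_data(all_roles):
    return [r["definition"] for r in all_roles if r.get("enabled")]

all_roles = [
    {"name": "Controller", "enabled": 1,
     "definition": {"name": "Controller"}},
    {"name": "ComputeOvsDpdk", "enabled": 0,
     "definition": {"name": "ComputeOvsDpdk"}},
    {"name": "CephStorage", "enabled": 1,
     "definition": {"name": "CephStorage"}},
]
print([r["name"] for r in generate_roles_data(all_roles)])
# ['Controller', 'CephStorage']
```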

Regards,
Saravanan KR

On Thu, Mar 16, 2017 at 3:37 AM, Emilien Macchi <emil...@redhat.com> wrote:
> On Wed, Mar 15, 2017 at 5:28 PM, Alex Schultz <aschu...@redhat.com> wrote:
>> Ahoy folks,
>>
>> For the Pike cycle, we have a blueprint[0] to provide a few basic
>> environment configurations with some custom roles.  For this effort
>> and to reduce the complexity when dealing with roles I have put
>> together a patch to try and organize roles in a more consumable
>> fashion[1].  The goal behind this is that we can document the standard
>> role configurations and also be able to ensure that when we add a new
>> OS::TripleO::Service::* we can make sure they get applied to all of
>> the appropriate roles.  The goal of this initial change is to also
>> allow us all to reuse the same roles and work from a single
>> configuration repository.  Please also review the existing roles in
>> the review and make sure we're not missing any services.
>
> Sounds super cool!
>
>> Also my ask is that if you have any standard roles, please consider
>> publishing them to the new roles folder[1] so we can also identify
>> future CI testing scenarios we would like to support.
>
> Can we document it here maybe?
> https://docs.openstack.org/developer/tripleo-docs/developer/tht_walkthrough/tht_walkthrough.html
>
>> Thanks,
>> -Alex
>>
>> [0] 
>> https://blueprints.launchpad.net/tripleo/+spec/example-custom-role-environments
>> [1] https://review.openstack.org/#/c/445687/
>
> Thanks,
> --
> Emilien Macchi
>



Re: [openstack-dev] [tripleo] Blueprints for DPDK in OvS2.6

2017-02-13 Thread Saravanan KR
Oops, forgot to update: we have changed the name of the BP, as the
"dot" in the BP name was not going well with the gerrit reviews.

The new URLs are (dot changed to dash):
  https://blueprints.launchpad.net/tripleo/+spec/ovs-2-6-dpdk
  https://blueprints.launchpad.net/tripleo/+spec/ovs-2-6-features-dpdk

And thanks for the confirmation.

Regards,
Saravanan KR

On Mon, Feb 13, 2017 at 9:36 PM, Emilien Macchi <emil...@redhat.com> wrote:
> On Wed, Feb 8, 2017 at 1:59 AM, Saravanan KR <skram...@redhat.com> wrote:
>> Hello,
>>
>> We have raised 2 BP for OvS2.6 integration with DPDK support.
>>
>> Basic Migration -
>> https://blueprints.launchpad.net/tripleo/+spec/ovs-2.6-dpdk (Targeted
>> for March)
>> OvS 2.6 Features -
>> https://blueprints.launchpad.net/tripleo/+spec/ovs-2.6-features-dpdk
>> (Targeted for Pike)
>
> Both links are 404, any idea of what happenned?
>
> Other than that, I don't see any blocker to have these blueprints in Pike 
> cycle.
>
> Thanks!
>
>> We find the changes to be straight forward and minor. And the required
>> changes has been updated on the BP description. Please let us know if
>> it requires a spec.
>>
>> Regards,
>> Saravanan KR
>>
>
>
>
> --
> Emilien Macchi
>



[openstack-dev] [tripleo] Populating Hiera data before NetworkDeployment

2017-02-08 Thread Saravanan KR
Hello,

We are facing an issue in enabling DPDK in openvswitch. Currently,
DPDK is enabled by configuring DPDK_OPTIONS at Step4 using the
puppet-vswitch (vswitch::dpdk) module. This flow has limitations:
* if DPDK is used for the Tenant Network, the Ping Test (AllNodesValidation) will fail
* for a DPDK bond, the operator will NOT be able to choose the primary via
network config
* when a DPDK port is added in os-net-config without DPDK support,
openvswitch will throw errors, though a restart of openvswitch will fix it
* in specific deployments, restarting openvswitch requires the
network.service and neutron-openvswitch-agent services to be restarted
after a wait time

To overcome these issues, we are analyzing the option of initializing
DPDK in openvswitch before the NetworkDeployment [1] (os-net-config).
PreNetworkConfig [2] is the best place to embed this requirement.

But in order to keep the DPDK initialization as puppet manifests
(vswitch::dpdk) and invoke it from PreNetworkConfig, the hiera data
needs to be populated (NovaComputeDeployment [3]) before the
NetworkDeployment. Of course, we can invoke the puppet class by
providing all the parameters directly, but if it is done via hiera
data, it gives operators the flexibility to override hiera values
(role-specific or node-specific) and their priority. To experiment with it, I
have created a review [4] with the change for moving
NovaComputeDeployment before NetworkDeployment. I have modified this
for all roles, so that all roles follow the same approach in deployment.
The change in the resource dependency has been updated in the commit
message.
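The override flexibility mentioned above comes from hiera's level priority: more specific levels (role, then node) win over defaults. A simplified sketch, with illustrative level names and keys:

```python
# Merge hiera levels in priority order: later (more specific) levels
# override earlier (more general) ones, mirroring hiera lookup behavior.

def resolve_hiera(*levels):
    """Merge hiera levels given lowest-priority first."""
    merged = {}
    for level in levels:
        merged.update(level)
    return merged

defaults = {"vswitch::dpdk::socket_mem": "1024",
            "vswitch::dpdk::core_list": "1,2"}
role_specific = {"vswitch::dpdk::socket_mem": "2048,2048"}
node_specific = {"vswitch::dpdk::core_list": "2,3,22,23"}

resolved = resolve_hiera(defaults, role_specific, node_specific)
print(resolved["vswitch::dpdk::socket_mem"])  # 2048,2048
print(resolved["vswitch::dpdk::core_list"])   # 2,3,22,23
```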

Please provide your feedback on this approach.

Regards,
Saravanan KR

[1] 
https://github.com/openstack/tripleo-heat-templates/blob/master/puppet/compute-role.yaml#L364
[2] 
https://github.com/openstack/tripleo-heat-templates/blob/master/puppet/compute-role.yaml#L348
[3] 
https://github.com/openstack/tripleo-heat-templates/blob/master/puppet/compute-role.yaml#L440
[4] https://review.openstack.org/#/c/430215/



[openstack-dev] [tripleo] Blueprints for DPDK in OvS2.6

2017-02-07 Thread Saravanan KR
Hello,

We have raised 2 BP for OvS2.6 integration with DPDK support.

Basic Migration -
https://blueprints.launchpad.net/tripleo/+spec/ovs-2.6-dpdk (Targeted
for March)
OvS 2.6 Features -
https://blueprints.launchpad.net/tripleo/+spec/ovs-2.6-features-dpdk
(Targeted for Pike)

We find the changes to be straightforward and minor, and the required
changes have been updated in the BP description. Please let us know if
it requires a spec.

Regards,
Saravanan KR



Re: [openstack-dev] [tripleo] Mistral Workflow for deriving THT parameters

2017-01-23 Thread Saravanan KR
Thanks Giulio for adding it to the PTG discussion pad. I am not yet sure
of my presence at the PTG. Hoping that things will fall in place soon.

We have spent a considerable amount of time in moving from static roles
to composable roles. If we are planning to introduce static profiles,
then after a while we will end up with the same problem; it really
depends on how the features will be composed on a role. Looking forward.

Regards,
Saravanan KR

On Mon, Jan 23, 2017 at 6:25 PM, Giulio Fidente <gfide...@redhat.com> wrote:
> On 01/23/2017 11:07 AM, Saravanan KR wrote:
>> Thanks John for the info.
>>
>> I am going through the spec in detail. And before that, I had few
>> thoughts about how I wanted to approach this, which I have drafted in
>> https://etherpad.openstack.org/p/tripleo-derive-params. And it is not
>> 100% ready yet, I was still working on it.
>
> I've linked this etherpad for the session we'll have at the PTG
>
>> As of now, there are few differences on top of my mind, which I want
>> to highlight, I am still going through the specs in detail:
>> * Profiles vs Features - Considering a overcloud node as a profiles
>> rather than a node which can host these features, would have
>> limitations to it. For example, if i need a Compute node to host both
>> Ceph (OSD) and DPDK, then the node will have multiple profiles or we
>> have to create a profile like -
>> hci_enterprise_many_small_vms_with_dpdk? The first one is not
>> appropriate and the later is not scaleable, may be something else in
>> your mind?
>> * Independent - The initial plan of this was to be independent
>> execution, also can be added to deploy if needed.
>> * Not to expose/duplicate parameters which are straight forward, for
>> example tuned-profile name should be associated with feature
>> internally, Workflows will decide it.
>
> for all of the above, I think we need to decide if we want the
> optimizations to be profile-based and gathered *before* the overcloud
> deployment is started or if we want to set these values during the
> overcloud deployment basing on the data we have at runtime
>
> seems like both approaches have pros and cons and this would be a good
> conversation to have with more people at the PTG
>
>> * And another thing, which I couldn't get is, where will the workflow
>> actions be defined, in THT or tripleo_common?
>
> to me it sounds like executing the workflows before stack creation is
> started would be fine, at least for the initial phase
>
> running workflows from Heat depends on the other blueprint/session we'll
> have about the WorkflowExecution resource and once that will be
> available, we could trigger the workflow execution from tht if beneficial
>
>> The requirements which I thought of, for deriving workflow are:
>> Parameter Deriving workflow should be
>> * independent to run the workflow
>> * take basic parameters inputs, for easy deployment, keep very minimal
>> set of mandatory parameters, and rest as optional parameters
>> * read introspection data from Ironic DB and Swift-stored blob
>>
>> I will add these comments as starting point on the spec. We will work
>> towards bringing down the differences, so that operators headache is
>> reduced to a greater extent.
>
> thanks
>
> --
> Giulio Fidente
> GPG KEY: 08D733BA



Re: [openstack-dev] [tripleo] Mistral Workflow for deriving THT parameters

2017-01-23 Thread Saravanan KR
Thanks John for the info.

I am going through the spec in detail. And before that, I had few
thoughts about how I wanted to approach this, which I have drafted in
https://etherpad.openstack.org/p/tripleo-derive-params. And it is not
100% ready yet, I was still working on it.

As of now, there are a few differences on top of my mind, which I want
to highlight; I am still going through the spec in detail:
* Profiles vs Features - Considering an overcloud node as a profile,
rather than a node which can host these features, would have
limitations. For example, if I need a Compute node to host both
Ceph (OSD) and DPDK, then the node will have multiple profiles, or we
have to create a profile like -
hci_enterprise_many_small_vms_with_dpdk? The first is not
appropriate and the latter is not scalable; maybe you have something
else in mind?
* Independent - The initial plan was for this to be an independent
execution, which can also be added to the deploy if needed.
* Not to expose/duplicate parameters which are straightforward; for
example, the tuned profile name should be associated with the feature
internally - workflows will decide it.
* And another thing which I couldn't get: where will the workflow
actions be defined, in THT or tripleo_common?


The requirements which I thought of for the deriving workflow are that
the parameter-deriving workflow should:
* be independent to run
* take basic parameter inputs - for easy deployment, keep a very minimal
set of mandatory parameters, and the rest as optional parameters
* read introspection data from the Ironic DB and the Swift-stored blob

I will add these comments as a starting point on the spec. We will work
towards bringing down the differences, so that the operators' headache is
reduced to a great extent.

Regards,
Saravanan KR

On Fri, Jan 20, 2017 at 9:56 PM, John Fulton <johfu...@redhat.com> wrote:
> On 01/11/2017 11:34 PM, Saravanan KR wrote:
>>
>> Thanks John, I would really appreciate if you could tag me on the
>> reviews. I will do the same for mine too.
>
>
> Hi Saravanan,
>
> Following up on this, have a look at the OS::Mistral::WorflowExecution
> Heat spec [1] to trigger Mistral workflows. I'm hoping to use it for
> deriving THT parameters for optimal resource isolation in HCI
> deployments as I mentioned below. I have a spec [2] which describes
> the derivation of the values, but this is provided as an example for
> the more general problem of capturing the rules used to derive the
> values so that deployers may easily apply them.
>
> Thanks,
>   John
>
> [1] OS::Mistral::WorflowExecution https://review.openstack.org/#/c/267770/
> [2] TripleO Performance Profiles https://review.openstack.org/#/c/423304/
>
>> On Wed, Jan 11, 2017 at 8:03 PM, John Fulton <johfu...@redhat.com> wrote:
>>>
>>> On 01/11/2017 12:56 AM, Saravanan KR wrote:
>>>>
>>>>
>>>> Thanks Emilien and Giulio for your valuable feedback. I will start
>>>> working towards finalizing the workbook and the actions required.
>>>
>>>
>>>
>>> Saravanan,
>>>
>>> If you can add me to the review for your workbook, I'd appreciate it. I'm
>>> trying to solve a similar problem, of computing THT params for HCI
>>> deployments in order to isolate resources between CephOSDs and
>>> NovaComputes,
>>> and I was also looking to use a Mistral workflow. I'll add you to the
>>> review
>>> of any related work, if you don't mind. Your proposal to get NUMA info
>>> into
>>> Ironic [1] helps me there too. Hope to see you at the PTG.
>>>
>>> Thanks,
>>>   John
>>>
>>> [1] https://review.openstack.org/396147
>>>
>>>
>>>>> would you be able to join the PTG to help us with the session on the
>>>>> overcloud settings optimization?
>>>>
>>>>
>>>> I will come back on this, as I have not planned for it yet. If it
>>>> works out, I will update the etherpad.
>>>>
>>>> Regards,
>>>> Saravanan KR
>>>>
>>>>
>>>> On Wed, Jan 11, 2017 at 5:10 AM, Giulio Fidente <gfide...@redhat.com>
>>>> wrote:
>>>>>
>>>>>
>>>>> On 01/04/2017 09:13 AM, Saravanan KR wrote:
>>>>>>
>>>>>>
>>>>>>
>>>>>> Hello,
>>>>>>
>>>>>> The aim of this mail is to ease the DPDK deployment with TripleO. I
>>>>>> would like to see if the approach of deriving THT parameter based on
>>>>>> introspection data, with a high level input would be feasible.
>>>>>>
>>>>>

Re: [openstack-dev] [tripleo] Mistral Workflow for deriving THT parameters

2017-01-11 Thread Saravanan KR
Thanks John, I would really appreciate if you could tag me on the
reviews. I will do the same for mine too.

Regards,
Saravanan KR

On Wed, Jan 11, 2017 at 8:03 PM, John Fulton <johfu...@redhat.com> wrote:
> On 01/11/2017 12:56 AM, Saravanan KR wrote:
>>
>> Thanks Emilien and Giulio for your valuable feedback. I will start
>> working towards finalizing the workbook and the actions required.
>
>
> Saravanan,
>
> If you can add me to the review for your workbook, I'd appreciate it. I'm
> trying to solve a similar problem, of computing THT params for HCI
> deployments in order to isolate resources between CephOSDs and NovaComputes,
> and I was also looking to use a Mistral workflow. I'll add you to the review
> of any related work, if you don't mind. Your proposal to get NUMA info into
> Ironic [1] helps me there too. Hope to see you at the PTG.
>
> Thanks,
>   John
>
> [1] https://review.openstack.org/396147
>
>
>>> would you be able to join the PTG to help us with the session on the
>>> overcloud settings optimization?
>>
>> I will come back on this, as I have not planned for it yet. If it
>> works out, I will update the etherpad.
>>
>> Regards,
>> Saravanan KR
>>
>>
>> On Wed, Jan 11, 2017 at 5:10 AM, Giulio Fidente <gfide...@redhat.com>
>> wrote:
>>>
>>> On 01/04/2017 09:13 AM, Saravanan KR wrote:
>>>>
>>>>
>>>> Hello,
>>>>
>>>> The aim of this mail is to ease the DPDK deployment with TripleO. I
>>>> would like to see if the approach of deriving THT parameter based on
>>>> introspection data, with a high level input would be feasible.
>>>>
>>>> Let me brief on the complexity of certain parameters, which are
>>>> related to DPDK. Following parameters should be configured for a good
>>>> performing DPDK cluster:
>>>> * NeutronDpdkCoreList (puppet-vswitch)
>>>> * ComputeHostCpusList (PreNetworkConfig [4], puppet-vswitch) (under
>>>> review)
>>>> * NovaVcpuPinset (puppet-nova)
>>>>
>>>> * NeutronDpdkSocketMemory (puppet-vswitch)
>>>> * NeutronDpdkMemoryChannels (puppet-vswitch)
>>>> * ComputeKernelArgs (PreNetworkConfig [4]) (under review)
>>>> * Interface to bind DPDK driver (network config templates)
>>>>
>>>> The complexity of deciding some of these parameters is explained in
>>>> the blog [1], where the CPUs has to be chosen in accordance with the
>>>> NUMA node associated with the interface. We are working a spec [2], to
>>>> collect the required details from the baremetal via the introspection.
>>>> The proposal is to create mistral workbook and actions
>>>> (tripleo-common), which will take minimal inputs and decide the actual
>>>> value of parameters based on the introspection data. I have created
>>>> simple workbook [3] with what I have in mind (not final, only
>>>> wireframe). The expected output of this workflow is to return the list
>>>> of inputs for "parameter_defaults",  which will be used for the
>>>> deployment. I would like to hear from the experts, if there is any
>>>> drawbacks with this approach or any other better approach.
>>>
>>>
>>>
>>> hi, I am not an expert, I think John (on CC) knows more but this looks
>>> like
>>> a good initial step to me.
>>>
>>> once we have the workbook in good shape, we could probably integrate it
>>> in
>>> the tripleo client/common to (optionally) trigger it before every
>>> deployment
>>>
>>> would you be able to join the PTG to help us with the session on the
>>> overcloud settings optimization?
>>>
>>> https://etherpad.openstack.org/p/tripleo-ptg-pike
>>> --
>>> Giulio Fidente
>>> GPG KEY: 08D733BA



Re: [openstack-dev] [tripleo] Mistral Workflow for deriving THT parameters

2017-01-10 Thread Saravanan KR
Thanks Emilien and Giulio for your valuable feedback. I will start
working towards finalizing the workbook and the actions required.

> would you be able to join the PTG to help us with the session on the
> overcloud settings optimization?
I will come back on this, as I have not planned for it yet. If it
works out, I will update the etherpad.

Regards,
Saravanan KR


On Wed, Jan 11, 2017 at 5:10 AM, Giulio Fidente <gfide...@redhat.com> wrote:
> On 01/04/2017 09:13 AM, Saravanan KR wrote:
>>
>> Hello,
>>
>> The aim of this mail is to ease the DPDK deployment with TripleO. I
>> would like to see if the approach of deriving THT parameter based on
>> introspection data, with a high level input would be feasible.
>>
>> Let me brief on the complexity of certain parameters, which are
>> related to DPDK. Following parameters should be configured for a good
>> performing DPDK cluster:
>> * NeutronDpdkCoreList (puppet-vswitch)
>> * ComputeHostCpusList (PreNetworkConfig [4], puppet-vswitch) (under
>> review)
>> * NovaVcpuPinset (puppet-nova)
>>
>> * NeutronDpdkSocketMemory (puppet-vswitch)
>> * NeutronDpdkMemoryChannels (puppet-vswitch)
>> * ComputeKernelArgs (PreNetworkConfig [4]) (under review)
>> * Interface to bind DPDK driver (network config templates)
>>
>> The complexity of deciding some of these parameters is explained in
>> the blog [1], where the CPUs has to be chosen in accordance with the
>> NUMA node associated with the interface. We are working a spec [2], to
>> collect the required details from the baremetal via the introspection.
>> The proposal is to create mistral workbook and actions
>> (tripleo-common), which will take minimal inputs and decide the actual
>> value of parameters based on the introspection data. I have created
>> simple workbook [3] with what I have in mind (not final, only
>> wireframe). The expected output of this workflow is to return the list
>> of inputs for "parameter_defaults",  which will be used for the
>> deployment. I would like to hear from the experts, if there is any
>> drawbacks with this approach or any other better approach.
>
>
> hi, I am not an expert, I think John (on CC) knows more but this looks like
> a good initial step to me.
>
> once we have the workbook in good shape, we could probably integrate it in
> the tripleo client/common to (optionally) trigger it before every deployment
>
> would you be able to join the PTG to help us with the session on the
> overcloud settings optimization?
>
> https://etherpad.openstack.org/p/tripleo-ptg-pike
> --
> Giulio Fidente
> GPG KEY: 08D733BA



[openstack-dev] [tripleo] Mistral Workflow for deriving THT parameters

2017-01-04 Thread Saravanan KR
Hello,

The aim of this mail is to ease the DPDK deployment with TripleO. I
would like to see if the approach of deriving THT parameter based on
introspection data, with a high level input would be feasible.

Let me brief you on the complexity of certain parameters related to
DPDK. The following parameters should be configured for a
well-performing DPDK cluster:
* NeutronDpdkCoreList (puppet-vswitch)
* ComputeHostCpusList (PreNetworkConfig [4], puppet-vswitch) (under review)
* NovaVcpuPinset (puppet-nova)

* NeutronDpdkSocketMemory (puppet-vswitch)
* NeutronDpdkMemoryChannels (puppet-vswitch)
* ComputeKernelArgs (PreNetworkConfig [4]) (under review)
* Interface to bind DPDK driver (network config templates)

The complexity of deciding some of these parameters is explained in
the blog [1], where the CPUs have to be chosen in accordance with the
NUMA node associated with the interface. We are working on a spec [2] to
collect the required details from the baremetal via the introspection.
The proposal is to create a mistral workbook and actions
(tripleo-common), which will take minimal inputs and decide the actual
values of the parameters based on the introspection data. I have created
a simple workbook [3] with what I have in mind (not final, only a
wireframe). The expected output of this workflow is to return the list
of inputs for "parameter_defaults", which will be used for the
deployment. I would like to hear from the experts if there are any
drawbacks with this approach, or any other better approach.

This workflow will also ease DPDK integration for the TripleO UI, as
the user has to choose only the interface for DPDK [and optionally,
the number of CPUs required for the PMD and the host]. Of course,
introspection should be completed first; with that done, it will be
easy to deploy a DPDK cluster.

There is added complexity if the cluster contains heterogeneous nodes,
for example a cluster having HP and DELL machines with different CPU
layouts; we would need to enhance the workflow to take actions based
on roles/nodes, which brings in a requirement of localizing the
above-mentioned variables per role. For now, consider this proposal
for a homogeneous cluster; if there is value in it, I will work
towards heterogeneous clusters too.
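
As a rough illustration of the kind of derivation such a mistral
action would perform, the NUMA-aware selection could look like the
sketch below. The introspection data layout, field names and memory
values here are hypothetical, not the final workbook:

```python
def derive_dpdk_params(topology, dpdk_nic, pmd_cpus_per_numa=2):
    """Derive DPDK parameter_defaults from an introspected NUMA topology.

    topology is a hypothetical introspection layout:
      {"nics": [{"name": ..., "numa_node": ...}],
       "cpus": [{"cpu": ..., "numa_node": ..., "thread_siblings": [...]}]}
    """
    # PMD threads must run on the NUMA node that hosts the DPDK NIC
    nic_node = next(n["numa_node"] for n in topology["nics"]
                    if n["name"] == dpdk_nic)
    node_count = 1 + max(c["numa_node"] for c in topology["cpus"])
    local = [c for c in topology["cpus"] if c["numa_node"] == nic_node]
    # Take whole thread-sibling sets so a physical core is never split
    pmd_cpus = sorted({s for c in local[:pmd_cpus_per_numa]
                       for s in c["thread_siblings"]})
    # Allocate hugepage socket memory only on the NIC's NUMA node
    socket_mem = ",".join("2048" if n == nic_node else "0"
                          for n in range(node_count))
    return {"NeutronDpdkCoreList": ",".join(str(c) for c in pmd_cpus),
            "NeutronDpdkSocketMemory": socket_mem}
```

A real action would also derive NovaVcpuPinset, ComputeHostCpusList
and ComputeKernelArgs from the same data and emit the whole result
under "parameter_defaults".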

Please share your thoughts.

Regards,
Saravanan KR


[1] https://krsacme.github.io/blog/post/dpdk-pmd-cpu-list/
[2] https://review.openstack.org/#/c/396147/
[3] https://gist.github.com/krsacme/c5be089d6fa216232d49c85082478419
[4] 
https://review.openstack.org/#/c/411797/6/extraconfig/pre_network/host_config_and_reboot.role.j2.yaml

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] [ironic] Need to update kernel parameters on local boot

2016-12-13 Thread Saravanan KR
Hi Oliver,

During the deployment, Ironic will start the node with the IPA
ramdisk, which will copy the overcloud image to the node's disk, then
configure the grub cfg [1], set the node to local boot and reboot the
node, after which the node will boot with the overcloud image. So no
reboot is required in the overcloud image, as the "deploy steps" will
be run by the IPA itself.

Regards,
Saravanan KR

[1] 
https://github.com/openstack/ironic-python-agent/blob/master/ironic_python_agent/extensions/image.py#L136


On Tue, Dec 13, 2016 at 4:18 PM, Oliver Walsh <owa...@redhat.com> wrote:
> Hi Yolanda,
>
>> these changes will be created by ironic before the image is deployed
>
> The question I'm asking is how are the changed created without a reboot?
>
> Typically when setting this via a manual change or via tuned the
> process is to modify /etc/default/grub, run grub2-mkconfig, and then
> reboot. Are you suggesting we drop in a pre-build grub cfg before
> deployment?
>
> Thanks,
> Ollie
>
> On 13 December 2016 at 10:33, Yolanda Robla Mota <yrobl...@redhat.com> wrote:
>> It won't need a reboot, because these changes will be created by ironic
>> before the image is deployed. So it will boot with the right parameters. The
>> alternative of doing with puppet after the image was deployed, needed a
>> reboot, because the changes were done post-deploy.
>> So ironic build steps are pre-deploy without reboot, puppet changes are
>> post-deploy with a reboot.
>>
>> On Tue, Dec 13, 2016 at 11:24 AM, Oliver Walsh <owa...@redhat.com> wrote:
>>>
>>> Hi,
>>>
>>> Saravanan wrote:
>>> > If ironic "deploy steps" can configure this "tuned" setting and run the
>>> > command
>>>
>>> How does this avoid the reboot?
>>>
>>> Yolanda wrote:
>>> > The idea will be to define custom deployment steps for ironic, like
>>> > including the kernel boot parameters.
>>>
>>> Again, is this avoiding the reboot or just moving it?
>>>
>>> Thanks,
>>> Ollie
>>>
>>> On 13 December 2016 at 09:02, Saravanan KR <skram...@redhat.com> wrote:
>>> > Hi Yolanda,
>>> >
>>> > The flow for "tuned" is to set up the "tuned" configuration files,
>>> > and then activate the profile by running the command "tuned-adm
>>> > tuned-profile-nfv". This command will actually write the required
>>> > configuration files for tuning the host. If ironic "deploy steps" can
>>> > configure this "tuned" setting and run the command, then it is good
>>> > enough.
>>> >
>>> > Regards,
>>> > Saravanan KR
>>> >
>>> > On Tue, Dec 13, 2016 at 1:04 PM, Yolanda Robla Mota
>>> > <yrobl...@redhat.com> wrote:
>>> >> Hi Saravanan
>>> >> Thanks for your comments. With this new module, I guess a reboot is
>>> >> still
>>> >> needed after os-host-config ?
>>> >> Right now we have been guided by TripleO and Ironic people to start
>>> >> using
>>> >> what in Ironic is called "custom deployment steps". An initial spec is
>>> >> reflected here:
>>> >> https://review.openstack.org/#/c/382091
>>> >>
>>> >> The idea will be to define custom deployment steps for ironic, like
>>> >> including the kernel boot parameters. Can that be a solution for your
>>> >> "tuned" needs as well?
>>> >>
>>> >> Best
>>> >> Yolanda
>>> >>
>>> >> On Tue, Dec 13, 2016 at 7:59 AM, Saravanan KR <skram...@redhat.com>
>>> >> wrote:
>>> >>>
>>> >>> Hello,
>>> >>>
>>> >>> Thanks Yolanda for starting the thread. The list of requirements in
>>> >>> the host configuration, related to boot parameters and reboot are:
>>> >>>
>>> >>> * DPDK - For vfio-pci driver binding, iommu support on kernel args is
>>> >>> mandatory, which has to be configured before os-net-config runs
>>> >>> * DPDK & RealTime - Enabling "tuned" profile for nfv or rt, will
>>> >>> update the boot parameters and a reboot is required
>>> >>> * Other items mentioned by Yolanda
>>> >>>
>>> >>> If it is configuring only, the boot parameters, then ironic's deploy
>

Re: [openstack-dev] [tripleo] [ironic] Need to update kernel parameters on local boot

2016-12-13 Thread Saravanan KR
Hi Yolanda,

The flow for "tuned" is to set up the "tuned" configuration files,
and then activate the profile by running the command "tuned-adm
tuned-profile-nfv". This command will actually write the required
configuration files for tuning the host. If ironic "deploy steps" can
configure this "tuned" setting and run the command, then it is good
enough.
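
To make the flow concrete: such a deploy step would essentially lay
down a custom profile under /etc/tuned/<name>/tuned.conf and then run
tuned-adm to activate it. A minimal sketch of rendering such a profile
follows; the parent profile name and cmdline args are illustrative,
not the contents of the real nfv profile:

```python
def render_tuned_profile(isolated_cores, parent="network-latency"):
    """Render a minimal tuned.conf that inherits a parent profile and
    adds kernel boot args for the isolated cores (illustrative only)."""
    return ("[main]\n"
            "include=%(parent)s\n"
            "\n"
            "[bootloader]\n"
            "cmdline=isolcpus=%(cores)s nohz_full=%(cores)s\n"
            % {"parent": parent, "cores": isolated_cores})
```

The [bootloader] section is what makes activating the profile alter
the boot parameters, and hence why a reboot follows.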

Regards,
Saravanan KR

On Tue, Dec 13, 2016 at 1:04 PM, Yolanda Robla Mota <yrobl...@redhat.com> wrote:
> Hi Saravanan
> Thanks for your comments. With this new module, I guess a reboot is still
> needed after os-host-config ?
> Right now we have been guided by TripleO and Ironic people to start using
> what in Ironic is called "custom deployment steps". An initial spec is
> reflected here:
> https://review.openstack.org/#/c/382091
>
> The idea will be to define custom deployment steps for ironic, like
> including the kernel boot parameters. Can that be a solution for your
> "tuned" needs as well?
>
> Best
> Yolanda
>
> On Tue, Dec 13, 2016 at 7:59 AM, Saravanan KR <skram...@redhat.com> wrote:
>>
>> Hello,
>>
>> Thanks Yolanda for starting the thread. The list of requirements in
>> the host configuration, related to boot parameters and reboot are:
>>
>> * DPDK - For vfio-pci driver binding, iommu support on kernel args is
>> mandatory, which has to be configured before os-net-config runs
>> * DPDK & RealTime - Enabling "tuned" profile for nfv or rt, will
>> update the boot parameters and a reboot is required
>> * Other items mentioned by Yolanda
>>
>> If it were only about configuring the boot parameters, then ironic's
>> deploy feature may help, but there is the additional requirement of
>> enabling the "tuned" profile, which tunes the host for the required
>> configuration and also requires a reboot, as it will alter the boot
>> parameters. If we can collate all the configurations which require a
>> reboot, we will improve the overall reboot time. And if we reboot
>> before the actual openstack services are started, then the reboot
>> time _may_ improve.
>>
>> Can I propose a *new* module for TripleO deployments, like >
>> os-host-config <, which will run after os-collect-config and before
>> os-net-config? Then we can collate all the host-specific
>> configuration inside this module. This module can be a set of
>> ansible scripts which will only configure the host. Of course the
>> parameters to this module should be provided via os-collect-config.
>> Separating the host configuration will also help in the
>> containerized TripleO deployment.
>>
>> Or any other better alternatives are welcome.
>>
>> Please pour in your views if you think for/against it.
>>
>> Regards,
>> Saravanan KR
>>
>>
>> On Fri, Dec 2, 2016 at 9:31 PM, Yolanda Robla Mota <yrobl...@redhat.com>
>> wrote:
>> > Hi , Dmitry
>> > That's what i didn't get very clear. If all the deployment steps are
>> > pre-imaging as that statement says, or every deploy step could be isolated
>> > and configured somehow.
>> > I'm also a bit confused with that spec, because it mixes the concept of
>> > "deployment steps", will all the changes needed for runtime RAID. Could it
>> > be possible to separate into two separate ones?
>> >
>> > - Original Message -
>> > From: "Dmitry Tantsur" <dtant...@redhat.com>
>> > To: openstack-dev@lists.openstack.org
>> > Sent: Friday, December 2, 2016 3:51:30 PM
>> > Subject: Re: [openstack-dev] [tripleo] [ironic] Need to update kernel
>> > parameters on local boot
>> >
>> > On 12/02/2016 01:28 PM, Yolanda Robla Mota wrote:
>> >> Hi Dmitry
>> >>
>> >> So we've been looking at that spec you suggested, but we are wondering
>> >> if that will be useful for our use case. As the text says:
>> >>
>> >> The ``ironic-python-agent`` project and ``agent`` driver will be
>> >> adjusted to
>> >> support ``get_deploy_steps``. That way, ``ironic-python-agent`` will be
>> >> able
>> >> to declare deploy steps to run prior to disk imaging, and operators
>> >> will be
>> >> able to extend ``ironic-python-agent`` to add any custom step.
>> >>
>> >> Our needs are different, actually we need to create a deployment step
>> >> after imaging. We'd need an step that drops config on /etc/default/grub ,
>> >> and updates it. This is a post-imagin

Re: [openstack-dev] [tripleo] [ironic] Need to update kernel parameters on local boot

2016-12-12 Thread Saravanan KR
Hello,

Thanks Yolanda for starting the thread. The list of requirements in
the host configuration, related to boot parameters and reboot are:

* DPDK - For vfio-pci driver binding, iommu support on kernel args is
mandatory, which has to be configured before os-net-config runs
* DPDK & RealTime - Enabling "tuned" profile for nfv or rt, will
update the boot parameters and a reboot is required
* Other items mentioned by Yolanda

If it were only about configuring the boot parameters, then ironic's
deploy feature may help, but there is the additional requirement of
enabling the "tuned" profile, which tunes the host for the required
configuration and also requires a reboot, as it will alter the boot
parameters. If we can collate all the configurations which require a
reboot, we will improve the overall reboot time. And if we reboot
before the actual openstack services are started, then the reboot time
_may_ improve.

Can I propose a *new* module for TripleO deployments, like >
os-host-config <, which will run after os-collect-config and before
os-net-config? Then we can collate all the host-specific configuration
inside this module. This module can be a set of ansible scripts which
will only configure the host. Of course the parameters to this module
should be provided via os-collect-config. Separating the host
configuration will also help in the containerized TripleO deployment.

Or any other better alternatives are welcome.

Please pour in your views if you think for/against it.

Regards,
Saravanan KR


On Fri, Dec 2, 2016 at 9:31 PM, Yolanda Robla Mota <yrobl...@redhat.com> wrote:
> Hi , Dmitry
> That's what i didn't get very clear. If all the deployment steps are 
> pre-imaging as that statement says, or every deploy step could be isolated 
> and configured somehow.
> I'm also a bit confused with that spec, because it mixes the concept of 
> "deployment steps", will all the changes needed for runtime RAID. Could it be 
> possible to separate into two separate ones?
>
> - Original Message -
> From: "Dmitry Tantsur" <dtant...@redhat.com>
> To: openstack-dev@lists.openstack.org
> Sent: Friday, December 2, 2016 3:51:30 PM
> Subject: Re: [openstack-dev] [tripleo] [ironic] Need to update kernel 
> parameters on local boot
>
> On 12/02/2016 01:28 PM, Yolanda Robla Mota wrote:
>> Hi Dmitry
>>
>> So we've been looking at that spec you suggested, but we are wondering if 
>> that will be useful for our use case. As the text says:
>>
>> The ``ironic-python-agent`` project and ``agent`` driver will be adjusted to
>> support ``get_deploy_steps``. That way, ``ironic-python-agent`` will be able
>> to declare deploy steps to run prior to disk imaging, and operators will be
>> able to extend ``ironic-python-agent`` to add any custom step.
>>
>> Our needs are different, actually we need to create a deployment step after 
>> imaging. We'd need an step that drops config on /etc/default/grub , and 
>> updates it. This is a post-imaging deploy step, that modifies the base 
>> image. Could ironic support these kind of steps, if there is a base system 
>> to just define per-user steps?
>
> I thought that all deployment operations are converted to steps, with
> partitioning, writing the image, writing the configdrive and installing the 
> boot
> loader being four default ones (as you see, two steps actually happen after 
> the
> image is written).
>
>>
>> The idea we had on mind is:
>> - from tripleo, add a property to each flavor, that defines the boot 
>> parameters:  openstack flavor set compute --property 
>> os:kernel_boot_params='abc'
>> - define a "ironic post-imaging deploy step", that will grab this property 
>> from the flavor, drop it on /etc/default/grub and regenerate it
>> - then on local boot, the proper kernel parameters will be applied
>>
>> What is your feedback there?
>>
>> - Original Message -
>> From: "Dmitry Tantsur" <dtant...@redhat.com>
>> To: openstack-dev@lists.openstack.org
>> Sent: Friday, December 2, 2016 12:44:29 PM
>> Subject: Re: [openstack-dev] [tripleo] [ironic] Need to update kernel 
>> parameters on local boot
>>
>> On 11/28/2016 04:46 PM, Jay Faulkner wrote:
>>>
>>>> On Nov 28, 2016, at 7:36 AM, Yolanda Robla Mota <yrobl...@redhat.com> 
>>>> wrote:
>>>>
>>>> Hi, good afternoon
>>>>
>>>> I wanted to start an email thread about how to properly setup kernel 
>>>> parameters on local boot, for our overcloud images on TripleO.
>>>> These parameters may vary depending on the needs of our end users, and 
>>>> even ca

Re: [openstack-dev] [tripleo] Setting kernel args to overcloud nodes

2016-09-21 Thread Saravanan KR
I have been working on the user-data scripts (first-boot) for updating
the kernel args on the overcloud node [1]. The pre-condition is that
the kernel args have to be applied and the node has to be restarted
before os-net-config runs.

I ran into a problem of the provisioning network not getting an IP
after the reboot in the user-data script. While investigating, I
figured out that network.service starts the NICs in alphanumeric
order, and the first NIC is not the one used for provisioning.
network.service initiates a DHCP DISCOVER on it; when that times out,
network.service goes to a failed state and all the other interfaces
are left in the DOWN state. If I manually bring the interface up (via
the ipmi console), then everything proceeds fine without any issue.

To overcome this issue, I have written a small script to find the
provisioning network via the metadata (the metadata has the MAC
address of the provisioning network) and set BOOTPROTO=none in the
ifcfg files of all interfaces except the provisioning one. There is
still an issue of the IP not being ready at the time of querying the
metadata; I temporarily added a sleep, which works around it. The
user-data script [1] has all these fixes and has been tested on a
baremetal overcloud node.

If anyone has a better way of doing it, you are more than welcome to suggest.
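
The core of the workaround, stripped of the metadata query and the
sleep, amounts to rewriting the ifcfg files. A sketch of that step
(the function name and file layout are illustrative, not the gist's
actual code):

```python
import re

def disable_non_provisioning(ifcfg_files, provisioning_mac):
    """Given {filename: ifcfg contents}, set BOOTPROTO=none on every
    interface whose HWADDR differs from the provisioning MAC, so that
    network.service does not DHCP-timeout on the wrong NIC."""
    result = {}
    for name, body in ifcfg_files.items():
        match = re.search(r"^HWADDR=(\S+)", body, re.MULTILINE)
        if match and match.group(1).lower() == provisioning_mac.lower():
            result[name] = body  # leave the provisioning NIC untouched
        else:
            result[name] = re.sub(r"^BOOTPROTO=.*$", "BOOTPROTO=none",
                                  body, flags=re.MULTILINE)
    return result
```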

Regards,
Saravanan KR

[1] https://gist.github.com/krsacme/1234bf024ac917c74913827298840c1c

On Wed, Jul 27, 2016 at 6:52 PM, Saravanan KR <skram...@redhat.com> wrote:
> Hello,
>
> We are working on SR-IOV & DPDK tripleo integration, in which setting
> the kernel args for huge pages, IOMMU and CPU isolation is required.
> Earlier we were working on setting of kernel args via IPA [1], reasons
> being:
> 1. IPA is installing the boot loader on the overcloud node
> 2. Ironic knows the hardware spec, using which, we can target specific
> args to nodes via introspection rules
>
> As the proposal is to change the image owned file '/etc/default/grub',
> it has been suggested by ironic team to use the instance user data to
> set the kernel args [2][3], instead of IPA. In the suggested approach,
> we are planning to update the file /etc/default/grub, update
> /etc/grub2.cfg and then issue a reboot. The reboot is mandatory because
> os-net-config will configure the DPDK bridges and ports by binding the
> DPDK driver, which requires the kernel args for IOMMU and huge pages
> to be set.
>
> As discussed on the IRC tripleo meeting, we need to ensure that the
> user data with update of kernel args, does not overlap with any other
> puppet configurations. Please let us know if you have any comments on
> this approach.
>
> Regards,
> Saravanan KR
>
> [1] https://review.openstack.org/#/c/331564/
> [2] 
> http://docs.openstack.org/developer/ironic/deploy/install-guide.html#appending-kernel-parameters-to-boot-instances
> [3] 
> http://docs.openstack.org/developer/tripleo-docs/advanced_deployment/extra_config.html#firstboot-extra-configuration

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] Requesting FFE for DPDK and SR-IOV Automation

2016-08-29 Thread Saravanan KR
Hello,

We are working on DPDK and SR-IOV Automation in TripleO. We are at the
last leg of the set of patches pending to be merged:


DPDK (waiting for ovb ha and noha CI):
https://review.openstack.org/#/c/361238/ (THT) with +2 and +1s
https://review.openstack.org/#/c/327705/ (THT) lost workflow due to conflict

SR-IOV (waiting for review):
https://review.openstack.org/#/c/361350/ (puppet-tripleo)
https://review.openstack.org/#/c/361430/ (THT)


Both the changes are low impact and only applicable if the respective
feature is enabled. If these changes don't go through today
(considering the long CI queue), we will require an FFE (for n3).
Please let us know if you need more details.

Regards,
Saravanan KR

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [puppet] [tripleo] Host configuration for CPUAffinity and IRQ Pinning

2016-08-08 Thread Saravanan KR
Hello,

For using DPDK, the CPUs on a compute host have to be partitioned
between the host, the DPDK PMD threads and the guests. In order to
configure the host to use only the specified CPUs, the CPUAffinity [1]
configuration in /etc/systemd/system.conf needs to be used. Along with
CPUAffinity, IRQ repinning [2] needs to be done, to pin the interrupt
requests to the CPUs dedicated to host processes.

We are planning to do the changes for configuring CPUAffinity and IRQ
repinning via puppet. We couldn't relate this configuration to any
existing module. Could you please point us in the right direction to
enable these configurations?
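
For the IRQ pinning half, whichever module ends up owning it would
need to translate the host CPU list into the hex bitmask that
/proc/irq/<n>/smp_affinity expects. A small sketch of that conversion
(helper names are ours, not from any existing module):

```python
def parse_cpu_list(spec):
    """Expand a CPU list string like "0-3,8" into individual CPU ids."""
    cpus = []
    for part in spec.split(","):
        if "-" in part:
            lo, hi = part.split("-")
            cpus.extend(range(int(lo), int(hi) + 1))
        else:
            cpus.append(int(part))
    return cpus

def cpu_mask(cpus):
    """Render CPU ids as the hex bitmask used by smp_affinity."""
    mask = 0
    for cpu in cpus:
        mask |= 1 << cpu
    return format(mask, "x")
```

The same mask format is used when banning CPUs from irqbalance, so one
helper could serve both the per-IRQ and the irqbalance configuration.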

Regards,
Saravanan KR


Note: It is possible to use isolcpus via a grub parameter, but it has
implications [3] for load balancing. So it is recommended to use
CPUAffinity to restrict the CPUs for host processes.

[1] 
https://www.freedesktop.org/software/systemd/man/systemd-system.conf.html#CPUAffinity=
[2] 
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_MRG/1.3/html/Realtime_Tuning_Guide/sect-Realtime_Tuning_Guide-General_System_Tuning-Interrupt_and_Process_Binding.html
[3] https://lists.freedesktop.org/archives/systemd-devel/2016-July/037187.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] Setting kernel args to overcloud nodes

2016-07-27 Thread Saravanan KR
Hello,

We are working on SR-IOV & DPDK tripleo integration, in which setting
the kernel args for huge pages, IOMMU and CPU isolation is required.
Earlier we were working on setting the kernel args via IPA [1], the
reasons being:
1. IPA installs the boot loader on the overcloud node
2. Ironic knows the hardware spec, using which we can target specific
args to nodes via introspection rules

As the proposal is to change the image-owned file '/etc/default/grub',
it has been suggested by the ironic team to use the instance user data
to set the kernel args [2][3], instead of IPA. In the suggested
approach, we are planning to update the file /etc/default/grub, update
/etc/grub2.cfg and then issue a reboot. The reboot is mandatory
because os-net-config will configure the DPDK bridges and ports by
binding the DPDK driver, which requires the kernel args for IOMMU and
huge pages to be set.

As discussed on the IRC tripleo meeting, we need to ensure that the
user data with update of kernel args, does not overlap with any other
puppet configurations. Please let us know if you have any comments on
this approach.
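
The /etc/default/grub edit in that approach boils down to merging the
new args into GRUB_CMDLINE_LINUX before regenerating the config. A
sketch of the merge step (the argument values below are examples, not
the final kernel args):

```python
import re

def add_kernel_args(grub_default, new_args):
    """Merge missing args into GRUB_CMDLINE_LINUX in /etc/default/grub
    text; grub2-mkconfig (and a reboot) must still follow."""
    def merge(match):
        current = match.group(1).split()
        merged = current + [a for a in new_args if a not in current]
        return 'GRUB_CMDLINE_LINUX="%s"' % " ".join(merged)
    return re.sub(r'GRUB_CMDLINE_LINUX="([^"]*)"', merge, grub_default)
```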

Regards,
Saravanan KR

[1] https://review.openstack.org/#/c/331564/
[2] 
http://docs.openstack.org/developer/ironic/deploy/install-guide.html#appending-kernel-parameters-to-boot-instances
[3] 
http://docs.openstack.org/developer/tripleo-docs/advanced_deployment/extra_config.html#firstboot-extra-configuration

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet][networking-ovs-dpdk] Request to add puppet-dpdk module

2016-07-08 Thread Saravanan KR
Just to add a point, we are *still* working on dpdk, and this is not
the final code; it may grow a little. We are looking into adapting the
networking-ovs-dpdk puppet code into an agreeable format as
vswitch::dpdk. We would be glad to work with Sean in this process.

Regards,
Saravanan KR

On Fri, Jul 8, 2016 at 7:20 PM, Emilien Macchi <emil...@redhat.com> wrote:
> On Fri, Jul 8, 2016 at 9:29 AM, Mooney, Sean K <sean.k.moo...@intel.com> 
> wrote:
>> Is there a reason that you are starting a new project instead of 
>> contributing to
>> The networking-ovs-dpdk puppet module?
>>
>> Networking-ovs-dpdk was created to host both the integration code with 
>> neutron and then deployment tool
>> Support for deploying ovs with dpdk for differnet tools.
>
> That is the wrong way to do it, imho.
>
> Puppet modules, Ansible playbooks, Chef cookbooks, etc. Are external
> to the repository because they run their own CI and libraries, etc.
> Moving out the Puppet code is an excellent idea and follows OpenStack
> conventions:
> http://governance.openstack.org/reference/projects/puppet-openstack.html
>
> Where we have one Puppet module per component.
> In the case of dpdk, I would even suggest to not create a new project
> and add the 20 lines of code (yeah, all this discussion for 20 lines
> of code [1]) into openstack/puppet-vswitch.
>
> [1] https://github.com/krsacme/puppet-dpdk/blob/master/manifests/config.pp
>
>
> Let me know if you need help for the move,
> Thanks.
>
>> Currently we support devstack and we have developed a puppet module.
>> The puppet module was developed with the express intention of
>> integrating it with Fuel, packstack and TripleO at a later date. It
>> was created to be a reusable module for other tools to use and build
>> on top of.
>>
>> I will be working on kolla support upstream in kolla this cycle, with
>> networking-ovs-dpdk providing source install support in addition to
>> the binary install support that will be submitted to kolla.
>>
>> A Fuel plugin (developed in OPNFV) was planned to be added to this
>> repo, but that has now been abandoned as support is being added to
>> Fuel core instead.
>>
>> If there is a good technical reason for a separate repo then that is
>> ok, but otherwise it seems wasteful to start another project to
>> develop a puppet module to install ovs with dpdk.
>>
>> Are there any features missing from the networking-ovs-dpdk puppet
>> module that you require? It should be noted that we will be adding
>> support for binary installs from package managers and persistent
>> installs (auto-loading the kernel driver, persistent binding of NICs)
>> this cycle, but if you have any other feature gaps we would be happy
>> to hear about them.
>>
>> Regards
>> Sean.
>>
>>
>>
>>
>>> -Original Message-
>>> From: Saravanan KR [mailto:skram...@redhat.com]
>>> Sent: Friday, July 08, 2016 8:33 AM
>>> To: OpenStack Development Mailing List (not for usage questions) >> d...@lists.openstack.org>
>>> Cc: Emilien Macchi <emac...@redhat.com>; Jaganathan Palanisamy
>>> <jpala...@redhat.com>; Vijay Chundury <vchun...@redhat.com>
>>> Subject: Re: [openstack-dev] [puppet] Request to add puppet-dpdk module
>>>
>>> Also, there is a repository networking-ovs-dpdk[1] for all the dpdk related
>>> changes including puppet. We considered both (puppet-vswitch and networking-
>>> ovs-dpdk).
>>>
>>> And we had chat with Emilien about this. His suggestion is to have it as a 
>>> separate
>>> project to make the modules cleaner like 'puppet-dpdk'.
>>>
>>> Regards,
>>> Saravanan KR
>>>
>>> [1] https://github.com/openstack/networking-ovs-dpdk
>>>
>>> On Fri, Jul 8, 2016 at 2:36 AM, Russell Bryant <rbry...@redhat.com> wrote:
>>> >
>>> >
>>> > On Thu, Jul 7, 2016 at 5:12 AM, Saravanan KR <skram...@redhat.com> wrote:
>>> >>
>>> >> Hello,
>>> >>
>>> >> We are working on blueprint [1] to integrate DPDK with tripleo. In
>>> >> the process, we are planning to add a new puppet module "puppet-dpdk"
>>> >> for the required puppet changes.
>>> >>
>>> >> The initial version of the repository is at github [2]. Note that the
>>> >> changes are not complete yet. It is in progress.
>>> >>
>>> >> Please let us know your views on including this

Re: [openstack-dev] [puppet] Request to add puppet-dpdk module

2016-07-08 Thread Saravanan KR
Thanks Emilien. I definitely agree with the preference for (1). It is
simpler in choosing either vswitch::ovs or vswitch::dpdk for the
deployment.

Regards,
Saravanan KR

On Fri, Jul 8, 2016 at 5:15 PM, Emilien Macchi <emil...@redhat.com> wrote:
> On Fri, Jul 8, 2016 at 2:33 AM, Saravanan KR <skram...@redhat.com> wrote:
>> Also, there is a repository networking-ovs-dpdk[1] for all the dpdk
>> related changes including puppet. We considered both (puppet-vswitch
>> and networking-ovs-dpdk).
>>
>> And we had chat with Emilien about this. His suggestion is to have it
>> as a separate project to make the modules cleaner like 'puppet-dpdk'.
>
> Right, either way would work for me with a slight preference for 1):
> 1) Try to re-use openstack/puppet-vswitch to add dpdk bits (could be fast)
> 2) Move your module to Puppet OpenStack tent (lot of process)
>
> Looking at the code:
> https://github.com/krsacme/puppet-dpdk/blob/master/manifests/config.pp
>
> I honestly think option 1) is simpler for everyone. You could add a
> vswitch::dpdk class in puppet-vswitch with your bits, and that's it.
>
> What do you think?
>
>> Regards,
>> Saravanan KR
>>
>> [1] https://github.com/openstack/networking-ovs-dpdk
>>
>> On Fri, Jul 8, 2016 at 2:36 AM, Russell Bryant <rbry...@redhat.com> wrote:
>>>
>>>
>>> On Thu, Jul 7, 2016 at 5:12 AM, Saravanan KR <skram...@redhat.com> wrote:
>>>>
>>>> Hello,
>>>>
>>>> We are working on blueprint [1] to integrate DPDK with tripleo. In the
>>>> process, we are planning to add a new puppet module "puppet-dpdk" for the
>>>> required puppet changes.
>>>>
>>>> The initial version of the repository is at github [2]. Note that
>>>> the changes are not complete yet. It is in progress.
>>>>
>>>> Please let us know your views on including this new module.
>>>>
>>>> Regards,
>>>> Saravanan KR
>>>>
>>>> [1] https://blueprints.launchpad.net/tripleo/+spec/tripleo-ovs-dpdk
>>>> [2] https://github.com/krsacme/puppet-dpdk
>>>
>>>
>>> I took a quick look at Emilien's request.  In general, including this
>>> functionality in the puppet openstack project makes sense to me.
>>>
>>> It looks like this is installing and configuring openvswitch-dpdk.  Have you
>>> considered integrating DPDK awareness into the existing puppet-vswitch that
>>> configures openvswitch?  Why is a separate puppet-dpdk needed?
>>>
>>> --
>>> Russell Bryant
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>
>
>
> --
> Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Request to add puppet-dpdk module

2016-07-08 Thread Saravanan KR
Also, there is a repository networking-ovs-dpdk[1] for all the dpdk
related changes including puppet. We considered both (puppet-vswitch
and networking-ovs-dpdk).

And we had chat with Emilien about this. His suggestion is to have it
as a separate project to make the modules cleaner like 'puppet-dpdk'.

Regards,
Saravanan KR

[1] https://github.com/openstack/networking-ovs-dpdk

On Fri, Jul 8, 2016 at 2:36 AM, Russell Bryant <rbry...@redhat.com> wrote:
>
>
> On Thu, Jul 7, 2016 at 5:12 AM, Saravanan KR <skram...@redhat.com> wrote:
>>
>> Hello,
>>
>> We are working on blueprint [1] to integrate DPDK with tripleo. In the
>> process, we are planning to add a new puppet module "puppet-dpdk" for the
>> required puppet changes.
>>
>> The initial version of the repository is at github [2]. Note that the
>> changes are not complete yet. It is in progress.
>>
>> Please let us know your views on including this new module.
>>
>> Regards,
>> Saravanan KR
>>
>> [1] https://blueprints.launchpad.net/tripleo/+spec/tripleo-ovs-dpdk
>> [2] https://github.com/krsacme/puppet-dpdk
>
>
> I took a quick look at Emilien's request.  In general, including this
> functionality in the puppet openstack project makes sense to me.
>
> It looks like this is installing and configuring openvswitch-dpdk.  Have you
> considered integrating DPDK awareness into the existing puppet-vswitch that
> configures openvswitch?  Why is a separate puppet-dpdk needed?
>
> --
> Russell Bryant
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [puppet] Request to add puppet-dpdk module

2016-07-07 Thread Saravanan KR
Hello,

We are working on blueprint [1] to integrate DPDK with tripleo. In the
process, we are planning to add a new puppet module "puppet-dpdk" for the
required puppet changes.

The initial version of the repository is at github [2]. Note that the
changes are not complete yet. It is in progress.

Please let us know your views on including this new module.

Regards,
Saravanan KR

[1] https://blueprints.launchpad.net/tripleo/+spec/tripleo-ovs-dpdk
[2] https://github.com/krsacme/puppet-dpdk
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] Handling of 202 response code

2015-11-16 Thread Saravanan KR
Thanks Matthias for educating me. As there is no universal solution to 
this issue, I will check whether the launch-instance page update strategy 
will work for this case.


Regards,
Saravanan KR

On 11/04/2015 03:47 PM, Matthias Runge wrote:

On 04/11/15 09:25, Saravanan KR wrote:


There may be multiple solutions:
1) Wait for the asynchronous operation to complete and then respond
2) Do not trigger a page refresh and respond with 'Operation in progress'
3) If there is a mechanism to know a delete is in progress, do not list the
interface

To decide on the solution, it is important to know how 202 responses
should be handled. Can anyone help with understanding this?

Asynchronous operations are handled in horizon as if they were synchronous
operations. To illustrate: launch an instance and you immediately get
feedback ("launch instance issued"), but you don't get a status update
directly. Horizon polls the nova api for status updates via ajax calls.

So: there is no solution for this yet. You could reuse the update
strategy used for launching an instance (on the instances page), creating a
volume (on the volumes table), etc.

In the ideal case, one would use something like a message bus to get
notified of changes.

Matthias
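The ajax polling strategy Matthias describes can be sketched in a
framework-agnostic way. This is a minimal illustration, not Horizon code:
`poll_until_gone` and `fake_port_list` are made-up names, and the real
implementation lives in Horizon's table row-update machinery rather than in a
blocking loop like this.

```python
import time

def poll_until_gone(fetch, obj_id, interval=0.0, max_polls=5):
    """Poll a listing until obj_id disappears, mimicking the periodic
    ajax calls Horizon issues against the nova api after an async action."""
    for _ in range(max_polls):
        if obj_id not in fetch():
            return True   # operation finished; the table row can be dropped
        time.sleep(interval)
    return False          # still pending after max_polls attempts

# Toy stand-in for nova: the detached port vanishes on the third poll.
state = {"polls": 0}

def fake_port_list():
    state["polls"] += 1
    return ["port-1"] if state["polls"] < 3 else []

print(poll_until_gone(fake_port_list, "port-1"))  # prints: True
```

In Horizon itself the polling is driven from the browser, so each "poll" is a
separate ajax request updating one table row rather than a loop on the server.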



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Horizon] Handling of 202 response code

2015-11-04 Thread Saravanan KR

Hello,

How are HTTP status code 202 responses (from nova) handled in Horizon, so 
that it knows when an asynchronous operation has completed?


Background:
I am working on Bug #1506429 [1]: invoking 'Detach Interface' initiates 
the detach and refreshes the page, but the detached interface is not 
removed from the 'IP Address' list in the instance panel view. It is only 
removed after a manual page refresh (in the browser).


Why:
In Horizon, the 'Detach Interface' action triggers the Nova API [2], which 
returns status code 202 (request accepted, processing 
asynchronously). Without checking the asynchronous result, horizon 
reports the request as 'Detached' and refreshes the page. Since the 
interface detach is still in progress and not yet completed, the interface is 
listed again.


There may be multiple solutions:
1) Wait for the asynchronous operation to complete and then respond
2) Do not trigger a page refresh and respond with 'Operation in progress'
3) If there is a mechanism to know a delete is in progress, do not list the 
interface


To decide on the solution, it is important to know how 202 responses 
should be handled. Can anyone help with understanding this?
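For reference, solutions (1) and (2) above might be combined as in the
following sketch. This is only an illustration of the control flow:
`delete_interface` and `list_interfaces` are placeholders standing in for the
real nova API calls, not actual novaclient methods.

```python
def handle_detach(delete_interface, list_interfaces, port_id, max_polls=10):
    """Issue a detach and interpret the HTTP status code of the reply."""
    status = delete_interface(port_id)
    if status in (200, 204):
        return "detached"                 # completed synchronously
    if status == 202:
        # Solution 1: wait for the asynchronous operation to finish.
        for _ in range(max_polls):
            if port_id not in list_interfaces():
                return "detached"
        # Solution 2: give up waiting and report the operation as pending.
        return "detach in progress"
    raise RuntimeError("detach failed: HTTP %d" % status)

# Toy stand-in for nova: detach is accepted (202) and the port
# disappears from the listing on the second poll.
pending = {"polls": 0}

def fake_delete(port_id):
    return 202

def fake_list():
    pending["polls"] += 1
    return ["port-a"] if pending["polls"] < 2 else []

print(handle_detach(fake_delete, fake_list, "port-a"))  # prints: detached
```

Blocking in the request handler like this is of course not ideal for a web UI, 
which is why the launch-instance approach pushes the polling to the browser.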


Regards,
Saravanan KR

[1] https://bugs.launchpad.net/horizon/+bug/1506429
[2] 
http://developer.openstack.org/api-ref-compute-v2.1.html#deleteAttachedInterface


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev