Re: [Openstack-operators] Multiple floating IPs mapped to multiple vNICs (multi-homing)

2016-12-01 Thread Curtis
On Thu, Dec 1, 2016 at 8:23 AM, Paul Browne  wrote:
> On 01/12/16 14:35, Saverio Proto wrote:
>
> Your policy routing looks good.
> The problem must be somewhere else, where you do the nat maybe ?
>
> Go in the network namespace where there is the neutron router with
> address 10.0.16.1
>
> If you tcpdump there what do you see ?
>
> to be 100% sure about the policy routing just go in the network node
> where you do the nat.
>
> ip netns exec qrouter-<uuid> wget -O /dev/null http://10.0.16.11/
>
> uuid is the uuid of the neutron router where you are natting
>
> I guess this will work.
>
>
> Yes, this does seem to work as expected, in both namespaces;
>
> # Determine the controller hosting the router for 10.0.0.11
> [stack@osp-director-prod ~]$ neutron l3-agent-list-hosting-router
> 5f9d983c-3b51-4411-b921-1a523652d55f
> +--------------------------------------+--------------------------+----------------+-------+----------+
> | id                                   | host                     | admin_state_up | alive | ha_state |
> +--------------------------------------+--------------------------+----------------+-------+----------+
> | 37051003-636f-4fb7-b6d9-1ff9d9182e9d | clc-sby4f-n3.mgt.cluster | True           | :-)   | standby  |
> | b14fcc29-67c0-4420-8abf-433dafde980d | clc-rb15-n1.mgt.cluster  | True           | :-)   | standby  |
> | 37518a78-4b39-463a-8387-2866989bba06 | clc-ra15-n2.mgt.cluster  | True           | :-)   | active   |
> +--------------------------------------+--------------------------+----------------+-------+----------+
>
> # Change into the namespace and test the 10.0.0.11 web-server
> [root@clc-ra15-n2 ~]# ip netns exec
> qrouter-5f9d983c-3b51-4411-b921-1a523652d55f wget -O /tmp/test
> http://10.0.0.11/; head -n 10 /tmp/test
> --2016-12-01 14:54:09--  http://10.0.0.11/
> Connecting to 10.0.0.11:80... connected.
> HTTP request sent, awaiting response... 200 OK
> Length: 10701 (10K) [text/html]
> Saving to: ‘/tmp/test’
> 2016-12-01 14:54:09 (318 MB/s) - ‘/tmp/test’ saved [10701/10701]
>
>
> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
> <html xmlns="http://www.w3.org/1999/xhtml">
>   <head>
>     <title>Apache2 Debian Default Page: It works</title>

Re: [Openstack-operators] Multiple floating IPs mapped to multiple vNICs (multi-homing)

2016-12-01 Thread Paul Browne

On 01/12/16 14:35, Saverio Proto wrote:


Your policy routing looks good.
The problem must be somewhere else, where you do the nat maybe ?

Go in the network namespace where there is the neutron router with
address 10.0.16.1

If you tcpdump there what do you see ?

to be 100% sure about the policy routing just go in the network node
where you do the nat.

ip netns exec qrouter-<uuid> wget -O /dev/null http://10.0.16.11/

uuid is the uuid of the neutron router where you are natting

I guess this will work.


Yes, this does seem to work as expected, in both namespaces;

# Determine the controller hosting the router for 10.0.0.11
[stack@osp-director-prod ~]$ neutron l3-agent-list-hosting-router 
5f9d983c-3b51-4411-b921-1a523652d55f

+--------------------------------------+--------------------------+----------------+-------+----------+
| id                                   | host                     | admin_state_up | alive | ha_state |
+--------------------------------------+--------------------------+----------------+-------+----------+
| 37051003-636f-4fb7-b6d9-1ff9d9182e9d | clc-sby4f-n3.mgt.cluster | True           | :-)   | standby  |
| b14fcc29-67c0-4420-8abf-433dafde980d | clc-rb15-n1.mgt.cluster  | True           | :-)   | standby  |
| 37518a78-4b39-463a-8387-2866989bba06 | clc-ra15-n2.mgt.cluster  | True           | :-)   | active   |
+--------------------------------------+--------------------------+----------------+-------+----------+

# Change into the namespace and test the 10.0.0.11 web-server
[root@clc-ra15-n2 ~]# ip netns exec 
qrouter-5f9d983c-3b51-4411-b921-1a523652d55f wget -O /tmp/test 
http://10.0.0.11/; head -n 10 /tmp/test

--2016-12-01 14:54:09-- http://10.0.0.11/
Connecting to 10.0.0.11:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 10701 (10K) [text/html]
Saving to: ‘/tmp/test’
2016-12-01 14:54:09 (318 MB/s) - ‘/tmp/test’ saved [10701/10701]


"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd";>

http://www.w3.org/1999/xhtml";>
  

Apache2 Debian Default Page: It works

Re: [Openstack-operators] How to tune scheduling for "Insufficient compute resources" (race conditions ?)

2016-12-01 Thread Massimo Sgaravatto
Thanks a lot, George

It looks like this indeed helps!

Cheers, Massimo

2016-11-30 16:04 GMT+01:00 George Mihaiescu :

> Try changing the following in nova.conf and restart the nova-scheduler:
>
> scheduler_host_subset_size = 10
> scheduler_max_attempts = 10
>
> Cheers,
> George
>
> On Wed, Nov 30, 2016 at 9:56 AM, Massimo Sgaravatto <
> massimo.sgarava...@gmail.com> wrote:
>
>> Hi all
>>
>> I have a problem with scheduling in our Mitaka cloud.
>> Basically, when there are a lot of requests for new instances, some of
>> them fail with "Failed to compute_task_build_instances: Exceeded maximum
>> number of retries", and the failures are caused by "Insufficient compute
>> resources: Free memory 2879.50 MB < requested 8192 MB" [*]
>>
>> But there are compute nodes with enough memory that could serve such
>> requests.
>>
>> In the conductor log I also see messages reporting that "Function
>> 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted
>> interval by xxx sec" [**]
>>
>>
>> My understanding is that:
>>
>> - VM a is scheduled to a certain compute node
>> - the scheduler chooses the same compute node for VM b before the info
>> for that compute node is updated (so the 'size' of VM a is not taken into
>> account)
>>
>> Does this make sense, or am I totally wrong?
>>
>> Any hints about how to cope with such scenarios, besides increasing
>> scheduler_max_attempts?
>>
>> scheduler_default_filters is set to:
>>
>> scheduler_default_filters = AggregateInstanceExtraSpecsFilter,AggregateMultiTenancyIsolation,RetryFilter,AvailabilityZoneFilter,RamFilter,CoreFilter,AggregateRamFilter,AggregateCoreFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter
>>
>>
>> Thanks a lot, Massimo
>>
>> [*]
>>
>> 2016-11-30 15:10:20.233 25140 WARNING nova.scheduler.utils
>> [req-ec8c0bdc-b413-4cab-b925-eb8f11212049 840c96b6fb1e4972beaa3d30ade10cc7
>> d27fe2becea94a3e980fb9f66e2f291a - - -] Failed to compute_task_build_instances:
>> Exceeded maximum number of retries. Exceeded max scheduling attempts 5 for
>> instance 314eccd0-fc73-446f-8138-7d8d3c8644f7. Last exception: Insufficient
>> compute resources: Free memory 2879.50 MB < requested 8192 MB.
>> 2016-11-30 15:10:20.233 25140 WARNING nova.scheduler.utils
>> [req-ec8c0bdc-b413-4cab-b925-eb8f11212049 840c96b6fb1e4972beaa3d30ade10cc7
>> d27fe2becea94a3e980fb9f66e2f291a - - -] [instance: 314eccd0-fc73-446f-8138-7d8d3c8644f7]
>> Setting instance to ERROR state.
>>
>>
>> [**]
>>
>> 2016-11-30 15:10:48.873 25128 WARNING oslo.service.loopingcall [-]
>> Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run
>> outlasted interval by 9.08 sec
>> 2016-11-30 15:10:54.372 25142 WARNING oslo.service.loopingcall [-]
>> Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run
>> outlasted interval by 9.33 sec
>> 2016-11-30 15:10:54.375 25140 WARNING oslo.service.loopingcall [-]
>> Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run
>> outlasted interval by 9.32 sec
>> 2016-11-30 15:10:54.376 25129 WARNING oslo.service.loopingcall [-]
>> Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run
>> outlasted interval by 9.30 sec
>> 2016-11-30 15:10:54.381 25138 WARNING oslo.service.loopingcall [-]
>> Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run
>> outlasted interval by 9.24 sec
>> 2016-11-30 15:10:54.381 25139 WARNING oslo.service.loopingcall [-]
>> Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run
>> outlasted interval by 9.28 sec
>> 2016-11-30 15:10:54.382 25143 WARNING oslo.service.loopingcall [-]
>> Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run
>> outlasted interval by 9.24 sec
>> 2016-11-30 15:10:54.385 25141 WARNING oslo.service.loopingcall [-]
>> Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run
>> outlasted interval by 9.11 sec
>> 2016-11-30 15:11:01.964 25128 WARNING oslo.service.loopingcall [-]
>> Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run
>> outlasted interval by 3.09 sec
>> 2016-11-30 15:11:05.503 25142 WARNING oslo.service.loopingcall [-]
>> Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run
>> outlasted interval by 1.13 sec
>> 2016-11-30 15:11:05.506 25138 WARNING oslo.service.loopingcall [-]
>> Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run
>> outlasted interval by 1.12 sec
>> 2016-11-30 15:11:05.509 25139 WARNING oslo.service.loopingcall [-]
>> Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run
>> outlasted interval by 1.13 sec
>> 2016-11-30 15:11:05.512 25141 WARNING oslo.service.loopingcall [-]
>> Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run
>> outlasted interval by 1.13 sec
>> 2016-11-30 15:11:05.525 25143 WARNING oslo.service.loopingcall [-]
>> Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run
>> outlasted interval by 1.14
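
(For reference, George's two settings go in the [DEFAULT] section of nova.conf on the scheduler hosts; a minimal sketch, with the caveat that defaults and exact behaviour vary by release:)

[DEFAULT]
# Pick the target host at random from the N best-ranked hosts instead of always
# the single top-ranked one, which reduces races between concurrent requests
scheduler_host_subset_size = 10
# Allow more reschedule attempts before the instance is put into ERROR
scheduler_max_attempts = 10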

Re: [Openstack-operators] Multiple floating IPs mapped to multiple vNICs (multi-homing)

2016-12-01 Thread Saverio Proto
Your policy routing looks good.
The problem must be somewhere else, maybe where you do the NAT?

Go into the network namespace that holds the neutron router with
address 10.0.16.1.

If you tcpdump there, what do you see?
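
For example, a capture along these lines inside the router namespace (the UUID and the filter expression are just illustrative) would show whether replies for 10.0.16.11 ever reach the router:

ip netns exec qrouter-<uuid> tcpdump -n -i any 'host 10.0.16.11 and tcp port 80'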

To be 100% sure about the policy routing, just go to the network node
where you do the NAT and run:

ip netns exec qrouter-<uuid> wget -O /dev/null http://10.0.16.11/

where <uuid> is the UUID of the neutron router where you are NATting.

I guess this will work.

Oh, did you double-check the security groups?

Saverio

2016-12-01 15:18 GMT+01:00 Paul Browne :
> Hello Saverio,
>
> Many thanks for the reply, I'll answer your queries below;
>
> On 01/12/16 12:49, Saverio Proto wrote:
>>
>> Hello,
>>
>> while the problem is in place, you should share the output of
>>
>> ip rule show
>> ip route show table 1
>>
>> It could be just a problem in your ruleset
>
>
> Of course, these are those outputs ;
>
> root@test1:~# ip rule show
> 0:  from all lookup local
> 32764:  from all to 10.0.16.11 lookup rt2
> 32765:  from 10.0.16.11 lookup rt2
> 32766:  from all lookup main
> 32767:  from all lookup default
>
> root@test1:~# ip route show table 1
> default via 10.0.16.1 dev eth1
> 10.0.16.0/24 dev eth1  scope link  src 10.0.16.11
>
>
>>
>> and, which one is your webserver ? can you tcpdump to make sure reply
>> packets get out on the NIC with src address 10.0.16.11 ?
>>
>> Saverio
>
>
> The instance has its two vNICs with source addresses 10.0.0.11 & 10.0.16.11,
> and the web-server is listening on both.
>
> The HTTP packets do seem to be getting out from 10.0.16.11 as source, but
> are stopped elsewhere upstream.
>
> I've attached two pcaps showing HTTP reply packets, one from 10.0.0.11
> (first vNIC; HTTP request and reply works to a remote client) and one from
> 10.0.16.11 (second vNIC; HTTP request is sent, reply not received by remote
> client). In the latter case, the server starts to make retransmissions to
> the remote client.
>
> Kind regards,
> Paul Browne
>
>
>
>
>>
>>
>> 2016-12-01 13:08 GMT+01:00 Paul Browne :
>>>
>>> Hello Operators,
>>>
>>> For reasons not yet amenable to persuasion otherwise, a customer of our
>>> ML2+OVS classic implemented OpenStack would like to map two floating IPs
>>> pulled from two separate external network floating IP pools, to two
>>> different vNICs on his instances.
>>>
>>> The floating IP pools correspond to one pool routable from the external
>>> Internet and another, RFC1918 pool routable from internal University
>>> networks.
>>>
>>> The tenant private networks are arranged as two RFC1918 VXLANs, each with
>>> a
>>> router to one of the two external networks.
>>>
>>> 10.0.0.0/24 -> route to -> 128.232.226.0/23
>>>
>>> 10.0.16.0/24 -> route to -> 172.24.46.0/23
>>>
>>>
>>> Mapping two floating IPs to instances isn't possible in Horizon, but is
>>> possible from command-line. This doesn't immediately work, however, as
>>> the
>>> return traffic from the instance needs to be sent back through the
>>> correct
>>> router gateway interface and not the instance default gateway.
>>>
>>> I'd initially thought this would be possible by placing a second routing
>>> table on the instances to handle the return traffic;
>>>
>>> debian@test1:/etc/iproute2$ less rt_tables
>>> #
>>> # reserved values
>>> #
>>> 255 local
>>> 254 main
>>> 253 default
>>> 0   unspec
>>> #
>>> # local
>>> #
>>> #1  inr.ruhep
>>> 1 rt2
>>>
>>> debian@test1:/etc/network$ less interfaces
>>> # The loopback network interface
>>> auto lo
>>> iface lo inet loopback
>>>
>>> # The first vNIC, eth0
>>> auto eth0
>>> iface eth0 inet dhcp
>>>
>>> # The second vNIC, eth1
>>> auto eth1
>>> iface eth1 inet static
>>>  address 10.0.16.11
>>>  netmask 255.255.255.0
>>>  post-up ip route add 10.0.16.0/24 dev eth1 src 10.0.16.11 table
>>> rt2
>>>  post-up ip route add default via 10.0.16.1 dev eth1 table rt2
>>>  post-up ip rule add from 10.0.16.11/32 table rt2
>>>  post-up ip rule add to 10.0.16.11/32 table rt2
>>>
>>> And this works well for SSH and ICMP, but curiously not for HTTP traffic.
>>>
>>>
>>> Requests to a web-server listening on all vNICs are sent but replies not
>>> received when the requests are sent to the second mapped floating IP
>>> (HTTP
>>> requests and replies work as expected when sent to the first mapped
>>> floating
>>> IP). The requests are logged in both cases however, so traffic is making
>>> it
>>> to the instance in both cases.
>>>
>>> I'd say this is clearly an unusual (and possibly un-natural) arrangement,
>>> but I was wondering whether anyone else on Operators had come across a
>>> similar situation in trying to map floating IPs from two different
>>> external
>>> networks to an instance?
>>>
>>> Kind regards,
>>>
>>> Paul Browne
>>>
>>> --
>>> ***
>>> Paul Browne
>>> Research Computing Platforms
>>> University Information Services
>>> Roger Needham Building
>>> JJ Thompson Avenue
>>> University of Cambridge
>>

Re: [Openstack-operators] Multiple floating IPs mapped to multiple vNICs (multi-homing)

2016-12-01 Thread Paul Browne

Hello Saverio,

Many thanks for the reply, I'll answer your queries below;

On 01/12/16 12:49, Saverio Proto wrote:

Hello,

while the problem is in place, you should share the output of

ip rule show
ip route show table 1

It could be just a problem in your ruleset


Of course, here are those outputs:

root@test1:~# ip rule show
0:  from all lookup local
32764:  from all to 10.0.16.11 lookup rt2
32765:  from 10.0.16.11 lookup rt2
32766:  from all lookup main
32767:  from all lookup default

root@test1:~# ip route show table 1
default via 10.0.16.1 dev eth1
10.0.16.0/24 dev eth1  scope link  src 10.0.16.11




and, which one is your webserver ? can you tcpdump to make sure reply
packets get out on the NIC with src address 10.0.16.11 ?

Saverio


The instance has its two vNICs with source addresses 10.0.0.11 & 
10.0.16.11, and the web-server is listening on both.


The HTTP packets do seem to be getting out from 10.0.16.11 as source, 
but are stopped elsewhere upstream.


I've attached two pcaps showing HTTP reply packets, one from 10.0.0.11 
(first vNIC; HTTP request and reply works to a remote client) and one 
from 10.0.16.11 (second vNIC; HTTP request is sent, reply not received 
by remote client). In the latter case, the server starts to make 
retransmissions to the remote client.


Kind regards,
Paul Browne






2016-12-01 13:08 GMT+01:00 Paul Browne :

Hello Operators,

For reasons not yet amenable to persuasion otherwise, a customer of our
ML2+OVS classic implemented OpenStack would like to map two floating IPs
pulled from two separate external network floating IP pools, to two
different vNICs on his instances.

The floating IP pools correspond to one pool routable from the external
Internet and another, RFC1918 pool routable from internal University
networks.

The tenant private networks are arranged as two RFC1918 VXLANs, each with a
router to one of the two external networks.

10.0.0.0/24 -> route to -> 128.232.226.0/23

10.0.16.0/24 -> route to -> 172.24.46.0/23


Mapping two floating IPs to instances isn't possible in Horizon, but is
possible from command-line. This doesn't immediately work, however, as the
return traffic from the instance needs to be sent back through the correct
router gateway interface and not the instance default gateway.

I'd initially thought this would be possible by placing a second routing
table on the instances to handle the return traffic;

debian@test1:/etc/iproute2$ less rt_tables
#
# reserved values
#
255 local
254 main
253 default
0   unspec
#
# local
#
#1  inr.ruhep
1 rt2

debian@test1:/etc/network$ less interfaces
# The loopback network interface
auto lo
iface lo inet loopback

# The first vNIC, eth0
auto eth0
iface eth0 inet dhcp

# The second vNIC, eth1
auto eth1
iface eth1 inet static
 address 10.0.16.11
 netmask 255.255.255.0
 post-up ip route add 10.0.16.0/24 dev eth1 src 10.0.16.11 table rt2
 post-up ip route add default via 10.0.16.1 dev eth1 table rt2
 post-up ip rule add from 10.0.16.11/32 table rt2
 post-up ip rule add to 10.0.16.11/32 table rt2

And this works well for SSH and ICMP, but curiously not for HTTP traffic.


Requests to a web-server listening on all vNICs are sent but replies not
received when the requests are sent to the second mapped floating IP (HTTP
requests and replies work as expected when sent to the first mapped floating
IP). The requests are logged in both cases however, so traffic is making it
to the instance in both cases.

I'd say this is clearly an unusual (and possibly un-natural) arrangement,
but I was wondering whether anyone else on Operators had come across a
similar situation in trying to map floating IPs from two different external
networks to an instance?

Kind regards,

Paul Browne

--
***
Paul Browne
Research Computing Platforms
University Information Services
Roger Needham Building
JJ Thompson Avenue
University of Cambridge
Cambridge
United Kingdom
E-Mail: pf...@cam.ac.uk
Tel: 0044-1223-46548
***


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators



--
***
Paul Browne
Research Computing Platforms
University Information Services
Roger Needham Building
JJ Thompson Avenue
University of Cambridge
Cambridge
United Kingdom
E-Mail: pf...@cam.ac.uk
Tel: 0044-1223-46548
***



eth0.pcap
Description: application/vnd.tcpdump.pcap


eth1.pcap
Description: application/vnd.tcpdump.pcap
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Multiple floating IPs mapped to multiple vNICs (multi-homing)

2016-12-01 Thread Saverio Proto
Hello,

While the problem is in place, you should share the output of:

ip rule show
ip route show table 1

It could be just a problem in your ruleset.

Also, which one is your webserver? Can you tcpdump to make sure reply
packets get out on the NIC with src address 10.0.16.11?
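
For example, something like this on the instance (interface and address taken from this thread) would confirm whether the HTTP replies leave eth1 with source 10.0.16.11:

tcpdump -n -i eth1 'tcp port 80 and src host 10.0.16.11'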

Saverio


2016-12-01 13:08 GMT+01:00 Paul Browne :
> Hello Operators,
>
> For reasons not yet amenable to persuasion otherwise, a customer of our
> ML2+OVS classic implemented OpenStack would like to map two floating IPs
> pulled from two separate external network floating IP pools, to two
> different vNICs on his instances.
>
> The floating IP pools correspond to one pool routable from the external
> Internet and another, RFC1918 pool routable from internal University
> networks.
>
> The tenant private networks are arranged as two RFC1918 VXLANs, each with a
> router to one of the two external networks.
>
> 10.0.0.0/24 -> route to -> 128.232.226.0/23
>
> 10.0.16.0/24 -> route to -> 172.24.46.0/23
>
>
> Mapping two floating IPs to instances isn't possible in Horizon, but is
> possible from command-line. This doesn't immediately work, however, as the
> return traffic from the instance needs to be sent back through the correct
> router gateway interface and not the instance default gateway.
>
> I'd initially thought this would be possible by placing a second routing
> table on the instances to handle the return traffic;
>
> debian@test1:/etc/iproute2$ less rt_tables
> #
> # reserved values
> #
> 255 local
> 254 main
> 253 default
> 0   unspec
> #
> # local
> #
> #1  inr.ruhep
> 1 rt2
>
> debian@test1:/etc/network$ less interfaces
> # The loopback network interface
> auto lo
> iface lo inet loopback
>
> # The first vNIC, eth0
> auto eth0
> iface eth0 inet dhcp
>
> # The second vNIC, eth1
> auto eth1
> iface eth1 inet static
> address 10.0.16.11
> netmask 255.255.255.0
> post-up ip route add 10.0.16.0/24 dev eth1 src 10.0.16.11 table rt2
> post-up ip route add default via 10.0.16.1 dev eth1 table rt2
> post-up ip rule add from 10.0.16.11/32 table rt2
> post-up ip rule add to 10.0.16.11/32 table rt2
>
> And this works well for SSH and ICMP, but curiously not for HTTP traffic.
>
>
> Requests to a web-server listening on all vNICs are sent but replies not
> received when the requests are sent to the second mapped floating IP (HTTP
> requests and replies work as expected when sent to the first mapped floating
> IP). The requests are logged in both cases however, so traffic is making it
> to the instance in both cases.
>
> I'd say this is clearly an unusual (and possibly un-natural) arrangement,
> but I was wondering whether anyone else on Operators had come across a
> similar situation in trying to map floating IPs from two different external
> networks to an instance?
>
> Kind regards,
>
> Paul Browne
>
> --
> ***
> Paul Browne
> Research Computing Platforms
> University Information Services
> Roger Needham Building
> JJ Thompson Avenue
> University of Cambridge
> Cambridge
> United Kingdom
> E-Mail: pf...@cam.ac.uk
> Tel: 0044-1223-46548
> ***
>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] Heat: signaling using SW config/deployment API.

2016-12-01 Thread Pasquale Lepera

Thank you Rabi,

So, if I got it right, looking at the code, the suggested process for a software deployment in our case is a series of tasks (perhaps implemented with python-celery) like this:

1) Create a Swift temp_url dedicated to signalling for the deployment
2) Pass this URL, and all the necessary parameters, to the software config as inputs
3) Create a deployment with an initial state of IN_PROGRESS
4) Monitor the Swift temp_url for a received signal (or a timeout)
5) Update the deployment status to SUCCESS/FAILED
6) Remove the Swift temp_url

Am I right?
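
For step 4, a rough polling sketch (the container and object names are purely illustrative, and the exact swift client behaviour should be double-checked):

# poll for the signal object written through the temp URL, give up after ~10 minutes
for i in $(seq 1 60); do
    swift stat signals deploy-1 >/dev/null 2>&1 && { echo "signal received"; break; }
    sleep 10
done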
 
Regards,
Pasquale Lepera
 
On 1 December 2016 at 11:29, Rabi Mishra wrote:

> Moving to openstack-dev for more visibility and discussion.
>
> We currently have a signal API for heat resources (not for standalone
> software config/deployment). However, you can probably use a workaround
> with a swift temp_url like tripleo [1] to achieve your use case.
>
> We do have an rpc api [2] for signalling deployments. It would probably
> not be that difficult to add REST API support for native/cfn signalling,
> though I don't know if there are more reasons it has not been added yet.
>
> Steve Baker (the original author) would probably know more about it and
> can give you a better answer. :)
>
> [1] https://github.com/openstack/tripleo-common/blob/master/tripleo_common/actions/deployment.py
> [2] https://github.com/openstack/heat/blob/master/heat/engine/service_software_config.py#L262
>
> On Wed, Nov 30, 2016 at 5:54 PM, Pasquale Lepera wrote:
>
>> Hi,
>> we're trying to use the Heat Software configuration APIs, but we're
>> facing a problem with the signaling.
>> We know quite well how to use Software config/deployment inside stack
>> templates, and in that case what we get on the target VM is something
>> like this:
>>
>> #os-collect-config --print
>> inputs:[
>> …
>>  {
>>   "type": "String",
>>   "name": "deploy_signal_transport",
>>   "value": "CFN_SIGNAL",
>>   "description": "How the server should signal to heat with the deployment output values."
>>  },
>>  {
>>   "type": "String",
>>   "name": "deploy_signal_id",
>>   "value": "http://ctrl-liberty.nuvolacsi.it:8000/v1/signal/arn%3Aopenstack%3Aheat%3A%3Ab570fe9ea2c94cb8ba72fe07fa034b62%3Astacks%2FStack_test_from_view_galera-53040%2F15d0e95a-e422-4994-9f17-bb2f543952f7%2Fresources%2Fdeployment_sw_mariadb2?Timestamp=2016-11-24T16%3A35%3A12Z&SignatureMethod=HmacSHA256&AWSAccessKeyId=72ef8cef2e174926b74252754617f347&SignatureVersion=2&Signature=H5QcAv7yIZgBQzhztb4%2B0NJi7Z3qO%2BmwToqINUiKbvw%3D",
>>   "description": "ID of signal to use for signaling output values"
>>  },
>>  {
>>   "type": "String",
>>   "name": "deploy_signal_verb",
>>   "value": "POST",
>>   "description": "HTTP verb to use for signaling output values"
>>  }
>>
>> This part, we suppose, is generated by heat during template processing
>> and is pushed to the target so that, when the deployment is finished,
>> os-apply-config uses CFN to signal the SUCCESS/FAILED job to the
>> orchestrator.
>>
>> The problem is that, when we try to use the software config creation API
>> and the deployment API directly, what we get on the target VM is
>> something like this:
>>
>> #os-collect-config --print
>> ...
>> {
>>  "inputs": [],
>>  "group": null,
>>  "name": "test_key_gen_9aznXZ7DE9",
>>  "outputs": [],
>>  "creation_time": "2016-11-24T15:50:50",
>>  "options": {},
>>  "config": "#!/bin/bash\ntouch /tmp/test \nhostname > /tmp/test \n",
>>  "id": "d9395163-4238-4e94-902f-1e8abdbfa2bb"
>> }
>>
>> This happens because we pass no explicit parameter in the "inputs" key
>> to the create SW config API.
>> Of course, this config causes no signaling back to Heat.
>>
>> So the questions are:
>>
>> Is it possible to use the cfn signaling with the software
>> configuration/deployment creation APIs?
>>
>> How?
>>
>> Is it possible to get a signal back to the orchestrator without manually
>> passing a deploy_signal_id inside the API's configuration parameters?
>>
>> If not, another way to give a signal back to the orchestrator could be a
>> workaround creating a self-standing stack containing only
>> "OS::Heat::WaitCondition" and "OS::Heat::WaitConditionHandle" resources,
>> but before using this workaround we want to be sure that it is not
>> possible in other ways.
>>
>> Thanks
>>
>> Pasquale
>
> --
> Regards,
> Rabi Mishra

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

[Openstack-operators] Multiple floating IPs mapped to multiple vNICs (multi-homing)

2016-12-01 Thread Paul Browne

Hello Operators,

For reasons not yet amenable to persuasion otherwise, a customer of our
classic ML2+OVS OpenStack deployment would like to map two floating IPs,
pulled from two separate external-network floating IP pools, to two
different vNICs on his instances.


The floating IP pools correspond to one pool routable from the external
Internet and another, RFC1918, pool routable from internal University
networks.


The tenant private networks are arranged as two RFC1918 VXLANs, each 
with a router to one of the two external networks.


10.0.0.0/24 -> route to -> 128.232.226.0/23

10.0.16.0/24 -> route to -> 172.24.46.0/23


Mapping two floating IPs to instances isn't possible in Horizon, but is
possible from the command line. This doesn't immediately work, however, as
the return traffic from the instance needs to be sent back through the
correct router gateway interface and not the instance default gateway.
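
(For the record, the command-line mapping looks roughly like this; the network and ID arguments are placeholders:)

# allocate a floating IP from the second external network and attach it to the
# Neutron port backing the instance's second vNIC
neutron floatingip-create <second-external-net>
neutron floatingip-associate <floating-ip-id> <port-id-of-second-vnic>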


I'd initially thought this would be possible by placing a second routing 
table on the instances to handle the return traffic;


debian@test1:/etc/iproute2$ less rt_tables
#
# reserved values
#
255 local
254 main
253 default
0   unspec
#
# local
#
#1  inr.ruhep
1 rt2

debian@test1:/etc/network$ less interfaces
# The loopback network interface
auto lo
iface lo inet loopback

# The first vNIC, eth0
auto eth0
iface eth0 inet dhcp

# The second vNIC, eth1
auto eth1
iface eth1 inet static
address 10.0.16.11
netmask 255.255.255.0
post-up ip route add 10.0.16.0/24 dev eth1 src 10.0.16.11 table rt2
post-up ip route add default via 10.0.16.1 dev eth1 table rt2
post-up ip rule add from 10.0.16.11/32 table rt2
post-up ip rule add to 10.0.16.11/32 table rt2

And this works well for SSH and ICMP, but curiously not for HTTP traffic.
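
(For quick testing, the same policy routing can be applied at runtime without touching the interfaces file; this simply mirrors the post-up lines above:)

ip route add 10.0.16.0/24 dev eth1 src 10.0.16.11 table rt2
ip route add default via 10.0.16.1 dev eth1 table rt2
ip rule add from 10.0.16.11/32 table rt2
ip rule add to 10.0.16.11/32 table rt2
ip route flush cache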


Requests to a web server listening on all vNICs are sent, but replies are
not received when the requests go to the second mapped floating IP
(HTTP requests and replies work as expected when sent to the first
mapped floating IP). The requests are logged in both cases, however, so
traffic is making it to the instance either way.


I'd say this is clearly an unusual (and possibly unnatural) arrangement,
but I was wondering whether anyone else on Operators had come across a
similar situation when trying to map floating IPs from two different
external networks to an instance?


Kind regards,

Paul Browne

--
***
Paul Browne
Research Computing Platforms
University Information Services
Roger Needham Building
JJ Thompson Avenue
University of Cambridge
Cambridge
United Kingdom
E-Mail: pf...@cam.ac.uk
Tel: 0044-1223-46548
***

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] Heat: signaling using SW config/deployment API.

2016-12-01 Thread Rabi Mishra
Moving to openstack-dev for more visibility and discussion.

We currently have a signal API for heat resources (not for standalone
software config/deployment). However, you can probably use a workaround
with a swift temp_url like tripleo [1] to achieve your use case.
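
(A rough sketch of that workaround with the swift CLI, where the container, object name, and key are purely illustrative:)

swift post signals                                    # container to hold signal objects
swift post -m 'Temp-URL-Key:SECRETKEY'                # set an account temp URL key
swift tempurl PUT 3600 /v1/AUTH_<tenant>/signals/deploy-1 SECRETKEY
# the printed signed path, prefixed with the object-store endpoint, is the URL
# the deployment can PUT its signal data to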

We do have an rpc api [2] for signalling deployments. It would probably not
be that difficult to add REST API support for native/cfn signalling, though I
don't know if there are more reasons it has not been added yet.

Steve Baker (the original author) would probably know more about it and can
give you a better answer. :)


[1]
https://github.com/openstack/tripleo-common/blob/master/tripleo_common/actions/deployment.py
[2]
https://github.com/openstack/heat/blob/master/heat/engine/service_software_config.py#L262

On Wed, Nov 30, 2016 at 5:54 PM, Pasquale Lepera 
wrote:

> Hi,
> we're trying to use the Heat Software configuration APIs, but we're facing
> a problem with the signaling.
> We know quite well how to use Software config/deployment inside stack
> templates, and in that case what we get on the target VM is something like
> this:
>
> #os-collect-config --print
> inputs:[
> …
>  {
>   "type": "String",
>   "name": "deploy_signal_transport",
>   "value": "CFN_SIGNAL",
>   "description": "How the server should signal to heat with the
> deployment output values."
>  },
>  {
>   "type": "String",
>   "name": "deploy_signal_id",
>   "value": "http://ctrl-liberty.nuvolacsi.it:8000/v1/signal/arn%
> 3Aopenstack%3Aheat%3A%3Ab570fe9ea2c94cb8ba72fe07fa034b62%
> 3Astacks%2FStack_test_from_view_galera-53040%2F15d0e95a-
> e422-4994-9f17-bb2f543952f7%2Fresources%2Fdeployment_sw_
> mariadb2?Timestamp=2016-11-24T16%3A35%3A12Z&SignatureMethod=HmacSHA256&
> AWSAccessKeyId=72ef8cef2e174926b74252754617f347&
> SignatureVersion=2&Signature=H5QcAv7yIZgBQzhztb4%2B0NJi7Z3q
> O%2BmwToqINUiKbvw%3D",
>   "description": "ID of signal to use for signaling output values"
>  },
>  {
>   "type": "String",
>   "name": "deploy_signal_verb",
>   "value": "POST",
>   "description": "HTTP verb to use for signaling output values"
>  }
>
> This part, we suppose, is generated by heat during template processing
> and is pushed to the target so that, when the deployment is finished,
> os-apply-config uses CFN to signal the SUCCESS/FAILED job to the
> orchestrator.
>
> The problem is that, when we try to use the software config creation API
> and the deployment API directly, what we get on the target VM is
> something like this:
>
> #os-collect-config --print
> ...
>{
> "inputs": [],
> "group": null,
> "name": "test_key_gen_9aznXZ7DE9",
> "outputs": [],
> "creation_time": "2016-11-24T15:50:50",
> "options": {},
> "config": "#!/bin/bash\ntouch /tmp/test \nhostname > /tmp/test \n",
> "id": "d9395163-4238-4e94-902f-1e8abdbfa2bb"
>}
>
> This happens because we pass no explicit parameter in the “inputs” key
> to the create SW config API.
> Of course, this config causes no signaling back to Heat.
>
> So the questions are:
>
> Is it possible to use the cfn signaling with the software
> configuration/deployment creation APIs?
>
> How?
>
> Is it possible to get a signal back to the orchestrator without manually
> passing a deploy_signal_id inside the API's configuration
> parameters?
>
> If not, another way to give a signal back to the Orchestrator could be a
> workaround creating a self-standing stack containing only
> “OS::Heat::WaitCondition” and “OS::Heat::WaitConditionHandle”
> resources, but before using this workaround we want to be sure that it is
> not possible in other ways.
>
> Thanks
>
> Pasquale
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>


-- 
Regards,
Rabi Mishra
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators