Re: [openstack-dev] [NOVA] How boot an instance on specific compute with provider-network: physnet1

2016-08-17 Thread Rick Jones

On 08/17/2016 08:25 AM, Kelam, Koteswara Rao wrote:

Hi All,

I have two computes

Compute node 1:
1. physnet3:br-eth0

2. physnet2:br-eth2

Compute node 2:
1. physnet3:br-eth0
2. physnet1:br-eth1
3. physnet2:br-eth2

When I boot an instance with a network of provider-network physnet1,
nova is scheduling it on compute1 but there is no physnet1 on compute1
and it fails.

Is there any mechanism/way to choose the correct compute with the correct
provider-network?


Well, the --availability-zone option can be given a host name separated 
from an optional actual availability zone identifier by a colon:


nova boot .. --availability-zone :hostname ...

But specifying a specific host rather than just an availability zone 
requires the project to have forced_host (or is it force_host?) 
capabilities.  You could, perhaps, define the two computes to be 
separate availability zones to work around that.
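
So, as a sketch (flavor, image and names all hypothetical), something like:

nova boot --flavor m1.small --image <image> \
  --nic net-id=<uuid-of-physnet1-network> \
  --availability-zone :compute-node-2 <instance-name>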


rick jones




Re: [openstack-dev] [Neutron] stable/liberty busted

2016-08-05 Thread Rick Jones

On 08/05/2016 02:52 AM, Kevin Benton wrote:

Sorry I didn't elaborate a bit more, I was replying from my phone. The
agent has logic that calculates the required flows for ports when it
starts up and then reconciles that with the current flows in OVS so it
doesn't disrupt traffic on every restart. The tests for that run
constant pings in the background while constantly calling the restart
logic to ensure no packets are lost.



Thanks.

rick




Re: [openstack-dev] [Neutron] stable/liberty busted

2016-08-04 Thread Rick Jones

On 08/04/2016 01:39 PM, Kevin Benton wrote:

Yep. Some tests are making sure there are no packets lost. Some are
making sure that stuff starts working eventually.


Not to be pedantic, but what sort of requirement exists that no packets 
be lost?


rick




Re: [openstack-dev] [Neutron] stable/liberty busted

2016-08-04 Thread Rick Jones

On 08/04/2016 12:04 PM, Kevin Benton wrote:

Yeah, I wasn't thinking when I +2'ed that. There are two use cases for
the pinger, one for ensuring continuous connectivity and one for
eventual connectivity.

I think the revert is okay for a quick fix, but we really need a new
argument to the pinger for strictness to decide which behavior the test
is looking for.


What situation requires continuous connectivity?

rick jones




Re: [openstack-dev] [Kuryr] [Neutron] Waiting until Neutron Port is Active

2016-06-13 Thread Rick Jones

On 06/10/2016 03:13 PM, Kevin Benton wrote:

Polling should be fine. A get_port operation is a relatively cheap operation
for Neutron.


Just in principle, I would suggest this polling have a back-off built 
into it.  Poll once, see the port is not yet "up" - wait a semi-random 
short length of time; poll again, see it is not yet "up" - wait a longer 
semi-random length of time; lather, rinse, repeat until you've either 
gotten to the limits of your patience or the port has become "up."


Fixed, short poll intervals can run the risk of congestive collapse "at 
scale."


rick jones




Re: [openstack-dev] profiling - eg neutron-vpn-agent

2016-05-23 Thread Rick Jones

On 05/23/2016 01:33 PM, Kevin Benton wrote:

I've found that the issue is that if you interrupt with ctrl-C it won't
write the profile. However, sending it a SIGTERM with the 'kill' command
did the trick when I was using cprofile. I think oslo service calls
os.exit right on SIGINT so the profiler doesn't get a chance to write out.


SIGTERM does seem to result in a file being written-out.  Thanks.  That 
gets me one step closer.
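
For the record, something along these lines seems to do the trick (paths 
hypothetical):

python -m cProfile -o /tmp/agent.prof /usr/bin/neutron-vpn-agent --config-file ... &
# let it run for a while, then:
kill -TERM %1    # SIGTERM rather than ^C, so the profile gets written
python -c "import pstats; pstats.Stats('/tmp/agent.prof').sort_stats('cumulative').print_stats(20)"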


happy benchmarking,

rick jones




[openstack-dev] profiling - eg neutron-vpn-agent

2016-05-23 Thread Rick Jones

Folks -

I'm looking for suggestions on how to go about profiling the likes of 
neutron-vpn-agent from Liberty.  I have a simple little test - nothing 
stressful - just create 1000 ports with floating IPs on a single private 
network with a CVR router :)  This seems to put the Liberty 
neutron-vpn-agent in situations where it will spend 100% of its time in 
user space doing something.  As the number of ports increases, so too 
does the length of time in this mode.  It ends up spending tens of 
minutes therein to the exclusion of doing anything else.


So, I'd like to profile it.  To see what it thinks it is doing.

I have tried running neutron-vpn-agent "by hand" on the controller 
hosting the vrouter using the "A typical profiling session with python 
2.5 looks like this" part of 
https://wiki.python.org/moin/PythonSpeed/PerformanceTips#Profiling_Code 
(no, I'm not using Python 2.5, that is simply where web searching has 
led me from the peanut gallery :) )


Alas, it seems that ^C'ing it doesn't result in the profile being written out.

So I'm looking for other methods.

happy benchmarking,

rick jones



Re: [openstack-dev] [neutron][kilo] - vxlan's max bandwidth

2016-04-19 Thread Rick Jones
I like larger MTUs, and used to call stateless offloads like TSO/GSO 
"Poor Man's Jumbo Frames," but if you can get the stateless offloads 
going, you can go beyond the savings one gets from Jumbo Frames, because a 
TSO/GSO/GRO "segment" can end up being, semi-effectively, 32-64KB.


rick jones
PS - don't forget that *everything* in the same broadcast domain must 
have the same MTU
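
Checking and setting it on a Linux host is simply (interface name 
hypothetical):

ip link show dev eth0 | grep -o 'mtu [0-9]*'   # check the current MTU
ip link set dev eth0 mtu 9000                  # must match the rest of the broadcast domain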




Re: [openstack-dev] [neutron][kilo] - vxlan's max bandwidth

2016-04-15 Thread Rick Jones

On 04/14/2016 07:10 PM, Kenny Ji-work wrote:

Hi all,

In an OpenStack Kilo environment, I tested the bandwidth in a scenario
where VxLAN is being used. The results show that VxLAN can only support
up to 1 Gbit/s of bandwidth. Is this a bug or some other issue, or is there
some hotfix to solve the issue? Thank you for answering!


I'm glossing over some details, but broadly speaking, a single network 
flow cannot take advantage of more than one CPU in a system.  And while 
network speeds have been continuing to increase, per-core speeds haven't 
really gone up much over the last five to ten years.


So, to get "speed/link rate" networking stacks have become dependent on 
stateless offloads - Checksum Offload (CKO) TCP Segmentation Offload 
(TSO/GSO) and Generic Receive Offload 9GRO).  And until somewhat 
recently, NICs did not offer stateless offloads for VxLAN-encapsulated 
traffic.  So, one effectively has a "dumb" NIC without stateless 
offloads.  And depending on what sort of processor you have, that limit 
could be down around 1 Gbit/s.  Only some of the more recent 10GbE NICs 
offer stateless offload of VxLAN-encapsulated traffic, and similarly 
their more recent drivers and networking stacks.


In olden days, before the advent of stateless offloads there was a rule 
of thumb - 1 Mbit/s per MHz.  That was with "pure" bare-iron networking 
- no VMs, no encapsulation.  Even then it was a bit hand-wavy, and may 
have originated in the land of SPARC processors.  But hopefully it 
conveys the idea of what it means to lose the stateless offloads.


So, it would be good to know what sort of CPUs are involved (down to the 
model names and frequencies) as well as the NICs involved - again, full 
naming, not just the brand name.
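
A quick way to gather that, and to see whether the stateless offloads are 
present and enabled (a sketch - the feature names vary a bit by driver and 
kernel):

grep 'model name' /proc/cpuinfo | sort -u      # CPU model and frequency
ethtool -i eth0                                # NIC driver name and version
ethtool -k eth0 | egrep 'tso|gro|tx-udp_tnl'   # TSO/GRO and VxLAN (UDP tunnel) offload state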


And it is just a paranoid question, but is there any 1 Gbit/s networking 
in your setup at all?


happy benchmarking,

rick jones




Re: [openstack-dev] [neutron] - Changing the Neutron default security group rules

2016-03-02 Thread Rick Jones

On 03/02/2016 02:46 PM, Mike Spreitzer wrote:

Kevin Benton  wrote on 03/02/2016 01:27:27 PM:

 > Does it at least also include the UUID, or is there no way to tell
 > from 'nova show'?

No direct way to tell, as far as I can see.


Yep.  Best I can find is:

neutron port-list -- --device_id <instance-uuid>
then
neutron port-show <port-id>

Ironically enough, while nova show shows the security group name, 
neutron port-show shows the UUID.  Clearly an eschewing of foolish 
consistency :)


Drifting... it seems that nova list will sort by instance name 
ascending, and openstack server list will sort by instance name ... 
descending.  And openstack server show will emit a less formatted 
version of the security group name than nova show does.


happy benchmarking,

rick jones




Re: [openstack-dev] [nova][cinder] Limits on volume read throughput?

2016-03-01 Thread Rick Jones

On 03/01/2016 04:29 PM, Preston L. Bannister wrote:


Running "dd" in the physical host against the Cinder-allocated volumes
nets ~1.2GB/s (roughly in line with expectations for the striped flash
volume).

Running "dd" in an instance against the same volume (now attached to the
instance) got ~300MB/s, which was pathetic. (I was expecting 80-90% of
the raw host volume numbers, or better.) Upping read-ahead in the
instance via "hdparm" boosted throughput to ~450MB/s. Much better, but
still sad.

In the second measure the volume data passes through iSCSI and then the
QEMU hypervisor. I expected to lose some performance, but not more than
half!

Note that as this is an all-in-one OpenStack node, iSCSI is strictly
local and not crossing a network. (I did not want network latency or
throughput to be a concern with this first measure.)


Well, not crossing a physical network :)  You will, however, likely be 
crossing the loopback network on the node.


What sort of per-CPU utilizations do you see when running the test to 
the instance?  Also, out of curiosity, what block size are you using in 
dd?  I wonder how well that "maps" to what iSCSI will be doing.
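
For example, something like (device name hypothetical):

mpstat -P ALL 1 &          # watch per-CPU utilization during the run
dd if=/dev/vdb of=/dev/null bs=1M iflag=direct count=4096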


rick jones
http://www.netperf.org/



Re: [openstack-dev] [Neutron] Intra-column wrapping in python-neutronclient

2016-02-24 Thread Rick Jones

On 02/24/2016 09:46 AM, Akihiro Motoki wrote:

cliff-based clients, including neutronclient and openstackclient,
support various formatters.
I don't think it is a good idea to depend on the output of the 'table' formatter.
My understanding is the 'table' formatter (the default formatter) is for
human readability.
IMHO it is better to use the json formatter or value formatter if we need
to pick up a returned value.
You can use '-f json' or '-f value -c id' with -create.


Be that as it may, it is equally bad if not worse to gratuitously (IMO) 
change output formats.  Even if it is meant to be "only" human-readable.


And it is arguably the case that this new format is less human-readable 
than what was before, even discounting the loss of straight-forward 
cut-and-paste.  And I would not discount the importance of 
straight-forward cut-and-paste.


rick jones



This is the reason I proposed fixes around non-table formatters like
https://review.openstack.org/#/c/255696/ or
https://review.openstack.org/#/c/284088/.

Anyway, it is bad to break devstack :(

2016-02-25 2:32 GMT+09:00 Carl Baldwin :

I ran this by our all-knowing PTL who told me to go back to
cliff==1.15.0.  My devstack had picked up cliff==2.0.0 which is what
seems to have introduced this behavior.  Reverting to the older
version fixed this quirky behavior.

Before today, I didn't even know what cliff was.  :)

Carl

On Wed, Feb 24, 2016 at 10:23 AM, Carl Baldwin  wrote:

Hi,

I've noticed a new behavior from the python-neutronclient which
disturbs me.  For me, this just started happening with my latest build
of devstack which I built yesterday.  It didn't happen with another
recent but little bit older devstack.

The issue is that the client is now wrapping content within columns.
For example:

   
+-+-+--+
   | id  | name| subnets
   |
   
+-+-+--+
   | eb850219-6a42-42ed-ac6a-| public  |
099745e5-4925-4572-a88f- |
   | 927b03a0dc77| | a5376206c892
172.24.4.0/24   |
   | | | 5b6dfb0d-c97e-48ae-
   |
   | | | 98c9-7fe3e1e8e88b
2001:db8::/64  |
   | ec73110f-   | private | 4073b9e7-a58e-4d6e-
   |
   | 86ad-4292-9547-7c2789a7023b | | a2e4-7a45ae899671
10.0.0.0/24|
   | | |
f12aee80-fc13-4adf-a0eb- |
   | | | 706af4319094
fd9d:e27:3eab::/64  |
   
+-+-+--+

Notice how the ids in the first column are wrapped within the column.
I personally don't see this as an aesthetic improvement.  It destroys
my ability to cut and paste the data within this column.  When I
stretch my console out to fix it, I have to rerun the command with the
new window width to fix it.  I used to be able to stretch my console
horizontally and the wrapping would automatically go away.

How can I turn this off now?  Also, can I request that this new
"feature" be disabled by default?

Carl










Re: [openstack-dev] [tricircle] Port Query Performance Test

2016-02-03 Thread Rick Jones

On 02/03/2016 05:32 AM, Vega Cai wrote:

Hi all,

I did a test of port query performance in Tricircle yesterday.
The result is attached.

Three observations in the test result:
(1) The neutron client costs much more time than curl; the reason may be
that the neutron client needs to apply for a new token in each run.


Is "needs" a little strong there?  When I have been doing things with 
Neutron CLI at least and needed to issue a lot of requests over a 
somewhat high latency path, I've used the likes of:


token=$(keystone token-get | awk '$2 == "id" {print $4}')
NEUTRON="neutron --os-token=$token --os-url=https://mutter"

to avoid grabbing a token each time.  Might that be possible with what 
you are testing?
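
With that in place, subsequent requests reuse the token, e.g.:

$NEUTRON port-list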


rick jones



Re: [openstack-dev] [Neutron] MTU configuration pain

2016-01-25 Thread Rick Jones

On 01/24/2016 07:43 PM, Ian Wells wrote:

Also, I say 9000, but why is 9000 even the right number?


While that may have been a rhetorical question...

Because that is the value Alteon picked in the late 1990s when they 
created the de facto standard for "Jumbo Frames" by including it in 
their Gigabit Ethernet kit as a way to enable the systems of the day to 
have a hope of getting link-rate :)


Perhaps they picked 9000 because it was twice the 4500 of FDDI, which 
itself was selected to allow space for 4096 bytes of data and then a 
good bit of headers.



rick jones



Re: [openstack-dev] [Neutron] MTU configuration pain

2016-01-20 Thread Rick Jones

On 01/20/2016 08:56 AM, Sean M. Collins wrote:

On Tue, Jan 19, 2016 at 08:15:18AM EST, Matt Kassawara wrote:

No. However, we ought to determine what happens when both DHCP and RA
advertise it.


We'd have to look at the RFCs for how hosts are supposed to behave since
IPv6 has a minimum MTU of 1280 bytes while IPv4's minimum mtu is 576
(what is this, an MTU for ants?).


Quibble - 576 is the IPv4 minimum, maximum MTU.  That is to say a 
compliant IPv4 implementation must be able to reassemble datagrams of at 
least 576 bytes.


If memory serves, the actual minimum MTU for IPv4 is 68 bytes.

rick jones



Re: [openstack-dev] [Neutron] virtual machine can not get DHCP lease due packet has no checksum

2015-06-02 Thread Rick Jones

On 06/02/2015 12:32 AM, Ian Wells wrote:

The fix should work fine.  It is technically a workaround for the way
checksums work in virtualised systems, and the unfortunate fact that
some DHCP clients check checksums on packets where the hardware has
checksum offload enabled.  (This doesn't work due to an optimisation in
the way QEMU treats packet checksums.  You'll see the problem if your
machine is running the VM on the same host as its DHCP server and the VM
has a vulnerable client.)


Is that specific to DHCP clients, or does this issue affect UDP traffic 
in general?


rick jones



Re: [openstack-dev] [neutron] Neutron API rate limiting

2015-05-18 Thread Rick Jones

On 05/18/2015 02:01 PM, Chris Friesen wrote:

On 05/18/2015 09:54 AM, Rick Jones wrote:



Interestingly enough, what I've come across mostly (virtually
entirely) has been compromised instances being used in sending
spewage out onto the Big Bad Internet (tm).

One thing I was thinking about to detect such instances was simply
looking at the ratio of inbound and outbound traffic on the
instances' tap device(s). Once it crossed a certain threshold
declare the instance suspect and in need of further scrutiny.


Wouldn't that also catch things like streaming audio/video servers which
would be mostly outbound traffic?


It might catch those using UDP.  In my not-completely-fleshed-out, 
hand-waving scenario that would be part of the further scrutiny.


I guess I'm just hesitant to add more things on iptables, capable as it 
might be.  Using iptables means still needing the linux bridge with OVS 
right?  To implement the security groups in the first place.  Seems 
there are cases where the veth pair joining linux bridge to OVS can 
re-order traffic :(  http://www.spinics.net/lists/netdev/msg327867.html .


rick



Re: [openstack-dev] [neutron] Neutron API rate limiting

2015-05-18 Thread Rick Jones

On 05/15/2015 08:32 PM, Gal Sagie wrote:

What I was describing in [2] is different; maybe the name "rate-limit"
is wrong here and what we are doing is more of
a "brute force prevention".
We are trying to solve common scenarios for east-west security attack
vectors; for example, a common vector is a compromised
VM trying to port scan the network.


Interestingly enough, what I've come across mostly (virtually entirely) 
has been compromised instances being used in sending spewage out onto 
the Big Bad Internet (tm).


One thing I was thinking about to detect such instances was simply 
looking at the ratio of inbound and outbound traffic on the instances' 
tap device(s).  Once it crossed a certain threshold declare the instance 
suspect and in need of further scrutiny.
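
In rough shell terms, a hand-waving sketch (the tap name and threshold are 
hypothetical):

tap=tapXXXXXXXX-XX
rx=$(cat /sys/class/net/$tap/statistics/rx_bytes)
tx=$(cat /sys/class/net/$tap/statistics/tx_bytes)
# flag instances which send vastly more than they receive
[ $(( tx / (rx + 1) )) -gt 100 ] && echo "$tap warrants further scrutiny"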



By matching this port-scanning with "rate-limit" security rules we can
detect it and either block
that traffic or alert the admin/user.
(An example of a "rate-limit" rule would be a VM pinging the same IP
with different ports in a short period of time.)

For that, the security API extension and the
reference implementation are needed, and we should discuss
them both in the session [3], along with how to support extending the API
further for other use cases/vendors. Hope to see you there!


Alas, I won't be able to attend :(


[1] https://review.openstack.org/#/c/88599/
[2] https://review.openstack.org/#/c/151247/
[3] https://etherpad.openstack.org/p/YVR-neutron-sg-fwaas-future-direction

On Fri, May 15, 2015 at 10:01 PM, Rick Jones <rick.jon...@hp.com> wrote:

On May 14, 2015 9:26 PM, "Gal Sagie" <gal.sa...@gmail.com> wrote:
Hello Ryan,

We have proposed a spec to liberty to add rate limit
functionality to security groups [1].
We see two big use cases for it, one as you mentioned is DDoS
for east-west and another
is brute force prevention (for example port scanning).

We are re-writing the spec as an extension to the current API,
we also have a proposal
to enhance the Security Group / FWaaS implementation in order to
make it easily extendible by such
new classes of security rules.

We are planning to discuss all of that in the SG/FWaaS future
directions session [2].
I or Lionel will update you as soon as we have the fixed spec
for review, and feel free to come to the discussion
as we more than welcome everyone to help this effort.

Gal.

[1] https://review.openstack.org/#/c/151247/
[2]
https://etherpad.openstack.org/p/YVR-neutron-sg-fwaas-future-direction


Isn't there already described a way to rate-limit instances
(overall) by putting qdiscs onto their tap devices?

Having looked only briefly at the spec, I would say you want to have
the option to "MARK" (for traffic which is ECN enabled) what the rate
limiting might have otherwise dropped.

The extant mechanism I mentioned uses HTB in one direction (instance
inbound/tap outbound) and a policing filter in the other.  I've used
it (as a mostly end user) and have noticed that as described, one
can introduce non-trivial bufferbloat inbound to the instance.

And I've always wished (as in if wishes were horses) that the
instance outbound throttle were actually implemented in a way where
it becomes immediately apparent to the instance by causing the tx
queue in the instance to build-up.  That wouldn't be something on a
tap device though.

Does there need to be both a packet and bit rate limit?  I've some
experience with bit rate limits and have seen otherwise rather
throttled (bitrate) instances cause non-trivial problems with a
network node.

rick jones






--
Best Regards ,

The G.








Re: [openstack-dev] [neutron] Neutron API rate limiting

2015-05-15 Thread Rick Jones

On May 14, 2015 9:26 PM, "Gal Sagie" <gal.sa...@gmail.com> wrote:
Hello Ryan,

We have proposed a spec to liberty to add rate limit functionality to security 
groups [1].
We see two big use cases for it, one as you mentioned is DDoS for east-west and 
another
is brute force prevention (for example port scanning).

We are re-writing the spec as an extension to the current API, we also have a 
proposal
to enhance the Security Group / FWaaS implementation in order to make it easily 
extendible by such
new classes of security rules.

We are planning to discuss all of that in the SG/FWaaS future directions 
session [2].
I or Lionel will update you as soon as we have the fixed spec for review, and 
feel free to come to the discussion
as we more than welcome everyone to help this effort.

Gal.

[1] https://review.openstack.org/#/c/151247/
[2] https://etherpad.openstack.org/p/YVR-neutron-sg-fwaas-future-direction


Isn't there already described a way to rate-limit instances (overall) by 
putting qdiscs onto their tap devices?


Having looked only briefly at the spec, I would say you want to have the 
option to "MARK" (for traffic which is ECN enabled) what the rate limiting 
might have otherwise dropped.


The extant mechanism I mentioned uses HTB in one direction (instance 
inbound/tap outbound) and a policing filter in the other.  I've used it 
(as a mostly end user) and have noticed that as described, one can 
introduce non-trivial bufferbloat inbound to the instance.


And I've always wished (as in if wishes were horses) that the instance 
outbound throttle were actually implemented in a way where it becomes 
immediately apparent to the instance by causing the tx queue in the 
instance to build-up.  That wouldn't be something on a tap device though.


Does there need to be both a packet and bit rate limit?  I've some 
experience with bit rate limits and have seen otherwise rather throttled 
(bitrate) instances cause non-trivial problems with a network node.


rick jones



Re: [openstack-dev] Controlling data sent to client

2015-03-10 Thread Rick Jones

On 03/10/2015 11:45 AM, Omkar Joshi wrote:

Hi,

I am using an OpenStack Swift server. Now say multiple clients are
requesting a 5GB object from the server. The rate at which the server can push
data into the server socket is much higher than the rate at which the client can
read it from the proxy server. Is there a configuration / setting which we can use
to control / cap the pending data on the server-side socket? Because
otherwise this will cause the server to run out of memory.


The Linux networking stack will have a limit to the size of the 
SO_SNDBUF, which will limit how much the proxy server code will be able 
to shove into a given socket at one time.  The Linux networking stack 
may "autotune" that setting if the proxy server code itself isn't making 
an explicit setsockopt(SO_SNDBUF) call.  Such autotuning will be 
controlled via the sysctl net.ipv4.tcp_wmem


If the proxy server code does make an explicit setsockopt(SO_SNDBUF) 
call, that will be limited to no more than what is set in net.core.wmem_max.
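
Those limits can be inspected (and tuned) via sysctl:

sysctl net.ipv4.tcp_wmem    # min, default and max autotuned send buffer, in bytes
sysctl net.core.wmem_max    # ceiling for an explicit setsockopt(SO_SNDBUF)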


But I am guessing you are asking about something different because 
virtually every TCP/IP stack going back to the beginning has had bounded 
socket buffers.  Are you asking about something else?  Are you asking 
about the rate at which data might come from the object server(s) to the 
proxy and need to be held on the proxy while it is sent-on to the clients?


rick



Re: [openstack-dev] [neutron] OpenFlow security groups (pre-benchmarking plan)

2015-02-25 Thread Rick Jones

On 02/25/2015 05:52 AM, Miguel Ángel Ajo wrote:

I’m writing a plan/script to benchmark OVS+OF(CT) vs
OVS+LB+iptables+ipsets,
so we can make sure there’s a real difference before jumping into any
OpenFlow security group filters when we have connection tracking in OVS.

The plan is to keep all of it in a single multicore host, and make
all the measures within it, to make sure we just measure the
difference due to the software layers.

Suggestions or ideas on what to measure are welcome, there’s an initial
draft here:

https://github.com/mangelajo/ovs-experiments/tree/master/ovs-ct


Conditions to be benchmarked

Initial connection establishment time
Max throughput on the same CPU

Large MTUs and stateless offloads can mask a multitude of path-length 
sins.  And there is a great deal more to performance than Mbit/s. While 
some of that may be covered by the first item via the likes of say 
netperf TCP_CRR or TCP_CC testing, I would suggest that in addition to a 
focus on Mbit/s (which I assume is the focus of the second item) there 
is something for packet per second performance.  Something like netperf 
TCP_RR and perhaps aggregate TCP_RR or UDP_RR testing.


Doesn't have to be netperf, that is simply the hammer I wield :)
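
For example, something like (target address hypothetical):

netperf -H <target-ip> -t TCP_RR       # request/response - a packets-per-second proxy
netperf -H <target-ip> -t TCP_CRR      # adds connection setup/teardown to each transaction
netperf -H <target-ip> -t TCP_STREAM   # bulk transfer, for the Mbit/s number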

What follows may be a bit of perfect being the enemy of the good, or 
mission creep...


On the same CPU would certainly simplify things, but it will almost 
certainly exhibit different processor data cache behaviour than actually 
going through a physical network with a multi-core system.  Physical 
NICs will possibly (probably?) have RSS going, which may cause cache 
lines to be pulled around.  The way packets will be buffered will differ 
as well.  Etc etc.  How well the different solutions scale with cores is 
definitely a difference of interest between the two software layers.


rick



Re: [openstack-dev] [Fuel] 10Gbe performance issue.

2015-01-26 Thread Rick Jones

On 01/22/2015 03:19 AM, Piotr Korthals wrote:

Thanks, Rick - looks like GRO was something we were missing in our setup.

Here are some results from my tests:

iperf with GRO disabled on server side : 2.5-3 Gbps
iperf with GRO enabled on server side : 3.5-4 Gbps (GRO was enabled on
eth0, br-eth0, br-storage)

Additionally I used the OVS VLAN splinters option "Enable OVS VLAN splinters
hard trunks workaround" of the Fuel deployment:

iperf with GRO disabled using hw VLAN splinters and MTU 1.5k : ~5 Gbps
iperf with GRO disabled using hw VLAN splinters and MTU 9k : 9-10 Gbps
iperf with GRO enabled using hw VLAN splinters and MTU 1.5k : 9-10 Gbps

Then I tested iperf between machines of 2 different configurations (with
OVS VLAN splinters, and without it):

default->OVS_VLAN_splinters (GRO disabled) : 2.5 Gbps
default->OVS_VLAN_splinters (GRO enabled) : 5 Gbps

OVS_VLAN_splinters->default (GRO disabled) : 2.5-3 Gbps
OVS_VLAN_splinters->default (GRO enabled) : 5-10 Gbps

This looks like OVS is not performing well enough in this setup for
tagged VLANs (our br-storage is running on a tagged VLAN).

Any comments?


None that come to mind at present, sorry.

rick




Re: [openstack-dev] [Fuel] 10Gbe performance issue.

2015-01-21 Thread Rick Jones

On 01/21/2015 03:20 AM, Skamruk, Piotr wrote:

On Wed, 2015-01-21 at 10:53 +, Skamruk, Piotr wrote:

On Tue, 2015-01-20 at 17:41 +0100, Tomasz Napierala wrote:

[...]
How this was measured? VM to VM? Compute to compute?

[...]
Probably in ~30 minutes we will also have results on plain CentOS with
the Mirantis kernel, and on Fuel-deployed CentOS with the plain CentOS kernel
(2.6.32 in both cases, but with a different patchset subnumber).


OK, our tests were done a little badly. On plain CentOS iperf was run
directly on the physical interfaces, but on the Fuel-deployed nodes... we
were using the br-storage interfaces, which are really Open vSwitch based.

So this is not a kernel problem, but a single-stream-over-OVS
issue.

So we will investigate this further...



Not sure if iperf will emit it, but you might look at the bytes per 
receive on the receiving end.  Or  you can hang a tcpdump off the 
receiving interface (the br-storage I presume here) and see if you are 
getting the likes of GRO - if you are getting GRO you will see "large" 
TCP segments in the packet trace on the receiving side.  You can do the 
same with the physical interfaces for comparison.


2.5 to 3 Gbit/s "feels" rather like what one would get with 10 GbE in 
the days before GRO/LRO.
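
For example (interface name and iperf port per your setup):

tcpdump -nn -i br-storage -c 200 tcp port 5001
# with GRO working, the receiving side shows TCP segments well in excess of the MTU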


happy benchmarking,

rick jones
http://www.netperf.org/



Re: [openstack-dev] [Fuel] NTP settings.

2014-11-14 Thread Rick Jones

On 11/14/2014 04:01 PM, Dmitry Borodaenko wrote:

If NTP server is not reachable on the first boot of the master node,
it should be disabled by bootstrap_admin_node, that eliminates the
possibility of it spontaneously coming to life and changing the clock
for fuel master node and all target nodes in the middle of a
deployment. Then all Nailgun needs to do is pop a warning notification
that no NTP server is configured on the master node, and it should be
fixed manually before starting any deployments. No need to block
deployment altogether: if the user doesn't need global time at
all (e.g. in an off-the-grid bunker 2 miles beneath Fort Knox),
synchronizing clocks on all environments just to the Fuel master will
still work.


I thought NTP (well ntpd) had an option to tell it to only ever slew the 
clock, never step it?  Or is that only some implementations of NTP?
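
If memory serves, ntpd's -x flag is the one - it raises the step threshold 
so offsets up to ~600 seconds are slewed rather than stepped:

ntpd -x -g    # -x: slew, don't step; -g: still permit one big initial correction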


rick jones




Re: [openstack-dev] [Neutron] Killing connection after security group rule deletion

2014-10-24 Thread Rick Jones

On 10/23/2014 08:57 PM, Brian Haley wrote:

On 10/23/14 6:22 AM, Elena Ezhova wrote:

Hi!

I am working on a bug "ping still working once connected even after
related security group rule is
deleted" (https://bugs.launchpad.net/neutron/+bug/1335375). The gist of
the problem is the following: when we delete a security group rule the
corresponding rule in iptables is also deleted, but the connection, that
was allowed by that rule, is not being destroyed.
The reason for such behavior is that in iptables we have the following
structure of a chain that filters input packets for an interface of an
instance:



Like Miguel said, there's no easy way to identify this on the compute
node since neither the MAC nor the interface are going to be in the
conntrack command output.  And you don't want to drop the wrong tenant's
connections.

Just wondering, if you remove the conntrack entries using the IP/port
from the router namespace does it drop the connection?  Or will it just
start working again on the next packet?  Doesn't work for VM to VM
packets, but those packets are probably less interesting.  It's just my
first guess.


Presumably this issue affects other conntrack users, no?  What does 
upstream conntrack have to say about the matter?


I tend to avoid such things where I can, but what do "real" firewalls do 
with such matters?  If one removes a rule which allowed a given 
connection through, do they actually go ahead and nuke existing connections?
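
(For reference, manually nuking entries would look something like the 
following - addresses hypothetical, with the usual caveat about matching 
only the right tenant's flows:)

conntrack -L -s 10.0.0.5                      # list entries from a given source IP
conntrack -D -s 10.0.0.5 -p tcp --dport 22    # delete the matching entries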


rick jones




Re: [openstack-dev] [rally][iperf] Benchmarking network performance

2014-09-04 Thread Rick Jones

On 09/03/2014 11:47 AM, Ajay Kalambur (akalambu) wrote:

Hi
Looking into the following blueprint, which requires that network
performance tests be done as part of a scenario.
I plan to implement this using iperf and, basically, a scenario which
includes a client/server VM pair.


My experience with netperf over the years has taught me that when there 
is just the single stream and pair of "systems" one won't actually know 
if the performance was limited by inbound, or outbound.  That is why the 
likes of


http://www.netperf.org/svn/netperf2/trunk/doc/examples/netperf_by_flavor.py

and

http://www.netperf.org/svn/netperf2/trunk/doc/examples/netperf_by_quantum.py

which, apart from being poorly written python :), will launch several instances 
of a given flavor and then run aggregate tests on the Instance Under 
Test.  Those aggregate tests will include inbound, outbound, 
bidirectional, aggregate small packet and then a latency test.
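
In netperf terms, covering both directions for even a single pair means at 
least (target hypothetical):

netperf -H <instance-ip> -t TCP_STREAM   # data flowing toward the instance under test
netperf -H <instance-ip> -t TCP_MAERTS   # the same test with the data flow reversed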


happy benchmarking,

rick jones



Re: [openstack-dev] [nova] should we have a stale data indication in "nova list/show"?

2014-06-24 Thread Rick Jones

On 06/24/2014 02:53 PM, Steve Gordon wrote:

- Original Message -

From: "Rick Jones" 
To: "OpenStack Development Mailing List (not for usage questions)" 


On 06/24/2014 02:38 PM, Joe Gordon wrote:

I agree nova shouldn't take any actions. But I don't think leaving an
instance as 'active' is right either.  I was thinking move instance to
error state (maybe an unknown state would be more accurate) and let the
user deal with it, versus just letting the user deal with everything.
Since nova knows something *may* be wrong shouldn't we convey that to
the user (I'm not 100% sure we should myself).


I suspect the user's first action will be to call Support asking "Hey,
why is my perfectly usable instance showing-up in the ERROR|UNKNOWN state?"

rick jones


The existing alternative would be having the user calling to ask why
their non-responsive instance is showing as RUNNING so you are kind
of damned if you do, damned if you don't.


There will be a call for a non-responsive instance regardless what it 
shows.  However, responsive instance not showing ERROR or UNKNOWN will 
not generate a call.  So, all in all I think you will get fewer calls if 
you don't mark the "not known to be non-responsive" instance as ERROR or 
UNKNOWN.


rick




Re: [openstack-dev] [nova] should we have a stale data indication in "nova list/show"?

2014-06-24 Thread Rick Jones

On 06/24/2014 02:38 PM, Joe Gordon wrote:

I agree nova shouldn't take any actions. But I don't think leaving an
instance as 'active' is right either.  I was thinking move instance to
error state (maybe an unknown state would be more accurate) and let the
user deal with it, versus just letting the user deal with everything.
Since nova knows something *may* be wrong shouldn't we convey that to
the user (I'm not 100% sure we should myself).


I suspect the user's first action will be to call Support asking "Hey, 
why is my perfectly usable instance showing-up in the ERROR|UNKNOWN state?"


rick jones



Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-20 Thread Rick Jones

On 03/20/2014 09:07 AM, Yuriy Taraday wrote:

On Thu, Mar 20, 2014 at 7:28 PM, Rick Jones <rick.jon...@hp.com> wrote:
Interesting result.  Which versions of sudo and ip and with how many
Interesting result.  Which versions of sudo and ip and with how many
interfaces on the system?


Here are the numbers:

% sudo -V
Sudo version 1.8.6p7
Sudoers policy plugin version 1.8.6p7
Sudoers file grammar version 42
Sudoers I/O plugin version 1.8.6p7
% ip -V
ip utility, iproute2-ss130221
% ip a | grep '^[^ ]' | wc -l
5

For consistency's sake (however foolish it may be) and purposes of
others being able to reproduce results and all that, stating the
number of interfaces on the system and versions and such would be a
Good Thing.


Ok, I'll add them to benchmark output.


Since there are only five interfaces on the system, it likely doesn't 
make much of a difference in your specific benchmark but the 
top-of-trunk version of sudo has the fix/enhancement to allow one to 
tell it via sudo.conf to not grab the list of interfaces on the system.


Might be worthwhile though to take the interface count out to 2000 or 
more in the name of doing things at scale.  Namespace count as well.
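
Creating piles of dummy interfaces and namespaces is a cheap way to do 
that (a sketch):

for i in $(seq 1 2000); do ip link add dummy$i type dummy; done
for i in $(seq 1 200); do ip netns add bench_ns$i; done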


happy benchmarking,

rick jones




Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-20 Thread Rick Jones

On 03/20/2014 05:41 AM, Yuriy Taraday wrote:

On Tue, Mar 18, 2014 at 7:38 PM, Yuriy Taraday <yorik@gmail.com> wrote:

I'm aiming at ~100 new lines of code for daemon. Of course I'll use
some batteries included with Python stdlib but they should be safe
already.
It should be rather easy to audit them.


Here's my take on this: https://review.openstack.org/81798

Benchmark included showed on my machine these numbers (average over 100
iterations):

Running 'ip a':
   ip a :   4.565ms
  sudo ip a :  13.744ms
sudo rootwrap conf ip a : 102.571ms
 daemon.run('ip a') :   8.973ms
Running 'ip netns exec bench_ns ip a':
   sudo ip netns exec bench_ns ip a : 162.098ms
 sudo rootwrap conf ip netns exec bench_ns ip a : 268.115ms
  daemon.run('ip netns exec bench_ns ip a') : 129.876ms

So it looks like running the daemon is actually faster than running "sudo".


Interesting result.  Which versions of sudo and ip and with how many 
interfaces on the system?


For consistency's sake (however foolish it may be) and purposes of 
others being able to reproduce results and all that, stating the number 
of interfaces on the system and versions and such would be a Good Thing.


happy benchmarking,

rick jones



Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-05 Thread Rick Jones

On 03/05/2014 06:42 AM, Miguel Angel Ajo wrote:


 Hello,

 Recently, I found a serious issue about network-nodes startup time,
neutron-rootwrap eats a lot of cpu cycles, much more than the processes
it's wrapping itself.

 On a database with 1 public network, 192 private networks, 192
routers, and 192 nano VMs, with OVS plugin:


Network node setup time (rootwrap): 24 minutes
Network node setup time (sudo): 10 minutes


I've not been looking at rootwrap, but have been looking at sudo and ip. 
(Using some scripts which create "fake routers" so I could look without 
any of this icky OpenStack stuff in the way :) ) The Ubuntu 12.04 
versions of each at least will enumerate all the interfaces on the 
system, even though they don't need to.


There was already an upstream change to 'ip' that eliminates the 
unnecessary enumeration.  In the last few weeks an enhancement went into 
the upstream sudo that allows one to configure sudo to not do the same 
thing.   Down in the low(ish) three figures of interfaces it may not be 
a Big Deal (tm) but as one starts to go beyond that...


commit f0124b0f0aa0e5b9288114eb8e6ff9b4f8c33ec8
Author: Stephen Hemminger 
Date:   Thu Mar 28 15:17:47 2013 -0700

ip: remove unnecessary ll_init_map

Don't call ll_init_map on modify operations
Saves significant overhead with 1000's of devices.

http://www.sudo.ws/pipermail/sudo-workers/2014-January/000826.html

Whether your environment already has the 'ip' change I don't know, but 
odds are probably pretty good it doesn't have the sudo enhancement.
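
If memory serves, the sudo knob ends up looking like this in sudo.conf 
(check the documentation for your build):

# /etc/sudo.conf
Set probe_interfaces false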



That's the time since you reboot a network node, until all namespaces
and services are restored.


So, that includes the time for the system to go down and reboot, not 
just the time it takes to rebuild once rebuilding starts?



If you see appendix "1", this extra 14min overhead, matches with the
fact that rootwrap needs 0.3s to start, and launch a system command
(once filtered).

 14minutes =  840 s.
 (840s. / 192 resources)/0.3s ~= 15 operations /
resource(qdhcp+qrouter) (iptables, ovs port creation & tagging, starting
child processes, etc..)

The overhead comes from python startup time + rootwrap loading.


How much of the time is python startup time?  I assume that would be all 
the "find this lib, find that lib" stuff one sees in a system call 
trace?  I saw a boatload of that at one point but didn't quite feel like 
wading into that at the time.



I suppose that rootwrap was designed for a lower volume of system
calls (nova?).


And/or a smaller environment perhaps.


And, I understand what rootwrap provides, a level of filtering that
sudo cannot offer. But it raises some question:

1) Is anyone actually using rootwrap in production?

2) What alternatives can we think about to improve this situation.

0) already being done: coalescing system calls. But I'm unsure
that's enough. (if we coalesce 15 calls to 3 on this system we get:
192*3*0.3/60 ~=3 minutes overhead on a 10min operation).


It may not be sufficient, but it is (IMO) certainly necessary.  It will 
make any work that minimizes or eliminates the overhead of rootwrap look 
that much better.



a) Rewriting rules into sudo (to the extent that it's possible), and
live with that.
b) How secure is neutron about command injection to that point? How
much is user input filtered on the API calls?
c) Even if "b" is ok , I suppose that if the DB gets compromised,
that could lead to command injection.

d) Re-writing rootwrap into C (it's 600 python LOCs now).

e) Doing the command filtering at neutron-side, as a library and
live with sudo with simple filtering. (we kill the python/rootwrap
startup overhead).

3) I also find 10 minutes a long time to setup 192 networks/basic tenant
structures, I wonder if that time could be reduced by conversion
of system process calls into system library calls (I know we don't have
libraries for iproute, iptables?, and many other things... but it's a
problem that's probably worth looking at.)


Certainly going back and forth creating short-lived processes is at 
least anti-social and perhaps ever so slightly upsetting to the process 
scheduler.  Particularly "at scale."  The/a problem is though that the 
Linux networking folks have been somewhat reticent about creating 
libraries (at least any that they would end-up supporting) because they 
have a concern it will lock-in interfaces and reduce their freedom of 
movement.


happy benchmarking,

rick jones
the fastest procedure call is the one you never make



Best,
Miguel Ángel Ajo.


Appendix:

[1] Analyzing overhead:

[root@rhos4-neutron2 ~]# echo "int main() { return 0; }" > test.c
[root@rhos4-neutron2 ~]# gcc test.c -o test
[root@rhos4-neutron2 ~]# time ./test  # to time process invocation

Re: [openstack-dev] [TripleO][Neutron] PMTUd broken in gre networks

2014-01-22 Thread Rick Jones

On 01/22/2014 03:01 AM, Robert Collins wrote:

I certainly think having the MTU set to the right value is important.
I wonder if there's a standard way we can signal the MTU (e.g. in the
virtio interface) other than DHCP. Not because DHCP is bad, but
because that would work with statically injected network configs as
well.


Can LLDP be used here somehow?  It might require "stretching"  things a 
bit - not all LLDP agents seem to include the information, and it might 
require some sort of "cascade."  It would also require the VM to pay 
attention to the frames as they arrive, but in broad, hand-waving, 
blue-sky theory it could communicate maximum frame size information 
within the broadcast domain.


rick



Re: [openstack-dev] [Neutron] Partially Shared Networks

2014-01-13 Thread Rick Jones

On 01/13/2014 07:32 AM, Jay Pipes wrote:

On Mon, 2014-01-13 at 10:23 +, Stephen Gran wrote:

Hi,

I don't think that's what's being asked for. Just that there be more
than the current check for '(isowner of network) or (shared)'

If the data point could be 'enabled for network' for a given tenant,
that would be more flexible.


Agreed, but I believe Mathieu is thinking more in terms of how such a
check could be implemented. What makes this problematic (at least in my
simplistic understanding of Neutron wiring) is that there is no
guarantee that tenant A's subnet does not overlap with tenant B's
subnet. Because Neutron allows overlapping subnets (since Neutron uses
network namespaces for isolating traffic), code would need to be put in
place that says, basically, "if this network is shared between tenants,
then do not allow overlapping subnets, since a single, shared network
namespace will be needed that routes traffic between the tenants".

Or at least, that's what I *think* is part of the problem...


Are such checks actually necessary?  That is to say, unless it will 
completely fubar something internally ina database or something (versus 
just having confused routing), I would think that it would be but a 
nicety for Neutron runtime to warn the user(s) they were about to try to 
connect overlapping subnets to the same router.  Nice to report it 
perhaps as a warning, but not an absolutely required bit of 
functionality to go forward.


If Tenant A and Tenant B were separate, recently merged companies, they 
would have to work-out, in advance, issues of address overlap before 
they could join their two networks.  At one level at least, we could 
consider their trying to do the same sort of thing within the context of 
Neutron as being the same.



FWIW, here is an intra-tenant attempt to assign two overlapping subnets 
to the same router.  Of course I'm probably playing with older bits in 
this particular sandbox and they won't reflect the current top-of-trunk:


$ nova list
+--------------------------------------+--------------------+--------+------------+-------------+-------------------------------+
| ID                                   | Name               | Status | Task State | Power State | Networks                      |
+--------------------------------------+--------------------+--------+------------+-------------+-------------------------------+
| d97a46ed-19eb-4a87-8536-eb9ca4ba3895 | overlap-net_lg     | ACTIVE | None       | Running     | overlap-net=192.168.123.2     |
| ad8d6c9c-9a4c-442e-aebf-fd30475b7675 | overlap-net0001_lg | ACTIVE | None       | Running     | overlap-net0001=192.168.123.2 |
+--------------------------------------+--------------------+--------+------------+-------------+-------------------------------+
$ neutron subnet-list
+--------------------------------------+--------------------+------------------+-------------------------------------------------------+
| id                                   | name               | cidr             | allocation_pools                                      |
+--------------------------------------+--------------------+------------------+-------------------------------------------------------+
| d6015301-e5bf-4f1a-b3b3-5bde71a52496 | overlap-subnet0001 | 192.168.123.0/24 | {"start": "192.168.123.2", "end": "192.168.123.254"}  |
| faddcc32-7bb6-4cb2-862e-7738e5c54f6d | overlap-subnet     | 192.168.123.0/24 | {"start": "192.168.123.2", "end": "192.168.123.254"}  |
+--------------------------------------+--------------------+------------------+-------------------------------------------------------+
$ neutron router-create overlap-router0001
Created a new router:
+---+--+
| Field | Value|
+---+--+
| admin_state_up| True |
| external_gateway_info |  |
| id| 88339018-d286-45ec-b2d2-ccb78ae78837 |
| name  | overlap-router0001   |
| status| ACTIVE   |
| tenant_id | 57367642563150   |
+---+--+
$ neutron router-interface-add overlap-router0001 overlap-subnet
Added interface b637cb32-c33a-4565-a6f3-b7ea22a02be0 to router 
overlap-router0001.

$ neutron router-interface-add overlap-router0001 overlap-subnet0001
400-{u'QuantumError': u'Bad router request: Cidr 192.168.123.0/24 of 
subnet d6015301-e5bf-4f1a-b3b3-5bde71a52496 overlaps with cidr 
192.168.123.0/24 of subnet faddcc32-7bb6-4cb2-862e-7738e5c54f6d'}


rick jones



Re: [openstack-dev] [Nova][Vmware]Bad Performance when creating a new VM

2014-01-08 Thread Rick Jones

On 01/07/2014 06:30 PM, Ray Sun wrote:

Stackers,
I tried to create a new VM using the VMwareVCDriver driver, but I found
it's very slow when I try to create a new VM; for example, a 7GB Windows
image took 3 hours.

Then I tried to use curl to upload an ISO to vCenter directly.

curl -H "Expect:" -v --insecure --upload-file
windows2012_server_cn_x64.iso
"https://administrator:root123.@200.21.0.99/folder/iso/windows2012_server_cn_x64.iso?dcPath=dataCenter&dsName=datastore2";

The average speed is 0.8 MB/s.

Finally, I tried to use the vSphere web client to upload it; that was only 250 KB/s.

I am not sure if there are any special configurations for the web interface of
vCenter. Please help.


I'm not fully versed in the plumbing, but while you are pushing via curl 
to 200.21.0.99 you might check the netstat statistics at the sending 
side, say once a minute, and see what the TCP retransmission rate 
happens to be.  If 200.21.0.99 has to push the bits to somewhere else 
you should follow that trail back to the point of origin, checking 
statistics on each node as you go.
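
Something like the following, once a minute, will do for a first look (the 
exact counter names vary a bit by platform):

netstat -s | egrep -i 'retrans|segments s'   # retransmissions vs. total segments sent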


You could, additionally, try running the likes of netperf (or iperf, but 
I have a natural inclination to suggest netperf...) between the same 
pairs of systems.  If netperf gets significantly better performance then 
you (probably) have an issue at the application layer rather than in the 
networking.


Depending on how things go with those, it may be desirable to get a 
packet trace of the upload via the likes of tcpdump.  It will be very 
much desirable to start the packet trace before the upload so you can 
capture the TCP connection establishment packets (aka the TCP 
SYNchronize segments) as those contain some important pieces of 
information about the capabilities of the connection.
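
For example (a sketch - start the capture before kicking-off the transfer):

tcpdump -i eth0 -s 128 -w upload.pcap host 200.21.0.99 &
# ... now run the curl upload, then stop the capture with kill %1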


rick jones




Re: [openstack-dev] Introducing the new OpenStack service for Containers

2013-11-19 Thread Rick Jones

On 11/19/2013 10:02 AM, James Bottomley wrote:

It is possible to extend the Nova APIs to control containers more fully,
but there was resistance do doing this on the grounds that it's
expanding the scope of Nova, hence the new project.


How well received would another CLI/API to learn be among the end-users?

rick jones



Re: [openstack-dev] Neutron -- creating networks with no assigned tenant

2013-07-16 Thread Rick Jones

On 07/16/2013 02:56 PM, Jay Pipes wrote:

On 07/16/2013 05:14 PM, Nachi Ueno wrote:

Hi Jay

IMO, you are mixing 'What' and "How".
This is my understandings.

"What is needed (Requirement)"
 [Requirement1] Network and Subnet will be assigned for a new tenant
automatically by the configuration

"How to do it (Implementation)"
- [nova-network] Pre-create a list of networks which are not owned
- [Neutron] an additional neutron API call on tenant creation
(an additional script is needed) or better support (keystone integration)


No, this is not done on tenant creation. That's my point. This was done
on instance launch with communication between Nova and nova-network (and
should be done on instance creation between nova and quantum, IMO)


But *could* it be done on tenant creation?  A "default" network if you 
will - along perhaps with things like default security groups etc...  If 
that were indeed done, then I believe that the pre-quantum, err, 
pre-neutron users would get a network associated with their instances 
without their having to do the five or so neutron commands to create a 
network and get it connected to the rest of the world.


As it stands today under grizzly, if I launch an instance it will be 
automagically connected to whatever networks my tenant happens to have. 
 So, if there is a network created when the tenant is 
created/instantiated/whatever, what you want to have happen - nova 
instances created connected to a network without need of explicit action 
on the part of the user - will take place, no?


rick jones




Re: [openstack-dev] [Openstack] CLI command to figure out security-group's association to particular tenant/user

2013-06-28 Thread Rick Jones

On 06/28/2013 01:55 AM, Rahul Sharma wrote:

Thanks Aaron for your kind help. It worked. Is there any doc which lists
all the possible commands and their usage for quantum? Because --help
doesn't help in identifying all the parameters, is there any reference
which one can use to get the complete command syntax?


If you use "quantum help " rather than quantum --help, it will 
give you more detailed help about .  For example:


$ quantum help security-group-rule-create
usage: quantum security-group-rule-create [-h]
  [-f {html,json,shell,table,yaml}]
  [-c COLUMN] [--variable VARIABLE]
  [--prefix PREFIX]
  [--request-format {json,xml}]
  [--tenant-id TENANT_ID]
  [--direction {ingress,egress}]
  [--ethertype ETHERTYPE]
  [--protocol PROTOCOL]
  [--port-range-min PORT_RANGE_MIN]
  [--port-range-max PORT_RANGE_MAX]
  [--remote-ip-prefix 
REMOTE_IP_PREFIX]

  [--remote-group-id SOURCE_GROUP]
  SECURITY_GROUP

Create a security group rule.

positional arguments:
  SECURITY_GROUPSecurity group name or id to add rule.

optional arguments:
  -h, --helpshow this help message and exit
  --request-format {json,xml}
the xml or json request format
  --tenant-id TENANT_ID
the owner tenant ID
  --direction {ingress,egress}
direction of traffic: ingress/egress
  --ethertype ETHERTYPE
IPv4/IPv6
  --protocol PROTOCOL   protocol of packet
  --port-range-min PORT_RANGE_MIN
starting port range
  --port-range-max PORT_RANGE_MAX
ending port range
  --remote-ip-prefix REMOTE_IP_PREFIX
cidr to match on
  --remote-group-id SOURCE_GROUP
remote security group name or id to apply rule

output formatters:
  output formatter options

  -f {html,json,shell,table,yaml}, --format {html,json,shell,table,yaml}
the output format, defaults to table
  -c COLUMN, --column COLUMN
specify the column(s) to include, can be repeated

shell formatter:
  a format a UNIX shell can parse (variable="value")

  --variable VARIABLE   specify the variable(s) to include, can be repeated
  --prefix PREFIX   add a prefix to all variable names

rick jones
