Re: [openstack-dev] [neutron][api] GET call with huge argument list

2016-01-20 Thread Shraddha Pandhe
Thank you all for the comments.

The client that we expect to call this API with thousands of network-ids is
nova-scheduler.

Since this call is happening in the middle of scheduling, we don't want to
spend time paginating or sending multiple requests. I have tens of
thousands of networks and subnets in my test cluster right now, and at
that scale the extension takes more than 2 seconds to return. With
multiple calls, the scheduler will become very slow.

I agree that sending a payload with GET is not recommended, and most
libraries just drop the payload in such cases.
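
For reference, the chunk-and-parallelize approach suggested below would look
roughly like the following sketch; the endpoint URL, chunk size, and client
code are illustrative assumptions, not part of the original extension:

    import concurrent.futures
    import requests

    URL = 'http://hostname:port/neutron/v2.0/extension_name.json'  # placeholder
    HEADERS = {'Accept': 'application/json', 'X-Auth-Token': 'TOKEN'}
    CHUNK_SIZE = 100  # keep each URI safely under typical proxy limits

    def fetch_chunk(net_ids):
        # one GET per chunk, repeating the net-id query parameter
        params = [('net-id', nid) for nid in net_ids]
        resp = requests.get(URL, headers=HEADERS, params=params)
        resp.raise_for_status()
        return resp.json()

    def fetch_usage(all_net_ids):
        chunks = [all_net_ids[i:i + CHUNK_SIZE]
                  for i in range(0, len(all_net_ids), CHUNK_SIZE)]
        with concurrent.futures.ThreadPoolExecutor(max_workers=8) as pool:
            return list(pool.map(fetch_chunk, chunks))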



On Wed, Jan 20, 2016 at 2:27 PM, Salvatore Orlando <salv.orla...@gmail.com>
wrote:

> I tend to agree with Doug and Ryan's stance. If you need to pass 1000s of
> network-ids in a single request, you're probably not doing things right on
> the client side.
> As Ryan suggested, you can try to split the request into multiple requests
> with an acceptable URI length and send them in parallel; this will add some
> overhead, but should work flawlessly.
>
> Once tags are implemented, you will be able to leverage them to
> simplify your queries.
>
> Regarding GET requests with plenty of parameters, this discussion came up
> on the mailing list a while ago [1]. A good proposal was made in that
> thread but never formalised as an API-wg guideline; you might consider
> submitting a patch to the API-wg too.
> Note however that Neutron won't be able to support it out of the box
> considering its WSGI framework completely ignores request bodies on GET
> requests.
>
> Salvatore
>
> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2015-November/078243.html
>
> On 20 January 2016 at 12:33, Ryan Brown <rybr...@redhat.com> wrote:
>
>> So having a URI too long error is, in this case, likely an indication
>> that you're requesting too many things at once.
>>
>> You could:
>> 1. Request 100 at a time in parallel
>> 2. Find a query that would give you all those networks & page through the
>> reply
>> 3. Page through all the user's networks and filter client-side
>>
>> How is the user supposed to be assembling this giant UUID list? I'd think
>> it would be easier for them to specify a query (e.g. "get usage data for
>> all my production subnets" or something).
>>
>>
>> On 01/19/2016 06:59 PM, Shraddha Pandhe wrote:
>>
>>> Hi folks,
>>>
>>>
>>> I am writing a Neutron extension which needs to take 1000s of
>>> network-ids as argument for filtering. The CURL call is as follows:
>>>
>>> curl -i -X GET
>>> 'http://hostname:port
>>> /neutron/v2.0/extension_name.json?net-id=fffecbd1-0f6d-4f02-aee7-ca62094830f5&net-id=fffeee07-4f94-4cff-bf8e-a2aa7be59e2e'
>>> -H "User-Agent: python-neutronclient" -H "Accept: application/json" -H
>>> "X-Auth-Token: "
>>>
>>>
>>> The list of net-ids can go up to 1000s. The problem is, with such a large
>>> URL, I get the "Request URI too long" error. I don't want to raise this
>>> limit, as proxies can have their own limits.
>>>
>>> What options do I have to send 1000s of network IDs?
>>>
>>> 1. -d '{}' is not a recommended option for a GET call, and the wsgi
>>> Controller drops the data part when routing the request.
>>>
>>> 2. Use POST instead of GET? I will need to write the get_<resource>
>>> logic inside the create_resource logic for this to work. It's a hack, but
>>> it complies with the HTTP standard.
>>>
>>>
>>>
>>>
>>>
>> --
>> Ryan Brown / Senior Software Engineer, Openstack / Red Hat, Inc.
>>
>>
>>
>
>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][api] GET call with huge argument list

2016-01-19 Thread Shraddha Pandhe
Hi Doug,

What would be the reason for such a timeout? Based on my current tests, it
doesn't take more than a few hundred milliseconds to return.

Here is what I am trying to do:

I have a Neutron extension that returns IP usage per subnet per network. It
needs to support:

1. Return usage info for all networks (default if no filters specified)
2. Return usage info for one network id
3. Return usage info for several network ids. This can go up to 1000
network ids.

I have added similar comments to the existing implementation currently
being reviewed: https://review.openstack.org/#/c/212955/16
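
To make the three modes above concrete, the calls might look roughly like
this; the URL and parameter names follow the curl example quoted below and
are illustrative:

    import requests

    URL = 'http://hostname:port/neutron/v2.0/extension_name.json'  # placeholder
    HEADERS = {'Accept': 'application/json', 'X-Auth-Token': 'TOKEN'}
    net_ids = ['fffecbd1-0f6d-4f02-aee7-ca62094830f5']  # example ids

    # 1. usage for all networks (default when no filter is given)
    requests.get(URL, headers=HEADERS)
    # 2. usage for one network
    requests.get(URL, headers=HEADERS, params={'net-id': net_ids[0]})
    # 3. usage for several networks: repeat the parameter once per id;
    #    this is what hits the URI length limit at 1000s of ids
    requests.get(URL, headers=HEADERS, params=[('net-id', n) for n in net_ids])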




On Tue, Jan 19, 2016 at 4:14 PM, Doug Wiegley <doug...@parksidesoftware.com>
wrote:

> It would have to be a POST, but even that will start to have timeout
> issues. An API which requires that kind of input is kind of a bad idea.
> Perhaps you could describe what you’re trying to do?
>
> Thanks,
> doug
>
> On Jan 19, 2016, at 4:59 PM, Shraddha Pandhe <spandhe.openst...@gmail.com>
> wrote:
>
> Hi folks,
>
>
> I am writing a Neutron extension which needs to take 1000s of network-ids
> as argument for filtering. The CURL call is as follows:
>
> curl -i -X GET '
> http://hostname:port/neutron/v2.0/extension_name.json?net-id=fffecbd1-0f6d-4f02-aee7-ca62094830f5&net-id=fffeee07-4f94-4cff-bf8e-a2aa7be59e2e'
> -H "User-Agent: python-neutronclient" -H "Accept: application/json" -H
> "X-Auth-Token: "
>
>
> The list of net-ids can go up to 1000s. The problem is, with such a large
> URL, I get the "Request URI too long" error. I don't want to raise this
> limit, as proxies can have their own limits.
>
> What options do I have to send 1000s of network IDs?
>
> 1. -d '{}' is not a recommended option for a GET call, and the wsgi Controller
> drops the data part when routing the request.
>
> 2. Use POST instead of GET? I will need to write the get_<resource> logic
> inside the create_resource logic for this to work. It's a hack, but it
> complies with the HTTP standard.
>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][api] GET call with huge argument list

2016-01-19 Thread Shraddha Pandhe
Hi folks,


I am writing a Neutron extension which needs to take 1000s of network-ids
as arguments for filtering. The CURL call is as follows:

curl -i -X GET 
'http://hostname:port/neutron/v2.0/extension_name.json?net-id=fffecbd1-0f6d-4f02-aee7-ca62094830f5&net-id=fffeee07-4f94-4cff-bf8e-a2aa7be59e2e'
-H "User-Agent: python-neutronclient" -H "Accept: application/json" -H
"X-Auth-Token: "


The list of net-ids can go up to 1000s. The problem is, with such a large
URL, I get the "Request URI too long" error. I don't want to raise this
limit, as proxies can have their own limits.

What options do I have to send 1000s of network IDs?

1. -d '{}' is not a recommended option for a GET call, and the wsgi Controller
drops the data part when routing the request.

2. Use POST instead of GET? I will need to write the get_<resource> logic
inside the create_resource logic for this to work. It's a hack, but it
complies with the HTTP standard.
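
A rough sketch of option 2, assuming a hypothetical controller whose POST
body carries the filter; all names here are illustrative:

    import requests

    URL = 'http://hostname:port/neutron/v2.0/extension_name.json'  # placeholder
    HEADERS = {'Accept': 'application/json', 'X-Auth-Token': 'TOKEN'}
    net_ids = ['fffecbd1-0f6d-4f02-aee7-ca62094830f5']  # 1000s in practice

    # client side: thousands of IDs travel in the body, not the URI
    resp = requests.post(URL, headers=HEADERS,
                         json={'filters': {'network_id': net_ids}})

    # server side: the POST handler simply delegates to the GET logic
    class UsageController(object):
        def create(self, request, body):
            ids = body.get('filters', {}).get('network_id', [])
            return self._get_usage_for_networks(ids)  # hypothetical helper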
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic]Ironic operations on nodes in maintenance mode

2015-11-24 Thread Shraddha Pandhe
On Tue, Nov 24, 2015 at 7:39 AM, Jim Rollenhagen <j...@jimrollenhagen.com>
wrote:

> On Mon, Nov 23, 2015 at 03:35:58PM -0800, Shraddha Pandhe wrote:
> > Hi,
> >
> > I would like to know how everyone is using maintenance mode and what is
> > expected from admins about nodes in maintenance. The reason I am bringing
> > up this topic is because, most of the ironic operations, including manual
> > cleaning are not allowed for nodes in maintenance. Thats a problem for
> us.
> >
> > The way we use it is as follows:
> >
> > We allow users to put nodes in maintenance mode (indirectly) if they find
> > something wrong with the node. They also provide a maintenance reason along
> > with it, which gets stored as "user_reason" under maintenance_reason. So
> > basically we tag it as a user-specified reason.
> >
> > To debug what happened to the node, our operators use manual cleaning to
> > re-image the node. By doing this, they can find out all the issues related
> > to re-imaging (dhcp, ipmi, image transfer, etc). This debugging process
> > applies to all the nodes that were put in maintenance, either by the user
> > or by the system (due to power cycle failure or due to cleaning failure).
>
> Interesting; do you let the node go through cleaning between the user
> nuking the instance and doing this manual cleaning stuff?
>

Do you mean automated cleaning? If so, yes, we let that go through, since
that's allowed in maintenance mode.

>
> At Rackspace, we leverage the fact that maintenance mode will not allow
> the node to proceed through the state machine. If a user reports a
> hardware issue, we maintenance the node on their behalf, and when they
> delete it, it boots the agent for cleaning and begins heartbeating.
> Heartbeats are ignored in maintenance mode, which gives us time to
> investigate the hardware, fix things, etc. When the issue is resolved,
> we remove maintenance mode, it goes through cleaning, then back in the
> pool.


What is the provision state when maintenance mode is removed? Does it
automatically go back into the available pool? How does a user report a
hardware issue?

Due to our large scale, we can't always ensure that someone will take care of
the node right away. So we have some automation to make sure that the user's
quota is freed:

1. If a user finds some problem with the node, the user calls our break-fix
extension (with reason for break-fix) which deletes the instance for the
user and frees the quota.
2. Internally nova deletes the instance and calls destroy on virt driver.
This follows the normal delete flow with automated cleaning.
3. We have an automated tool called Reparo which constantly monitors the
node list for nodes in maintenance mode.
4. If it finds any nodes in maintenance, it runs one round of manual
cleaning on it to check if the issue was transient.
5. If cleaning fails, we need someone to take a look at it.
6. If cleaning succeeds, we put the node back in available pool.

This is the only way we can scale to hundreds of thousands of nodes. If manual
cleaning were not allowed in maintenance mode, our operators would hate us :)

If the provision state of the node is such that the node cannot be
picked up by the scheduler, we can remove maintenance mode and run manual
cleaning.
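
A minimal sketch of the Reparo-style loop described above, talking to the
Ironic REST API directly; the endpoints are the standard ones, but the clean
step shown and the error handling are illustrative assumptions:

    import requests

    IRONIC = 'http://ironic-api:6385/v1'  # placeholder endpoint
    HEADERS = {'X-Auth-Token': 'TOKEN',
               'X-OpenStack-Ironic-API-Version': '1.15'}  # manual clean support

    def check_maintenance_nodes():
        # step 3: find all nodes currently in maintenance mode
        nodes = requests.get(IRONIC + '/nodes?maintenance=True',
                             headers=HEADERS).json()['nodes']
        for node in nodes:
            # step 4: one round of manual cleaning to rule out transient issues
            body = {'target': 'clean',
                    'clean_steps': [{'interface': 'deploy',
                                     'step': 'erase_devices'}]}
            requests.put(IRONIC + '/nodes/%s/states/provision' % node['uuid'],
                         headers=HEADERS, json=body)
            # steps 5/6: on failure an operator investigates; on success the
            # node has maintenance removed and goes back to the available pool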


> We used to enroll nodes in maintenance mode, back when the API put them
> in the available state immediately, to avoid them being scheduled to
> until we knew they were good to go. The enroll state solved this for us.
>
> Last, we use maintenance mode on available nodes if we want to
> temporarily pull them from the pool for a manual process or some
> testing. This can also be solved by the manageable state.
>
> // jim
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][ipam][devstack] How to use IPAM reference in devstack

2015-11-23 Thread Shraddha Pandhe
Hi John,

Thanks for letting me know. How do I set up a fresh devstack with pluggable
IPAM enabled in the meantime?

On Mon, Nov 23, 2015 at 5:34 PM, John Belamaric <jbelama...@infoblox.com>
wrote:

> It currently only supports greenfield deployments. See [1] - we plan to
> build a migration in the Mitaka time frame.
>
> John
>
> [1] https://bugs.launchpad.net/neutron/+bug/1516156
>
>
>
> On Nov 23, 2015, at 8:05 PM, Shraddha Pandhe <spandhe.openst...@gmail.com>
> wrote:
>
> Hi folks,
>
> What is the right way to use the IPAM reference implementation with devstack?
> When I set up devstack, I didn't have the setting
>  ipam_driver = internal
>
> I changed it afterwards. But now when I try to create a port, I get this
> error:
>
> 2015-11-23 21:23:00.078 ERROR neutron.ipam.drivers.neutrondb_ipam.driver
> [req-e6b5ed3c-9fb0-4c0b-a5b9-c8fdecea0cf0 admin
> 82e842f0f1b548c28f11be231744ee24] IPAM subnet referenced to Neutron subnet
> a1ddebd1-e63b-41af-a1d3-d5611da46f7e does not exist
>
> MariaDB [neutron]> select * from ipamsubnets;
> Empty set (0.00 sec)
>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][ipam][devstack] How to use IPAM reference in devstack

2015-11-23 Thread Shraddha Pandhe
Hi Thanh,

Thanks a lot. That helped.
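
For anyone else hitting this: on a fresh devstack, the same setting can also
be applied declaratively through local.conf (the standard post-config
mechanism, assuming a default devstack layout), rather than editing
lib/neutron-legacy:

    [[post-config|$NEUTRON_CONF]]
    [DEFAULT]
    ipam_driver = internal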



On Mon, Nov 23, 2015 at 8:45 PM, thanh le giang <legiangth...@gmail.com>
wrote:

> Hi Shraddha
>
> You can add this line:  iniset $NEUTRON_CONF DEFAULT ipam_driver "internal"
> at the end of the configure_neutron function in the lib/neutron-legacy file.
>
> Thanh
>
> 2015-11-24 11:20 GMT+07:00 Shraddha Pandhe <spandhe.openst...@gmail.com>:
>
>> Hi John,
>>
>> Thanks for letting me know. How do I set up a fresh devstack with pluggable
>> IPAM enabled in the meantime?
>>
>> On Mon, Nov 23, 2015 at 5:34 PM, John Belamaric <jbelama...@infoblox.com>
>> wrote:
>>
>>> It currently only supports greenfield deployments. See [1] - we plan to
>>> build a migration in the Mitaka time frame.
>>>
>>> John
>>>
>>> [1] https://bugs.launchpad.net/neutron/+bug/1516156
>>>
>>>
>>>
>>> On Nov 23, 2015, at 8:05 PM, Shraddha Pandhe <
>>> spandhe.openst...@gmail.com> wrote:
>>>
>>> Hi folks,
>>>
>>> What is the right way to use the IPAM reference implementation with
>>> devstack? When I set up devstack, I didn't have the setting
>>>  ipam_driver = internal
>>>
>>> I changed it afterwards. But now when I try to create a port, I get this
>>> error:
>>>
>>> 2015-11-23 21:23:00.078 ERROR neutron.ipam.drivers.neutrondb_ipam.driver
>>> [req-e6b5ed3c-9fb0-4c0b-a5b9-c8fdecea0cf0 admin
>>> 82e842f0f1b548c28f11be231744ee24] IPAM subnet referenced to Neutron subnet
>>> a1ddebd1-e63b-41af-a1d3-d5611da46f7e does not exist
>>>
>>> MariaDB [neutron]> select * from ipamsubnets;
>>> Empty set (0.00 sec)
>>>
>>>
>>>
>>>
>>
>>
>>
>
>
> --
> L.G.Thanh
>
> Email: legiangt...@gmail.com <legiangth...@gmail.com>
> lgth...@fit.hcmus.edu.vn
>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic]Ironic operations on nodes in maintenance mode

2015-11-23 Thread Shraddha Pandhe
Hi,

I would like to know how everyone is using maintenance mode and what is
expected from admins about nodes in maintenance. The reason I am bringing
up this topic is that most of the Ironic operations, including manual
cleaning, are not allowed for nodes in maintenance. That's a problem for us.

The way we use it is as follows:

We allow users to put nodes in maintenance mode (indirectly) if they find
something wrong with the node. They also provide a maintenance reason along
with it, which gets stored as "user_reason" under maintenance_reason. So
basically we tag it as a user-specified reason.

To debug what happened to the node, our operators use manual cleaning to
re-image the node. By doing this, they can find out all the issues related
to re-imaging (dhcp, ipmi, image transfer, etc). This debugging process
applies to all the nodes that were put in maintenance, either by the user or
by the system (due to power cycle failure or due to cleaning failure).

This is how we use maintenance mode in Ironic.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][ipam][devstack] How to use IPAM reference in devstack

2015-11-23 Thread Shraddha Pandhe
Hi folks,

What is the right way to use the IPAM reference implementation with devstack?
When I set up devstack, I didn't have the setting
 ipam_driver = internal

I changed it afterwards. But now when I try to create a port, I get this
error:

2015-11-23 21:23:00.078 ERROR neutron.ipam.drivers.neutrondb_ipam.driver
[req-e6b5ed3c-9fb0-4c0b-a5b9-c8fdecea0cf0 admin
82e842f0f1b548c28f11be231744ee24] IPAM subnet referenced to Neutron subnet
a1ddebd1-e63b-41af-a1d3-d5611da46f7e does not exist

MariaDB [neutron]> select * from ipamsubnets;
Empty set (0.00 sec)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPAM] Arbitrary JSON blobs in ipam db tables

2015-11-09 Thread Shraddha Pandhe
Hi Carl,

Please find my reply inline.


On Mon, Nov 9, 2015 at 9:49 AM, Carl Baldwin <c...@ecbaldwin.net> wrote:

> On Fri, Nov 6, 2015 at 2:59 PM, Shraddha Pandhe <
> spandhe.openst...@gmail.com> wrote:
>>
>> We have a similar requirement where we want to pick a network that's
>> accessible in the rack that the VM belongs to. We have L3 top-of-rack, so the
>> network is confined to the rack. Right now, we are achieving this by naming
>> the physical network in a certain way, but that's not going to scale.
>>
>> We also want to be able to make scheduling decisions based on IP
>> availability. So we need to know the rack <-> network <-> mapping.  We can't
>> embed all factors in a name. It would be impossible to make scheduling
>> decisions by parsing names and comparing them. GoDaddy has also been doing
>> something similar [1], [2].
>>
>
> This is precisely the use case that the large deployers team (LDT) has
> brought to Neutron [1].  In fact, GoDaddy has been at the forefront of that
> request.  We've had discussions about this since just after Vancouver on
> the ML.  I've put up several specs to address it [2] and I'm working
> another revision of it.  My take on it is that Neutron needs a model for a
> layer 3 network (IpNetwork) which would group the rack networks.  The
> IpNetwork would be visible to the end user and there will be a network <->
> host mapping.  I am still aiming to have working code for this in Mitaka.
> I discussed this with the LDT in Tokyo and they seemed to agree.  We had a
> session on this in the Neutron design track [3][4] though that discussion
> didn't produce anything actionable.
>
That's great. An L3 network model is definitely one of our most important
requirements. All our go-forward deployments are going to be L3, so this is
a big deal for us.


> Solving this problem at the IPAM level has come up in discussion but I
> don't have any references for that.  It is something that I'm still
> considering but I haven't worked out all of the details for how this can
> work in a portable way.  Could you describe how you imagine how this flow
> would work from a user's perspective?  Specifically, when a user wants to
> boot a VM, what precise API calls would be made to achieve this on your
> network and how where would the IPAM data come in to play?
>

Here's what the flow looks like to me.

1. The user sends a boot request as usual. The user need not know all the
network and subnet information beforehand; all they would do is send a boot
request.

2. The scheduler will pick a node in an L3 rack. The way we map nodes <->
racks is as follows:
a. For VMs, we store rack_id in nova.conf on compute nodes
b. For Ironic nodes, right now we have static IP allocation, so we
practically know which IP we want to assign. But when we move to dynamic
allocation, we would probably use 'chassis' or 'driver_info' fields to
store the rack id.

3. Nova compute will try to pick a network ID for this instance.  At this
point, it needs to know what networks (or subnets) are available in this
rack. Based on that, it will pick a network ID and send a port creation
request to Neutron. At Yahoo, to avoid some back-and-forth, we send a fake
network_id and let the plugin do all the work.

4. We need some information associated with the network/subnet that tells
us what rack it belongs to. Right now, for VMs, we have that information
embedded in the physnet name, but we would like to move away from that. If we
had a column for subnets - e.g. a tag - it would solve our problem. Ideally,
we would like a 'rack_id' column, or a new 'racks' table that maps racks to
subnets, or something similar (see the sketch after this list). We are open to
different ideas that work for everyone. This is where IPAM can help.

5. We have another requirement where we want to store multiple gateway
addresses for a subnet, just like name servers.
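
A minimal schema sketch for the rack <-> subnet mapping mentioned in step 4;
the table and column names are hypothetical, not an agreed design:

    # Hypothetical SQLAlchemy models sketching the 'racks' idea above.
    import sqlalchemy as sa
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class Rack(Base):
        __tablename__ = 'racks'
        id = sa.Column(sa.String(36), primary_key=True)
        switch_info = sa.Column(sa.String(255), nullable=True)

    class RackSubnet(Base):
        __tablename__ = 'rack_subnets'
        rack_id = sa.Column(sa.String(36), sa.ForeignKey('racks.id'),
                            primary_key=True)
        subnet_id = sa.Column(sa.String(36), primary_key=True)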


We also have a requirement where we want to make scheduling decisions based
on IP availability. We want to allocate multiple IPs to the hosts, e.g. X IPs
to a single host. The flow in that case would be:

1. User sends a boot request with --num-ips X
The network/subnet level complexities need not be exposed to the user.
For better experience, all we want our users to tell us is the number of
IPs they want.

2. When the scheduler tries to find an appropriate host in the L3 racks, we
want it to find a rack that can satisfy this IP requirement. So, the
scheduler will basically say, "give me all racks that have >X IPs
available". If we have a 'racks' table in IPAM, that would help.
    Once the scheduler gets a rack, it will apply the remaining filters to
narrow down to one host and call nova-compute. The IP count will be
propagated to nova-compute from the scheduler.


3. Nova compute will call Neutron and send the node details and IP count
along. Neutro

Re: [openstack-dev] [Neutron][IPAM] Arbitrary JSON blobs in ipam db tables

2015-11-09 Thread Shraddha Pandhe
Gary,

I agree. Moving away from that option, I am trying to propose the idea of
extended IPAM tables: https://bugs.launchpad.net/neutron/+bug/1513981 and
https://review.openstack.org/#/c/242688/

On Sun, Nov 8, 2015 at 12:10 AM, Gary Kotton <gkot...@vmware.com> wrote:

> I think that if we can move to a versioned object model then it will
> be better. Having random json blobs passed around is going to cause
> problems.
>
> From: "Armando M." <arma...@gmail.com>
> Reply-To: OpenStack List <openstack-dev@lists.openstack.org>
> Date: Wednesday, November 4, 2015 at 11:38 PM
> To: OpenStack List <openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [Neutron][IPAM] Arbitrary JSON blobs in ipam
> db tables
>
>
>
> On 4 November 2015 at 13:21, Shraddha Pandhe <spandhe.openst...@gmail.com>
> wrote:
>
>> Hi Salvatore,
>>
>> Thanks for the feedback. I agree with you that arbitrary JSON blobs will
>> make IPAM much more powerful. Some other projects already do things like
>> this.
>>
>> e.g. In Ironic, a node has driver_info, which is JSON. It also has an
>> 'extras' arbitrary JSON field. This allows us to put any information in
>> there that we think is important for us.
>>
>
> I personally feel that relying on json blobs is not only dangerously
> affecting portability, but it also causes us to bloat the business logic and
> forces us to be less efficient when querying/filtering data.
>
> Most importantly though, I feel it's like abdicating our responsibility to
> do a good design job. Ultimately, we should be able to identify how to
> model these extensions you're thinking of both conceptually and logically.
>
> I couldn't care less if other projects use it, but we ended up using it in
> Neutron too, and since I lost this battle time and time again, all I am
> left with is this rant :)
>
>
>>
>>
>> Hoping to get some positive feedback from API and DB lieutenants too.
>>
>>
>> On Wed, Nov 4, 2015 at 1:06 PM, Salvatore Orlando <salv.orla...@gmail.com
>> > wrote:
>>
>>> Arbitrary blobs are powerful tools to circumvent limitations of an
>>> API, as well as other constraints which might be imposed for versioning or
>>> portability purposes.
>>> The parameters that should end up in such blob are typically specific
>>> for the target IPAM driver (to an extent they might even identify a
>>> specific driver to use), and therefore an API consumer who knows what
>>> backend is performing IPAM can surely leverage it.
>>>
>>> Therefore this would make a lot of sense, assuming API portability and
>>> not leaking backend details are not a concern.
>>> The Neutron team API & DB lieutenants will be able to provide more input
>>> on this regard.
>>>
>>> In this case other approaches such as a vendor specific extension are
>>> not a solution - assuming your granularity level is the allocation pool;
>>> indeed allocation pools are not first-class neutron resources, and it is
>>> not therefore possible to have APIs which associate vendor specific
>>> properties to allocation pools.
>>>
>>> Salvatore
>>>
>>> On 4 November 2015 at 21:46, Shraddha Pandhe <
>>> spandhe.openst...@gmail.com> wrote:
>>>
>>>> Hi folks,
>>>>
>>>> I have a small question/suggestion about IPAM.
>>>>
>>>> With IPAM, we are allowing users to have their own IPAM drivers so that
>>>> they can manage IP allocation. The problem is, the new ipam tables in the
>>>> database have the same columns as the old tables. So, as a user, if I want
>>>> to have my own logic for ip allocation, I can't actually get any help from
>>>> the database. Whereas, if we had an arbitrary json blob in the ipam tables,
>>>> I could put any useful information/tags there that could help me with
>>>> allocation.
>>>>
>>>> Does this make sense?
>>>>
>>>> e.g. If I want to create multiple allocation pools in a subnet and use
>>>> them for different purposes, I would need some sort of tag for each
>>>> allocation pool for identification. Right now, there is no scope for doing
>>>> something like that.
>>>>
>>>> Any thoughts? If there are any other way to solve the problem, please
>>>> let me know
>>>>
>>>>
>>>>
>>>>
>>>>

Re: [openstack-dev] [Neutron][IPAM] Arbitrary JSON blobs in ipam db tables

2015-11-06 Thread Shraddha Pandhe
Replies inline.


On Fri, Nov 6, 2015 at 1:48 PM, Salvatore Orlando <salv.orla...@gmail.com>
wrote:

> More comments inline.
> I shall stop trying to be ironic (pun intended) in my posts.
>

:(


>
> Salvatore
>
> On 5 November 2015 at 18:37, Kyle Mestery <mest...@mestery.com> wrote:
>
>> On Thu, Nov 5, 2015 at 10:55 AM, Jay Pipes <jaypi...@gmail.com> wrote:
>>
>>> On 11/04/2015 04:21 PM, Shraddha Pandhe wrote:
>>>
>>>> Hi Salvatore,
>>>>
>>>> Thanks for the feedback. I agree with you that arbitrary JSON blobs will
>>>> make IPAM much more powerful. Some other projects already do things like
>>>> this.
>>>>
>>>
>>> :( Actually, though "powerful" it also leads to implementation details
>>> leaking directly out of the public REST API. I'm very negative on this and
>>> would prefer an actual codified REST API that can be relied on regardless
>>> of backend driver or implementation.
>>>
>>
>> I agree with Jay here. We've had people propose similar things in Neutron
>> before, and I've been against them. The entire point of the Neutron REST
>> API is to not leak these details out. It dampens the strength of the
>> logical model, and it tends to have users become reliant on backend
>> implementations.
>>
>
> I see I did not manage to convey accurately irony and sarcasm in my
> previous post ;)
> The point was that thanks to a blooming number of extensions the Neutron
> API is already hardly portable. Blob attributes (or dict attributes, or
> key/value list attributes, or whatever does not have a precise schema) are
> a nail in the coffin, and also violate the only tenet Neutron has somehow
> managed to honour, which is being backend agnostic.
> And the fact that the port binding extension is pretty much that is not a
> valid argument, imho.
> On the other hand, I'm all in for extending DB schema and driver logic to
> suit all IPAM needs; at the end of the day that's what we do with plugins for
> all sorts of stuff.
>


Agreed. Filed an RFE bug: https://bugs.launchpad.net/neutron/+bug/1513981.
A spec is coming up for review.



>
>
>
>>
>>
>>>
>>>> e.g. In Ironic, a node has driver_info, which is JSON. It also has an
>>>> 'extras' arbitrary JSON field. This allows us to put any information in
>>>> there that we think is important for us.
>>>>
>>>
>>> Yeah, and this is a bad thing, IMHO. Public REST APIs should be
>>> structured, not a Wild West free-for-all. The biggest problem with using
>>> free-form JSON blobs in RESTful APIs like this is that you throw away the
>>> ability to evolve the API in a structured, versioned way. Instead of
>>> evolving the API using microversions, instead every vendor just jams
>>> whatever they feel like into the JSON blob over time. There's no way for
>>> clients to know what the server will return at any given time.
>>>
>>> Achieving consensus on a REST API that meets the needs of a variety of
>>> backend implementations is *hard work*, yes, but it's what we need to do if
>>> we are to have APIs that are viewed in the industry as stable,
>>> discoverable, and reliably useful.
>>>
>>
>> ++, this is the correct way forward.
>>
>
> Cool, but let me point out that experience has taught us that anything
> that is the result of a compromise between several parties following
> different agendas is bound to fail, as it does not fully satisfy the
> requirements of any stakeholder.
> If this information is needed for making scheduling decisions based on
> network requirements, then it makes sense to expose this information also
> at the API layer (I assume there are also plans for making the scheduler
> *seriously* network aware). However, this information should have a
> well-defined schema with no leeway for 'extensions'; such a schema can evolve
> over time.
>
>
>> Thanks,
>> Kyle
>>
>>
>>>
>>> Best,
>>> -jay
>>>
>>> Best,
>>> -jay
>>>
>>> Hoping to get some positive feedback from API and DB lieutenants too.
>>>>
>>>>
>>>> On Wed, Nov 4, 2015 at 1:06 PM, Salvatore Orlando
>>>> <salv.orla...@gmail.com <mailto:salv.orla...@gmail.com>> wrote:
>>>>
>>>> Arbitrary blobs are powerful tools to circumvent limitations of an
>>>> API, as well as other constraints which might be imposed for
>>>> versioning or portability purposes.
>>>>

Re: [openstack-dev] [Neutron][IPAM] Arbitrary JSON blobs in ipam db tables

2015-11-06 Thread Shraddha Pandhe
Bumping this up :)


Folks, does anyone else have a similar requirement to ours? Are folks
making scheduling decisions based on networking?



On Thu, Nov 5, 2015 at 12:24 PM, Shraddha Pandhe <
spandhe.openst...@gmail.com> wrote:

> Hi,
>
> I agree with all of you about the REST APIs.
>
> As I said before, I had to bring up the idea of a JSON blob because, based on
> previous discussions, it looked like the Neutron community was not willing to
> enhance the schemas for the different IPAM dbs. The entire rationale behind
> pluggable IPAM is to provide flexibility. So, the community should be open to
> ideas for enhancing the schema to incorporate more information in the db
> tables. I would be extremely happy if use cases from different companies were
> considered and the schema were enhanced to include specific columns in the db
> schemas instead of a column with a random JSON blob.
>
> Let's take the subnets db table as an example. We have some use cases where
> it would be great if the following information were associated with the
> subnet db table:
>
> 1. Rack switch info
> 2. Backplane info
> 3. DHCP ip helpers
> 4. Option to tag allocation pools inside subnets
> 5. Multiple gateway addresses
>
> We also want to store some information about the backplanes locally, so a
> different table might be useful.
>
> In a way, this information is not specific to our company. It's generic
> information which ought to go with the subnets. Different companies can use
> this information differently in their IPAM drivers. But the information
> needs to be made available to justify the flexibility of IPAM.
>
> At Yahoo!, OpenStack is still not the source of truth for this kind of
> information, and the database limitation is one of the reasons. I would prefer
> to avoid having our own database, to make sure that our use-cases are always
> shared with the community.
>
>
>
>
>
>
>
>
> On Thu, Nov 5, 2015 at 9:37 AM, Kyle Mestery <mest...@mestery.com> wrote:
>
>> On Thu, Nov 5, 2015 at 10:55 AM, Jay Pipes <jaypi...@gmail.com> wrote:
>>
>>> On 11/04/2015 04:21 PM, Shraddha Pandhe wrote:
>>>
>>>> Hi Salvatore,
>>>>
>>>> Thanks for the feedback. I agree with you that arbitrary JSON blobs will
>>>> make IPAM much more powerful. Some other projects already do things like
>>>> this.
>>>>
>>>
>>> :( Actually, though "powerful" it also leads to implementation details
>>> leaking directly out of the public REST API. I'm very negative on this and
>>> would prefer an actual codified REST API that can be relied on regardless
>>> of backend driver or implementation.
>>>
>>
>> I agree with Jay here. We've had people propose similar things in Neutron
>> before, and I've been against them. The entire point of the Neutron REST
>> API is to not leak these details out. It dampens the strength of the
>> logical model, and it tends to have users become reliant on backend
>> implementations.
>>
>>
>>>
>>> e.g. In Ironic, a node has driver_info, which is JSON. It also has an
>>> 'extras' arbitrary JSON field. This allows us to put any information in
>>> there that we think is important for us.
>>>>
>>>
>>> Yeah, and this is a bad thing, IMHO. Public REST APIs should be
>>> structured, not a Wild West free-for-all. The biggest problem with using
>>> free-form JSON blobs in RESTful APIs like this is that you throw away the
>>> ability to evolve the API in a structured, versioned way. Instead of
>>> evolving the API using microversions, instead every vendor just jams
>>> whatever they feel like into the JSON blob over time. There's no way for
>>> clients to know what the server will return at any given time.
>>>
>>> Achieving consensus on a REST API that meets the needs of a variety of
>>> backend implementations is *hard work*, yes, but it's what we need to do if
>>> we are to have APIs that are viewed in the industry as stable,
>>> discoverable, and reliably useful.
>>>
>>
>> ++, this is the correct way forward.
>>
>> Thanks,
>> Kyle
>>
>>
>>>
>>> Best,
>>> -jay
>>>
>>> Best,
>>> -jay
>>>
>>> Hoping to get some positive feedback from API and DB lieutenants too.
>>>>
>>>>
>>>> On Wed, Nov 4, 2015 at 1:06 PM, Salvatore Orlando
>>>> <salv.orla...@gmail.com <mailto:salv.orla...@gmail.com>> wrote:
>>>>
>>>> Arbitrary blobs are powerful tools to circumvent

Re: [openstack-dev] [Neutron][IPAM] Arbitrary JSON blobs in ipam db tables

2015-11-06 Thread Shraddha Pandhe
Hi Neil,

Please find my reply inline.

On Fri, Nov 6, 2015 at 1:08 PM, Neil Jerram <neil.jer...@metaswitch.com>
wrote:

> Yes, maybe. I'm interested in a pluggable IPAM module that will allocate
> an IP address for a VM that depends on where that VM's host is in the
> physical data center network. Is that similar to your requirement?
>
We have a similar requirement where we want to pick a network that's
accessible in the rack that the VM belongs to. We have L3 top-of-rack, so the
network is confined to the rack. Right now, we are achieving this by naming
the physical network in a certain way, but that's not going to scale.

We also want to be able to make scheduling decisions based on IP
availability. So we need to know the rack <-> network <-> mapping.  We can't
embed all factors in a name. It would be impossible to make scheduling
decisions by parsing names and comparing them. GoDaddy has also been doing
something similar [1], [2].


> I don't yet know whether that might lead me to want to store additional
> data in the Neutron DB. My intuition though is that it shouldn't, and that
> any additional data or state that I need for this IPAM module should be
> stored separately from the Neutron DB.
>

Where are you planning to store that information? If we need similar
information, and if more folks need it, we can add it to the Neutron DB in the
IPAM tables.

[1]
http://www.dorm.org/blog/openstack-architecture-at-go-daddy-part-3-nova/#Scheduler_Customization
[2]
http://www.dorm.org/blog/openstack-architecture-at-go-daddy-part-2-neutron/#Customizations_to_Abstract_Away_Layer_2


> Regards,
>Neil
>
>


>
> *From: *Shraddha Pandhe
> *Sent: *Friday, 6 November 2015 20:23
> *To: *OpenStack Development Mailing List (not for usage questions)
> *Reply To: *OpenStack Development Mailing List (not for usage questions)
> *Subject: *Re: [openstack-dev] [Neutron][IPAM] Arbitrary JSON blobs in
> ipam db tables
>
> Bumping this up :)
>
>
> Folks, does anyone else have a similar requirement to ours? Are folks
> making scheduling decisions based on networking?
>
>
>
> On Thu, Nov 5, 2015 at 12:24 PM, Shraddha Pandhe <
> spandhe.openst...@gmail.com> wrote:
>
>> Hi,
>>
>> I agree with all of you about the REST APIs.
>>
>> As I said before, I had to bring up the idea of a JSON blob because, based on
>> previous discussions, it looked like the Neutron community was not willing
>> to enhance the schemas for the different IPAM dbs. The entire rationale behind
>> pluggable IPAM is to provide flexibility. So, the community should be open to
>> ideas for enhancing the schema to incorporate more information in the db
>> tables. I would be extremely happy if use cases from different companies were
>> considered and the schema were enhanced to include specific columns in the db
>> schemas instead of a column with a random JSON blob.
>>
>> Let's take the subnets db table as an example. We have some use cases where
>> it would be great if the following information were associated with the
>> subnet db table:
>>
>> 1. Rack switch info
>> 2. Backplane info
>> 3. DHCP ip helpers
>> 4. Option to tag allocation pools inside subnets
>> 5. Multiple gateway addresses
>>
>> We also want to store some information about the backplanes locally, so a
>> different table might be useful.
>>
>> In a way, this information is not specific to our company. It's generic
>> information which ought to go with the subnets. Different companies can use
>> this information differently in their IPAM drivers. But the information
>> needs to be made available to justify the flexibility of IPAM.
>>
>> At Yahoo!, OpenStack is still not the source of truth for this kind of
>> information, and the database limitation is one of the reasons. I would prefer
>> to avoid having our own database, to make sure that our use-cases are always
>> shared with the community.
>>
>>
>>
>>
>>
>>
>>
>>
>> On Thu, Nov 5, 2015 at 9:37 AM, Kyle Mestery <mest...@mestery.com> wrote:
>>
>>> On Thu, Nov 5, 2015 at 10:55 AM, Jay Pipes <jaypi...@gmail.com> wrote:
>>>
>>>> On 11/04/2015 04:21 PM, Shraddha Pandhe wrote:
>>>>
>>>>> Hi Salvatore,
>>>>>
>>>>> Thanks for the feedback. I agree with you that arbitrary JSON blobs
>>>>> will
>>>>> make IPAM much more powerful. Some other projects already do things
>>>>> like
>>>>> this.
>>>>>
>>>>
>>>> :( Actually, though "powerful" it also leads to implementation details
>>>> leaking directly out of the public

Re: [openstack-dev] [Neutron][IPAM] Arbitrary JSON blobs in ipam db tables

2015-11-05 Thread Shraddha Pandhe
Hi,

I agree with all of you about the REST APIs.

As I said before, I had to bring up the idea of a JSON blob because, based on
previous discussions, it looked like the Neutron community was not willing to
enhance the schemas for the different IPAM dbs. The entire rationale behind
pluggable IPAM is to provide flexibility. So, the community should be open to
ideas for enhancing the schema to incorporate more information in the db
tables. I would be extremely happy if use cases from different companies were
considered and the schema were enhanced to include specific columns in the db
schemas instead of a column with a random JSON blob.

Let's take the subnets db table as an example. We have some use cases where it
would be great if the following information were associated with the subnet db
table:

1. Rack switch info
2. Backplane info
3. DHCP ip helpers
4. Option to tag allocation pools inside subnets
5. Multiple gateway addresses

We also want to store some information about the backplanes locally, so a
different table might be useful.

In a way, this information is not specific to our company. It's generic
information which ought to go with the subnets. Different companies can use
this information differently in their IPAM drivers. But the information
needs to be made available to justify the flexibility of IPAM.

At Yahoo!, OpenStack is still not the source of truth for this kind of
information, and the database limitation is one of the reasons. I would prefer
to avoid having our own database, to make sure that our use-cases are always
shared with the community.
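
As a concrete illustration of "specific columns instead of a random blob",
here is a hypothetical sketch of what the numbered list above might translate
into; the names and types are illustrative only:

    # Hypothetical SQLAlchemy models keyed to Neutron's subnets table.
    import sqlalchemy as sa
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class SubnetExtras(Base):
        __tablename__ = 'subnet_extras'
        subnet_id = sa.Column(sa.String(36), sa.ForeignKey('subnets.id'),
                              primary_key=True)
        rack_switch = sa.Column(sa.String(255))    # 1. rack switch info
        backplane = sa.Column(sa.String(255))      # 2. backplane info
        dhcp_ip_helper = sa.Column(sa.String(64))  # 3. DHCP ip helper

    class SubnetGateway(Base):                     # 5. multiple gateways
        __tablename__ = 'subnet_gateways'
        subnet_id = sa.Column(sa.String(36), sa.ForeignKey('subnets.id'),
                              primary_key=True)
        gateway_ip = sa.Column(sa.String(64), primary_key=True)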








On Thu, Nov 5, 2015 at 9:37 AM, Kyle Mestery <mest...@mestery.com> wrote:

> On Thu, Nov 5, 2015 at 10:55 AM, Jay Pipes <jaypi...@gmail.com> wrote:
>
>> On 11/04/2015 04:21 PM, Shraddha Pandhe wrote:
>>
>>> Hi Salvatore,
>>>
>>> Thanks for the feedback. I agree with you that arbitrary JSON blobs will
>>> make IPAM much more powerful. Some other projects already do things like
>>> this.
>>>
>>
>> :( Actually, though "powerful" it also leads to implementation details
>> leaking directly out of the public REST API. I'm very negative on this and
>> would prefer an actual codified REST API that can be relied on regardless
>> of backend driver or implementation.
>>
>
> I agree with Jay here. We've had people propose similar things in Neutron
> before, and I've been against them. The entire point of the Neutron REST
> API is to not leak these details out. It dampens the strength of the
> logical model, and it tends to have users become reliant on backend
> implementations.
>
>
>>
>>> e.g. In Ironic, a node has driver_info, which is JSON. It also has an
>>> 'extras' arbitrary JSON field. This allows us to put any information in
>>> there that we think is important for us.
>>>
>>
>> Yeah, and this is a bad thing, IMHO. Public REST APIs should be
>> structured, not a Wild West free-for-all. The biggest problem with using
>> free-form JSON blobs in RESTful APIs like this is that you throw away the
>> ability to evolve the API in a structured, versioned way. Instead of
>> evolving the API using microversions, instead every vendor just jams
>> whatever they feel like into the JSON blob over time. There's no way for
>> clients to know what the server will return at any given time.
>>
>> Achieving consensus on a REST API that meets the needs of a variety of
>> backend implementations is *hard work*, yes, but it's what we need to do if
>> we are to have APIs that are viewed in the industry as stable,
>> discoverable, and reliably useful.
>>
>
> ++, this is the correct way forward.
>
> Thanks,
> Kyle
>
>
>>
>> Best,
>> -jay
>>
>> Best,
>> -jay
>>
>> Hoping to get some positive feedback from API and DB lieutenants too.
>>>
>>>
>>> On Wed, Nov 4, 2015 at 1:06 PM, Salvatore Orlando
>>> <salv.orla...@gmail.com <mailto:salv.orla...@gmail.com>> wrote:
>>>
>>> Arbitrary blobs are powerful tools to circumvent limitations of an
>>> API, as well as other constraints which might be imposed for
>>> versioning or portability purposes.
>>> The parameters that should end up in such blob are typically
>>> specific for the target IPAM driver (to an extent they might even
>>> identify a specific driver to use), and therefore an API consumer
>>> who knows what backend is performing IPAM can surely leverage it.
>>>
>>> Therefore this would make a lot of sense, assuming API portability
>>> and not leaking backend details are not a concern.
>>> The Neutron team API & DB lieutenants

Re: [openstack-dev] [Neutron][IPAM] Arbitrary JSON blobs in ipam db tables

2015-11-04 Thread Shraddha Pandhe
On Wed, Nov 4, 2015 at 1:38 PM, Armando M. <arma...@gmail.com> wrote:

>
>
> On 4 November 2015 at 13:21, Shraddha Pandhe <spandhe.openst...@gmail.com>
> wrote:
>
>> Hi Salvatore,
>>
>> Thanks for the feedback. I agree with you that arbitrary JSON blobs will
>> make IPAM much more powerful. Some other projects already do things like
>> this.
>>
>> e.g. In Ironic, a node has driver_info, which is JSON. It also has an
>> 'extras' arbitrary JSON field. This allows us to put any information in
>> there that we think is important for us.
>>
>
> I personally feel that relying on json blobs is not only dangerously
> affecting portability, but it also causes us to bloat the business logic and
> forces us to be less efficient when querying/filtering data.
>

> Most importantly though, I feel it's like abdicating our responsibility to
> do a good design job.
>


How does it affect portability?

I don't think it forces us to do anything. 'Allows'? Maybe. But that can be
solved. Before making any design decisions for internal feature requests,
we should first check with the community whether it's a wider use-case. If it
is, we should collaborate and fix it upstream the right way.

I feel that it's impossible for the community to know all the use-cases.
Even if they knew, it would be impossible to incorporate all of them. I
filed a bug a few months ago about multiple gateway support for subnets:

https://bugs.launchpad.net/neutron/+bug/1464361

It was marked as 'Won't Fix' because nobody else had this use-case. Adding
and maintaining a patch to support this is super risky, as it breaks the
APIs. A JSON blob would have helped me here.

I have another use-case. For multi-IP support for Ironic, we want to divide
the IP allocation ranges into two: static IPs and extra IPs. The static IPs
are pre-configured IPs for the Ironic inventory, whereas the extra IPs are the
multi-IPs. Nobody else in the community has this use-case.

If we add our own database for internal stuff, we go back to the same
problem of allowing bad design.



> Ultimately, we should be able to identify how to model these extensions
> you're thinking of both conceptually and logically.
>

I would agree with that. If there's an effort going on in this direction,
I'll be happy to join. Without it, people like us with unique use-cases
are stuck carrying patches.



>
> I couldn't care less if other projects use it, but we ended up using it in
> Neutron too, and since I lost this battle time and time again, all I am
> left with is this rant :)
>
>
>>
>>
>> Hoping to get some positive feedback from API and DB lieutenants too.
>>
>>
>> On Wed, Nov 4, 2015 at 1:06 PM, Salvatore Orlando <salv.orla...@gmail.com
>> > wrote:
>>
>>> Arbitrary blobs are powerful tools to circumvent limitations of an
>>> API, as well as other constraints which might be imposed for versioning or
>>> portability purposes.
>>> The parameters that should end up in such blob are typically specific
>>> for the target IPAM driver (to an extent they might even identify a
>>> specific driver to use), and therefore an API consumer who knows what
>>> backend is performing IPAM can surely leverage it.
>>>
>>> Therefore this would make a lot of sense, assuming API portability and
>>> not leaking backend details are not a concern.
>>> The Neutron team API & DB lieutenants will be able to provide more input
>>> on this regard.
>>>
>>> In this case other approaches such as a vendor specific extension are
>>> not a solution - assuming your granularity level is the allocation pool;
>>> indeed allocation pools are not first-class neutron resources, and it is
>>> not therefore possible to have APIs which associate vendor specific
>>> properties to allocation pools.
>>>
>>> Salvatore
>>>
>>> On 4 November 2015 at 21:46, Shraddha Pandhe <
>>> spandhe.openst...@gmail.com> wrote:
>>>
>>>> Hi folks,
>>>>
>>>> I have a small question/suggestion about IPAM.
>>>>
>>>> With IPAM, we are allowing users to have their own IPAM drivers so that
>>>> they can manage IP allocation. The problem is, the new ipam tables in the
>>>> database have the same columns as the old tables. So, as a user, if I want
>>>> to have my own logic for ip allocation, I can't actually get any help from
>>>> the database. Whereas, if we had an arbitrary json blob in the ipam tables,
>>>> I could put any useful information/tags there that could help me with
>>>> allocation.
>>>>

Re: [openstack-dev] [Neutron][IPAM] Arbitrary JSON blobs in ipam db tables

2015-11-04 Thread Shraddha Pandhe
Hi Salvatore,

Thanks for the feedback. I agree with you that arbitrary JSON blobs will
make IPAM much more powerful. Some other projects already do things like
this.

e.g. In Ironic, a node has driver_info, which is JSON. It also has an
'extras' arbitrary JSON field. This allows us to put any information in
there that we think is important for us.
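
For illustration, the relevant fields on an Ironic node look something like
this; the driver_info keys depend on the driver, and the 'extra' contents here
are made up:

    # Illustrative Ironic node fields (the rack_id tag is hypothetical):
    node = {
        'driver_info': {'ipmi_address': '10.0.0.5',
                        'ipmi_username': 'admin'},
        'extra': {'rack_id': 'r42'},  # free-form, operator-defined
    }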


Hoping to get some positive feedback from API and DB lieutenants too.


On Wed, Nov 4, 2015 at 1:06 PM, Salvatore Orlando <salv.orla...@gmail.com>
wrote:

> Arbitrary blobs are powerful tools to circumvent limitations of an API,
> as well as other constraints which might be imposed for versioning or
> portability purposes.
> The parameters that should end up in such blob are typically specific for
> the target IPAM driver (to an extent they might even identify a specific
> driver to use), and therefore an API consumer who knows what backend is
> performing IPAM can surely leverage it.
>
> Therefore this would make a lot of sense, assuming API portability and not
> leaking backend details are not a concern.
> The Neutron team API & DB lieutenants will be able to provide more input
> on this regard.
>
> In this case other approaches such as a vendor specific extension are not
> a solution - assuming your granularity level is the allocation pool; indeed
> allocation pools are not first-class neutron resources, and it is not
> therefore possible to have APIs which associate vendor specific properties
> to allocation pools.
>
> Salvatore
>
> On 4 November 2015 at 21:46, Shraddha Pandhe <spandhe.openst...@gmail.com>
> wrote:
>
>> Hi folks,
>>
>> I have a small question/suggestion about IPAM.
>>
>> With IPAM, we are allowing users to have their own IPAM drivers so that
>> they can manage IP allocation. The problem is, the new ipam tables in the
>> database have the same columns as the old tables. So, as a user, if I want
>> to have my own logic for ip allocation, I can't actually get any help from
>> the database. Whereas, if we had an arbitrary json blob in the ipam tables,
>> I could put any useful information/tags there that could help me with
>> allocation.
>>
>> Does this make sense?
>>
>> e.g. If I want to create multiple allocation pools in a subnet and use
>> them for different purposes, I would need some sort of tag for each
>> allocation pool for identification. Right now, there is no scope for doing
>> something like that.
>>
>> Any thoughts? If there are any other way to solve the problem, please let
>> me know
>>
>>
>>
>>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][IPAM] Arbitrary JSON blobs in ipam db tables

2015-11-04 Thread Shraddha Pandhe
Hi folks,

I have a small question/suggestion about IPAM.

With IPAM, we are allowing users to have their own IPAM drivers so that
they can manage IP allocation. The problem is, the new ipam tables in the
database have the same columns as the old tables. So, as a user, if I want
to have my own logic for IP allocation, I can't actually get any help from
the database. Whereas, if we had an arbitrary JSON blob in the ipam tables,
I could put any useful information/tags there that could help me with
allocation.

Does this make sense?

e.g. If I want to create multiple allocation pools in a subnet and use them
for different purposes, I would need some sort of tag for each allocation
pool for identification. Right now, there is no scope for doing something
like that.

Any thoughts? If there is any other way to solve the problem, please let
me know.
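
A minimal illustration of the tagging idea: one extra column on the existing
ipamallocationpools table. This is a hypothetical migration sketch, not a
proposed patch:

    # Hypothetical alembic migration adding a tag to IPAM allocation pools.
    import sqlalchemy as sa
    from alembic import op

    def upgrade():
        # 'tag' would let a driver label pools (e.g. 'static' vs 'extra')
        op.add_column('ipamallocationpools',
                      sa.Column('tag', sa.String(64), nullable=True))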
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPAM] Arbitrary JSON blobs in ipam db tables

2015-11-04 Thread Shraddha Pandhe
On Wed, Nov 4, 2015 at 3:12 PM, Kevin Benton <blak...@gmail.com> wrote:

> >If we add our own database for internal stuff, we go back to the same
> problem of allowing bad design.
>
> I'm not sure I understand what you are saying here. A JSON blob that only
> one driver knows how to interpret is worse than a vendor table.
>
I am only talking about the IPAM tables here. The reference implementation
doesn't need to play with the JSON blob at all; rather, I would say it
shouldn't. It can be left up to the vendors/users to manage that blob
responsibly. I can create my own database and point my IPAM module to it,
but then the IPAM tables are practically useless for me. The only reason for
suggesting the blob is flexibility, which is the main reason for the
pluggability of IPAM.


> They both are specific to one driver but at least with a vendor table you
> can have DB migrations, integrity, column queries, etc. Additionally, the
> vendor table with extra features exposed via an API extension makes it more
> clear to the API caller what is vendor specific.
>

I agree that that's a huge advantage of having a db. But sometimes it may
not be absolutely necessary to have an extra DB.

e.g. For multiple gateway support, a separate database would probably add
more overhead than required. All I want is to be able to fetch those IPs.

The user can make a responsible decision about whether to use the blob or the
database, depending on the requirement, if they have the flexibility.

> Can you elaborate on what you mean by bad design?
>
When we are working on internal features, we have to follow different
timelines. Having an arbitrary blob can sometimes make us use it by
default, especially under pressing deadlines, instead of consulting with a
broader audience and finding the right solution.



> On Nov 4, 2015 3:58 PM, "Shraddha Pandhe" <spandhe.openst...@gmail.com>
> wrote:
>
>>
>>
>> On Wed, Nov 4, 2015 at 1:38 PM, Armando M. <arma...@gmail.com> wrote:
>>
>>>
>>>
>>> On 4 November 2015 at 13:21, Shraddha Pandhe <
>>> spandhe.openst...@gmail.com> wrote:
>>>
>>>> Hi Salvatore,
>>>>
>>>> Thanks for the feedback. I agree with you that arbitrary JSON blobs
>>>> will make IPAM much more powerful. Some other projects already do things
>>>> like this.
>>>>
>>>> e.g. In Ironic, a node has driver_info, which is JSON. It also has an
>>>> 'extras' arbitrary JSON field. This allows us to put any information in
>>>> there that we think is important for us.
>>>>
>>>
>>> I personally feel that relying on JSON blobs is not only dangerously
>>> affecting portability, but it also causes us to bloat the business logic
>>> and forces us to be less efficient when querying/filtering data.
>>>
>>
>>> Most importantly though, I feel it's like abdicating our responsibility
>>> to do a good design job.
>>>
>>
>>
>> How does it affect portability?
>>
>> I don't think it forces us to do anything. 'Allows'? Maybe. But that can
>> be solved. Before making any design decisions for internal
>> feature-requests, we should first check with the community if it's a wider
>> use-case. If it is a wider use-case, we should collaborate and fix it
>> upstream the right way.
>>
>> I feel that it's impossible for the community to know all the use-cases.
>> Even if they knew, it would be impossible to incorporate all of them. I
>> filed a bug a few months ago about multiple gateway support for subnets.
>>
>> https://bugs.launchpad.net/neutron/+bug/1464361
>>
>> It was marked as 'Won't fix' because nobody else had this use-case. Adding
>> and maintaining a patch to support this is super risky as it breaks the
>> APIs. A JSON blob would have helped me here.
>>
>> I have another use-case. For multi-IP support for Ironic, we want to
>> divide the IP allocation ranges into two: static IPs and extra IPs. The
>> static IPs are pre-configured IPs for the Ironic inventory, whereas the
>> extra IPs are the multi-IPs. Nobody else in the community has this
>> use-case.
>>
>> If we add our own database for internal stuff, we go back to the same
>> problem of allowing bad design.
>>
>>
>>
>>> Ultimately, we should be able to identify how to model these extensions
>>> you're thinking of both conceptually and logically.
>>>
>>
>> I would agree with that. If there's an effort going on in this direction,
>> I'll be happy to join. Without this, people like us with unique use-cases
>> are stuck with carrying patches.
>>
>>
>>

[openstack-dev] [Neutron] Multi-ip use cases

2015-11-02 Thread Shraddha Pandhe
Hi folks,

James Penick from Yahoo! presented a talk on Thursday about how Yahoo uses
Neutron for Ironic. I would like to follow up on one particular use case
that was discussed: Multi-IP support.

Here's our use-case for Multi-ip:

For Ironic, we want the user to specify the number of IPs at boot. We then
want the scheduler to find a network with sufficient IPs and pick a host in
that subnet (note: static IPs for baremetal). When allocating the network,
we want to assign all requested IPs from the same subnet as the host's
static IP. Also, we don't want to configure those IPs on the host; we only
want to display them in "nova show".

So basically we will create only one port for the host, and append all the
requested IPs to the port's fixed_ips list field.

Questions:
1. How do most people use the fixed_ips field on the port? What are the
different ways you can populate multiple IPs in fixed_ips? One way I know
of: using neutron-client to create a port, you can specify --fixed-ip as
many times as you want, and that will append fixed_ips to the port (see the
sketch below). Any other way?

2. Is anybody else using multi-IP?
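
For example, with python-neutronclient (the auth values and UUIDs below are
placeholders), one port with several fixed_ips entries can be created like
this:

from neutronclient.v2_0 import client

neutron = client.Client(username='admin',
                        password='secret',
                        tenant_name='admin',
                        auth_url='http://controller:5000/v2.0')

# One port, several fixed_ips entries; omitting 'ip_address' lets
# Neutron's IPAM pick a free address from the given subnet.
port = neutron.create_port({
    'port': {
        'network_id': 'NETWORK_UUID',
        'fixed_ips': [
            {'subnet_id': 'SUBNET_UUID'},
            {'subnet_id': 'SUBNET_UUID'},
            {'subnet_id': 'SUBNET_UUID'},
        ],
    }
})
print(port['port']['fixed_ips'])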
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] New dhcp provider using isc-dhcp-server

2015-09-24 Thread Shraddha Pandhe
Hi Ionut,

I am working on a similar effort: adding a driver for neutron-dhcp-agent [1]
& [2]. Is it similar to what you are trying to do? My approach doesn't need
any extra database. There are two ways to achieve HA in my case:

1. Run multiple neutron-dhcp-agents and set dhcp_agents_per_network > 1, so
that more than one DHCP server has the config needed to serve the DHCP
request (see the config snippet below).
2. ISC DHCPD itself has some HA support where you can set up failover
peers. But I haven't tried that yet.
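
For reference, the scheduling option lives in neutron.conf:

[DEFAULT]
# Schedule each network onto two DHCP agents, so that more than one
# server holds the configuration needed to answer its DHCP requests.
dhcp_agents_per_network = 2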

I have this driver fully implemented and working here at Yahoo!. I am
working on making it more generic and upstreaming it. Please let me know if
your effort is similar so that we can consider working together on a single
effort.



[1] https://review.openstack.org/#/c/212836/
[2] https://bugs.launchpad.net/neutron/+bug/1464793

On Thu, Sep 24, 2015 at 9:40 AM, Dmitry Tantsur 
wrote:

> 2015-09-24 17:38 GMT+02:00 Ionut Balutoiu <
> ibalut...@cloudbasesolutions.com>:
>
>> Hello, guys!
>>
>> I'm starting a new implementation for a DHCP provider,
>> mainly to be used for Ironic standalone. I'm planning to
>> push it upstream. I'm using the isc-dhcp-server service from
>> Linux. So, when an Ironic node is started, the ironic-conductor
>> writes the MAC-IP reservation for that node to the config file and
>> reloads the DHCP service. I'm using a SQL database as a backend to store
>> the DHCP reservations (I think it is cleaner and it should allow us
>> to have more than one DHCP server). What do you think about my
>> implementation?
>>
>
> What you describe slightly resembles how ironic-inspector works. It needs
> to serve DHCP to nodes that are NOT known to Ironic, so it manages iptables
> rules giving (or not giving) access to the dnsmasq instance. I wonder if we
> may find some common code between these 2, but I definitely don't want to
> reinvent Neutron :) I'll think about it after seeing your spec and/or code,
> I'm already looking forward to them!
>
>
>> Also, I'm not sure how can I scale this out to provide HA/failover.
>> Do you guys have any idea ?
>>
>> Regards,
>> Ionut Balutoiu
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> --
> --
> -- Dmitry Tantsur
> --
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Issue with neutron-dhcp-agent not recovering known ports cache after restart

2015-07-28 Thread Shraddha Pandhe
Hi,

I started working on this patch for bug [0], but it doesn't seem as
straightforward as Kevin and I initially thought. Here's why:

Initially we thought that a simple self.cache.push_port will work as
follows:

In __init__ [1], in _populate_network_cache [2], get all existing active
networks from neutron using plugin_rpc similar to [3]. Then, for each of
those networks, go through all the subnets and ports and add them in [4]
before updating the cache.

But I realized that this will not work because, if we try to get all active
subnets and ports and populate those in cache, we may miss the delta.
Consider following scenario:

Cache state before agent stopped:
networks 1
subnets 4
ports 10

While the agent was down:
networks 1
subnets 6
ports 12

Now, when the agent comes back up, if we follow the above method, the
network cache will immediately populate itself to the new state, without
actually creating those subnets and ports in the DHCP config. So if we want
to follow this route, we need a mechanism to get known_subnet_ids and
known_port_ids from the dhcp driver, just like we get known_network_ids at
[5][6]. To me, that seems too much work for what it does.
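
Roughly, the naive repopulation would look like this (pseudocode; the names
are only loosely modeled on the agent code, and the comment marks where the
delta is lost):

def _populate_network_cache(self):
    # Naive approach: rebuild the cache straight from the server's
    # current view of the world.
    for network in self.plugin_rpc.get_active_networks_info():
        # Problem: subnets/ports created while the agent was down are
        # recorded as "known" here, so sync_state sees no diff and
        # never drives reload_allocations for them.
        self.cache.put(network)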

Any better ideas to solve this problem?



[0] https://bugs.launchpad.net/neutron/+bug/1470612
[1]
https://github.com/openstack/neutron/blob/master/neutron/agent/dhcp/agent.py#L54
[2]
https://github.com/openstack/neutron/blob/master/neutron/agent/dhcp/agent.py#L75
[3]
https://github.com/openstack/neutron/blob/master/neutron/agent/dhcp/agent.py#L156
[4]
https://github.com/openstack/neutron/blob/master/neutron/agent/dhcp/agent.py#L82-L85
[5]
https://github.com/openstack/neutron/blob/master/neutron/agent/dhcp/agent.py#L78
[6]
https://github.com/openstack/neutron/blob/master/neutron/agent/linux/dhcp.py#L293-L302

On Wed, Jul 1, 2015 at 11:28 AM, Shraddha Pandhe 
spandhe.openst...@gmail.com wrote:

 Hi,

 I had a discussion about this with Kevin Benton on IRC. Filed a bug:
 https://bugs.launchpad.net/neutron/+bug/1470612

 Thanks!


 On Wed, Jul 1, 2015 at 11:03 AM, Shraddha Pandhe 
 spandhe.openst...@gmail.com wrote:

 Hi Shihan,

 I think the problem is slightly different. Does your patch take care of
 the scenario where a port was deleted  AFTER agent restart (not when agent
 was down)?

 My problem is that, when the agent restarts, it loses its previous
 network cache. As soon as the agent starts, as part of __init__, it
 rebuilds that cache [1]. But it does not put the ports in there [2].

 In sync_state, Neutron tries to enable/disable networks, by checking the
 diff between Neutron's state and its own network cache that it just built
 [3]. It enables any NEW networks and disables any DELETED networks, but it
 does nothing to PREVIOUSLY KNOWN NETWORKS. So those subnets and ports
 remain empty lists.

 Now, if such a port is deleted, [4] will return None and the port will
 never get deleted from the config.

 [1]
 https://github.com/openstack/neutron/blob/master/neutron/agent/dhcp/agent.py#L68
 [2]
 https://github.com/openstack/neutron/blob/master/neutron/agent/dhcp/agent.py#L79-L86
 [3]
 https://github.com/openstack/neutron/blob/master/neutron/agent/dhcp/agent.py#L154-L171
 [4]
 https://github.com/openstack/neutron/blob/master/neutron/agent/dhcp/agent.py#L349




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Issue with neutron-dhcp-agent not recovering known ports cache after restart

2015-07-28 Thread Shraddha Pandhe
Hi,

I have added a few more questions to the bug [1]. Please confirm my
understanding.


[1] https://bugs.launchpad.net/neutron/+bug/1470612/comments/12

On Tue, Jul 28, 2015 at 12:14 PM, Shraddha Pandhe 
spandhe.openst...@gmail.com wrote:

 Hi,

 I started working on this patch for bug [0], but it doesn't seem as
 straightforward as Kevin and I initially thought. Here's why:

 Initially we thought that a simple self.cache.push_port will work as
 follows:

 In __init__ [1], in _populate_network_cache [2], get all existing active
 networks from neutron using plugin_rpc similar to [3]. Then, for each of
 those networks, go through all the subnets and ports and add them in [4]
 before updating the cache.

 But I realized that this will not work because, if we try to get all
 active subnets and ports and populate those in cache, we may miss the
 delta. Consider following scenario:

 Cache state before agent stopped:
 networks 1
 subnets 4
 ports 10

 While the agent was down:
 networks 1
 subnets 6
 ports 12

 Now, when the agent comes back up, if we follow the above method, the
 network cache will immediately populate itself to the new state, without
 actually creating those subnets and ports in dhcp config. So if we want to
 follow this route, we need a mechanism to get known_subnet_ids and
 known_port_ids from dhcp driver, just like we get known_network_ids at [5]
 [6]. To me, that seems too much work for what it does.

 Any better ideas to solve this problem?



 [0] https://bugs.launchpad.net/neutron/+bug/1470612
 [1]
 https://github.com/openstack/neutron/blob/master/neutron/agent/dhcp/agent.py#L54
 [2]
 https://github.com/openstack/neutron/blob/master/neutron/agent/dhcp/agent.py#L75
 [3]
 https://github.com/openstack/neutron/blob/master/neutron/agent/dhcp/agent.py#L156
 [4]
 https://github.com/openstack/neutron/blob/master/neutron/agent/dhcp/agent.py#L82-L85
 [5]
 https://github.com/openstack/neutron/blob/master/neutron/agent/dhcp/agent.py#L78
 [6]
 https://github.com/openstack/neutron/blob/master/neutron/agent/linux/dhcp.py#L293-L302

 On Wed, Jul 1, 2015 at 11:28 AM, Shraddha Pandhe 
 spandhe.openst...@gmail.com wrote:

 Hi,

 I had a discussion about this with Kevin Benton on IRC. Filed a bug:
 https://bugs.launchpad.net/neutron/+bug/1470612

 Thanks!


 On Wed, Jul 1, 2015 at 11:03 AM, Shraddha Pandhe 
 spandhe.openst...@gmail.com wrote:

 Hi Shihan,

 I think the problem is slightly different. Does your patch take care of
 the scenario where a port was deleted  AFTER agent restart (not when agent
 was down)?

 My problem is that, when the agent restarts, it loses its previous
 network cache. As soon as the agent starts, as part of __init__, it
 rebuilds that cache [1]. But it does not put the ports in there [2].

 In sync_state, Neutron tries to enable/disable networks, by checking the
 diff between Neutron's state and its own network cache that it just built
 [3]. It enables any NEW networks and disables any DELETED networks, but it
 does nothing to PREVIOUSLY KNOWN NETWORKS. So those subnets and ports
 remain empty lists.

 Now, if such a port is deleted, [4] will return None and the port will
 never get deleted from the config.

 [1]
 https://github.com/openstack/neutron/blob/master/neutron/agent/dhcp/agent.py#L68
 [2]
 https://github.com/openstack/neutron/blob/master/neutron/agent/dhcp/agent.py#L79-L86
 [3]
 https://github.com/openstack/neutron/blob/master/neutron/agent/dhcp/agent.py#L154-L171
 [4]
 https://github.com/openstack/neutron/blob/master/neutron/agent/dhcp/agent.py#L349





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Issue with neutron-dhcp-agent not recovering known ports cache after restart

2015-07-01 Thread Shraddha Pandhe
Hi Shihan,

I think the problem is slightly different. Does your patch take care of the
scenario where a port was deleted  AFTER agent restart (not when agent was
down)?

My problem is that, when the agent restarts, it loses its previous network
cache. As soon as the agent starts, as part of __init__, it rebuilds that
cache [1]. But it does not put the ports in there [2].

In sync_state, Neutron tries to enable/disable networks, by checking the
diff between Neutron's state and its own network cache that it just built
[3]. It enables any NEW networks and disables any DELETED networks, but it
does nothing to PREVIOUSLY KNOWN NETWORKS. So those subnets and ports
remain empty lists.

Now, if such a port is deleted, [4] will return None and the port will
never get deleted from the config.

[1]
https://github.com/openstack/neutron/blob/master/neutron/agent/dhcp/agent.py#L68
[2]
https://github.com/openstack/neutron/blob/master/neutron/agent/dhcp/agent.py#L79-L86
[3]
https://github.com/openstack/neutron/blob/master/neutron/agent/dhcp/agent.py#L154-L171
[4]
https://github.com/openstack/neutron/blob/master/neutron/agent/dhcp/agent.py#L349
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Issue with neutron-dhcp-agent not recovering known ports cache after restart

2015-07-01 Thread Shraddha Pandhe
Hi,

I had a discussion about this with Kevin Benton on IRC. Filed a bug:
https://bugs.launchpad.net/neutron/+bug/1470612

Thanks!


On Wed, Jul 1, 2015 at 11:03 AM, Shraddha Pandhe 
spandhe.openst...@gmail.com wrote:

 Hi Shihan,

 I think the problem is slightly different. Does your patch take care of
 the scenario where a port was deleted  AFTER agent restart (not when agent
 was down)?

 My problem is that, when the agent restarts, it loses its previous network
 cache. As soon as the agent starts, as part of __init__, it rebuilds that
 cache [1]. But it does not put the ports in there [2].

 In sync_state, Neutron tries to enable/disable networks, by checking the
 diff between Neutron's state and its own network cache that it just built
 [3]. It enables any NEW networks and disables any DELETED networks, but it
 does nothing to PREVIOUSLY KNOWN NETWORKS. So those subnets and ports
 remain empty lists.

 Now, if such a port is deleted, [4] will return None and the port will
 never get deleted from the config.

 [1]
 https://github.com/openstack/neutron/blob/master/neutron/agent/dhcp/agent.py#L68
 [2]
 https://github.com/openstack/neutron/blob/master/neutron/agent/dhcp/agent.py#L79-L86
 [3]
 https://github.com/openstack/neutron/blob/master/neutron/agent/dhcp/agent.py#L154-L171
 [4]
 https://github.com/openstack/neutron/blob/master/neutron/agent/dhcp/agent.py#L349



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Issue with neutron-dhcp-agent not recovering known ports cache after restart

2015-06-30 Thread Shraddha Pandhe
Hi folks..

I have a question about the neutron dhcp agent restart scenario. It seems
like, when the agent restarts, it recovers the known network IDs in its
cache, but we don't recover the known ports [1].

So if a port that was present before the agent restarted is deleted after
the restart, the agent won't have it in its cache. The port here [2] will
be None, so the port will actually never get deleted.

The same problem will happen if a port is updated. Has anyone seen these
issues? Am I missing something?

[1]
https://github.com/openstack/neutron/blob/master/neutron/agent/dhcp/agent.py#L82-L87
[2]
https://github.com/openstack/neutron/blob/master/neutron/agent/dhcp/agent.py#L349
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Networking] Support for multiple gateways in neutron/nova-net subnets for provider networks

2015-06-15 Thread Shraddha Pandhe
Hi Assaf, Kevin. I know we talked about this on IRC, but I just want to
close this question on the thread, for the rest of the community.

Yes, Assaf is correct.

Either via DHCP or Config Drive, we will make sure that different
VMs/baremetal nodes get different gateway addresses.




On Fri, Jun 12, 2015 at 2:34 PM, Assaf Muller amul...@redhat.com wrote:

 I think Shraddha was talking about the gateway IP the DHCP server will
 respond
 with. Different VMs will get different gateways.

 - Original Message -
  That logic is contained in the virtual machine. We have no control over
 that.
 
  On Thu, Jun 11, 2015 at 3:34 PM, Shraddha Pandhe 
  spandhe.openst...@gmail.com  wrote:
 
 
 
  The idea is to round-robin between gateways by using some sort of mod
  operation
 
  So logically it can look something like:
 
   idx = ip % len(gateways)
  gateway = gateways[idx]
 
 
  This is just one idea. I am open to more ideas.
 
 
 
 
  On Thu, Jun 11, 2015 at 3:10 PM, Kevin Benton  blak...@gmail.com 
 wrote:
 
 
 
 
  What gateway address do you give to regular clients via dhcp when you
 have
  multiple?
 
 
  On Jun 11, 2015 12:29 PM, Shraddha Pandhe 
 spandhe.openst...@gmail.com 
  wrote:
  
   Hi,
   Currently, the Subnets in Neutron and Nova-Network only support one
   gateway. For provider networks in large data centers, quite often, the
   architecture is such that multiple gateways are configured per
   subnet. These multiple gateways are typically spread across backplanes
 so
   that the production traffic can be load-balanced between backplanes.
   This is just my use case for supporting multiple gateways, but other
 folks
   might have more use cases as well and also want to take the community's
   opinion about this feature. Is this something that's going to help a
 lot
   of users?
   I want to open up a discussion on this topic and figure out the best
 way to
   handle this.
    1. Should this be done in the same way as dns-nameserver, with a separate
   table with two columns: gateway_ip, subnet_id.
   2. Should Gateway field be converted to a List instead of String?
   I have also opened a bug for Neutron here:
   https://bugs.launchpad.net/neutron/+bug/1464361
  
  
  
 __
   OpenStack Development Mailing List (not for usage questions)
   Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
 
 
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
  --
  Kevin Benton
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Networking] Support for multiple gateways in neutron/nova-net subnets for provider networks

2015-06-12 Thread Shraddha Pandhe
Hi everyone,

Any thoughts on supporting multiple gateway IPs for subnets?





On Thu, Jun 11, 2015 at 3:34 PM, Shraddha Pandhe 
spandhe.openst...@gmail.com wrote:

 The idea is to round-robin between gateways by using some sort of mod
 operation

 So logically it can look something like:

  idx = ip % len(gateways)
 gateway = gateways[idx]


 This is just one idea. I am open to more ideas.




 On Thu, Jun 11, 2015 at 3:10 PM, Kevin Benton blak...@gmail.com wrote:

 What gateway address do you give to regular clients via dhcp when you
 have multiple?

 On Jun 11, 2015 12:29 PM, Shraddha Pandhe spandhe.openst...@gmail.com
 wrote:
 
  Hi,
  Currently, the Subnets in Neutron and Nova-Network only support one
 gateway. For provider networks in large data centers, quite often, the
 architecture is such that multiple gateways are configured per
 subnet. These multiple gateways are typically spread across backplanes so
 that the production traffic can be load-balanced between backplanes.
  This is just my use case for supporting multiple gateways, but other
 folks might have more use cases as well and also want to take the
 community's opinion about this feature. Is this something that's going to
 help a lot of users?
  I want to open up a discussion on this topic and figure out the best
 way to handle this.
  1. Should this be done in the same way as dns-nameserver, with a separate
 table with two columns: gateway_ip, subnet_id.
  2. Should Gateway field be converted to a List instead of String?
  I have also opened a bug  for Neutron here:
 https://bugs.launchpad.net/neutron/+bug/1464361
 
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Networking] Support for multiple gateways in neutron/nova-net subnets for provider networks

2015-06-11 Thread Shraddha Pandhe
The idea is to round-robin between gateways by using some sort of mod
operation

So logically it can look something like:

idx = ip % len(gateways)
gateway = gateways[idx]


This is just one idea. I am open to more ideas.
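
A minimal runnable sketch of this idea, assuming gateways are plain IPv4
strings and using the client's address itself as the hash key:

import ipaddress

def pick_gateway(ip, gateways):
    # Deterministically map a client IP to one of the subnet's gateways:
    # convert the address to an integer, then take it modulo the number
    # of configured gateways.
    return gateways[int(ipaddress.ip_address(ip)) % len(gateways)]

gateways = ['10.0.0.1', '10.0.0.2', '10.0.0.3']
print(pick_gateway('10.0.0.57', gateways))  # -> 10.0.0.2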




On Thu, Jun 11, 2015 at 3:10 PM, Kevin Benton blak...@gmail.com wrote:

 What gateway address do you give to regular clients via dhcp when you have
 multiple?

 On Jun 11, 2015 12:29 PM, Shraddha Pandhe spandhe.openst...@gmail.com
 wrote:
 
  Hi,
  Currently, the Subnets in Neutron and Nova-Network only support one
 gateway. For provider networks in large data centers, quite often, the
 architecture is such that multiple gateways are configured per
 subnet. These multiple gateways are typically spread across backplanes so
 that the production traffic can be load-balanced between backplanes.
  This is just my use case for supporting multiple gateways, but other
 folks might have more use cases as well and also want to take the
 community's opinion about this feature. Is this something that's going to
 help a lot of users?
  I want to open up a discussion on this topic and figure out the best way
 to handle this.
  1. Should this be done in the same way as dns-nameserver, with a separate
 table with two columns: gateway_ip, subnet_id.
  2. Should Gateway field be converted to a List instead of String?
  I have also opened a bug  for Neutron here:
 https://bugs.launchpad.net/neutron/+bug/1464361
 
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Networking] Support for multiple gateways in neutron/nova-net subnets for provider networks

2015-06-11 Thread Shraddha Pandhe
Hi,
Currently, the Subnets in Neutron and Nova-Network only support one
gateway. For provider networks in large data centers, quite often, the
architecture is such that multiple gateways are configured per
subnet. These multiple gateways are typically spread across backplanes so
that the production traffic can be load-balanced between backplanes.
This is just my use case for supporting multiple gateways, but other folks
might have more use cases as well, and I also want to get the community's
opinion about this feature. Is this something that's going to help a lot of
users?
I want to open up a discussion on this topic and figure out the best way to
handle this.
1. Should this be done in the same way as dns-nameserver, with a separate
table with two columns: gateway_ip and subnet_id? (A sketch of this option
follows below.)
2. Should the gateway field be converted to a list instead of a string?
I have also opened a bug  for Neutron here:
https://bugs.launchpad.net/neutron/+bug/1464361
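
A sketch of option 1 (illustrative only; the table and class names are
assumed, mirroring how DNS nameservers are stored one row per subnet):

import sqlalchemy as sa
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class SubnetGateway(Base):
    # Assumed table name; one row per (gateway, subnet) pair, so a
    # subnet can carry any number of gateway IPs.
    __tablename__ = 'subnetgateways'

    gateway_ip = sa.Column(sa.String(64), primary_key=True)
    # In practice this would be a foreign key to subnets.id.
    subnet_id = sa.Column(sa.String(36), primary_key=True)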
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [openstack-operators][neutron[dhcp][dnsmask]: duplicate entries in addn_hosts causing no IP allocation

2015-06-09 Thread Shraddha Pandhe
Hi Daniel,
I see the following in your command:
--dhcp-range=set:tag0,192.168.110.0,static,86400s 
--dhcp-range=set:tag1,192.168.111.0,static,86400s

Is this expected? Was this command generated by the agent itself, or was 
Dnsmasq manually started?



 



On Tuesday, June 9, 2015 4:41 AM, Kevin Benton blak...@gmail.com wrote:

> Just to be sure, I assume we're focussing here on the issue that Daniel
> reported
Yes.

> To be clear, though, what code are you trying to reproduce on? Current master?
I was trying on 2014.1.3, which is the version I understand to be on Fuel 5.1.1.

> I'm not clear whether that would qualify as 'concurrent', in the sense that
> you have in mind.
It doesn't look like it based on the pseudocode. I was thinking of a condition
where a port is deleted very quickly after it was created. Is that possible
with your test? If not, then my theory about out-of-order notifications might
not be any good.
On Tue, Jun 9, 2015 at 3:34 AM, Neil Jerram neil.jer...@metaswitch.com wrote:

On 09/06/15 01:15, Kevin Benton wrote:

I'm having difficulty reproducing the issue. The bug that Neil
referenced (https://bugs.launchpad.net/neutron/+bug/1192381) looks like
it was in Icehouse well before the 2014.1.3 release that looks like Fuel
5.1.1 is using.


Just to be sure, I assume we're focussing here on the issue that Daniel
reported (IP appears twice in Dnsmasq config), and for which I described a
possible corollary (Dnsmasq config size keeps growing), and NOT on the
"Another DHCP agent" problem that I mentioned below. :-)

BTW, now that I've reviewed the history of when my team saw this, I can say 
that it was actually first reported to us with the 'IP appears twice in Dnsmasq 
config' symptom - i.e. exactly the same as Daniel's case. The fact of the 
Dnsmasq config increasing in size was noticed later.


I tried setting the agent report interval to something higher than the
downtime to make it seem like the agent is failing sporadically to the
server, but it's not impacting the notifications.


Makes sense - that's the effect of the fix for 1192381.

To be clear, though, what code are you trying to reproduce on?  Current master?


Neil, does your testing where you saw something similar have a lot of
concurrent creation/deletion?


It was a test of continuously deleting and creating VMs, with this pseudocode:

thread_pool = new_thread_pool(size=30)
for x in range(0, 30):
    thread_pool.submit(create_vm)
thread_pool.wait_for_all_threads_to_complete()
while True:
    time.sleep(5)
    for x in range(0, int(random.random() * 5)):
        thread_pool.submit(randomly_delete_a_vm_and_create_a_new_one)

I'm not clear whether that would qualify as 'concurrent', in the sense that you 
have in mind.

Regards,
        Neil


On Mon, Jun 8, 2015 at 12:21 PM, Andrew Woodward awoodw...@mirantis.com
mailto:awoodw...@mirantis.com wrote:

    Daniel,

    This sounds familiar, see if this matches [1]. IIRC, there was
    another issue like this that was might already address this in the
    updates into Fuel 5.1.2 packages repo [2]. You can either update the
    neutron packages from [2] Or try one of community builds for 5.1.2
    [3]. If this doesn't resolve the issue, open a bug against MOS dev [4].

    [1] https://bugs.launchpad.net/bugs/1295715
    [2] http://fuel-repository.mirantis.com/fwm/5.1.2/ubuntu/pool/main/
    [3] https://ci.fuel-infra.org/
    [4] https://bugs.launchpad.net/mos/+filebug

    On Mon, Jun 8, 2015 at 10:15 AM Neil Jerram
    neil.jer...@metaswitch.com mailto:neil.jer...@metaswitch.com wrote:

        Two further thoughts on this:

        1. Another DHCP agent problem that my team noticed is that its
        call_driver('reload_allocations') processing takes a bit of time
        (to regenerate the Dnsmasq config files, and to spawn a shell that
        sends a HUP signal) - enough so that if there is a fast steady
        rate of port-create and port-delete notifications coming from the
        Neutron server, these can build up in DHCPAgent's RPC queue, and
        then they still only get dispatched one at a time. So the queue
        and the time delay become longer and longer.

        I have a fix pending for this, which uses an extra thread to
        read those
        notifications off the RPC queue onto an internal queue, and then
        batches
        the call_driver('reload_allocations') processing when there is a
        contiguous sequence of such notifications - i.e. only does the
        config
        regeneration and HUP once, instead of lots of times.
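
        A minimal sketch of that batching idea (illustrative only;
        wait_for_next_rpc_notification and call_driver are placeholders
        for the agent's real interfaces, not actual Neutron APIs):

        import queue
        import threading

        notifications = queue.Queue()

        def rpc_reader():
            # Extra thread: drain RPC notifications onto an internal
            # queue as fast as they arrive, so they don't back up in
            # the RPC layer.
            while True:
                notifications.put(wait_for_next_rpc_notification())

        def batch_worker():
            while True:
                batch = [notifications.get()]      # block for the first
                while True:                        # coalesce the backlog
                    try:
                        batch.append(notifications.get_nowait())
                    except queue.Empty:
                        break
                # One config regeneration + HUP per affected network,
                # however many notifications were queued.
                for net_id in {n['network_id'] for n in batch}:
                    call_driver('reload_allocations', net_id)

        threading.Thread(target=rpc_reader, daemon=True).start()
        threading.Thread(target=batch_worker, daemon=True).start()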

        I don't think this is directly related to what you are seeing - but
        perhaps there actually is some link that I am missing.

        2. There is an interesting and vaguely similar thread currently
        being
        discussed about the L3 agent (subject L3 agent rescheduling
        issue) -
        about possible RPC/threading issues between the agent and the
        Neutron
    


[openstack-dev] Need advice - changing DB schema (nova-network)

2014-03-06 Thread Shraddha Pandhe
Hi folks,



I am working on nova-network in Havana. I have a unique use case where I
need to add duplicate VLANs in nova-network. I am trying to add multiple
networks in nova-network with the same VLAN ID. The reason is as follows:

The cluster that I have has an L3 backplane. We have been given a limitation
that, per rack, we have a few networks with unique VLAN tags, and the VLAN
tags repeat in every rack. So now, when I add networks in nova-network, I
need to add these networks in the same VLAN.


nova-network currently has a unique constraint on (vlan, deleted). So, to
allow for duplicate VLANs in the DB, I am removing that unique constraint.
I am modifying the migrate scripts to make sure that the constraint doesn't
get reapplied on db_sync. I am also modifying the unit tests to reverse
their sense (make sure that duplicate VLANs are allowed).

After making these changes, I have verified the following scenarios:
1. Add networks with duplicate VLANs
2. Update networks with duplicate VLANs
3. db_sync doesn't reinstate the constraint.
4. VM comes up properly and I can ping it. 

Since this is a DB schema change, I am a bit skeptical about it, and hence, 
looking for expert advice.

1. How risky is it to make a DB schema change?
2. I know that I have to look out for any new migration scripts that touch
that UC/index. Anything else that I need to worry about w.r.t. migration
scripts?
3. Any more scenarios I should be testing?
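
For reference, a minimal sqlalchemy-migrate sketch of the change I am making
(the constraint name is assumed; the real name should be checked in the
schema before using anything like this):

from migrate import UniqueConstraint
from sqlalchemy import MetaData, Table

CONSTRAINT = 'uniq_networks0vlan0deleted'  # assumed name

def upgrade(migrate_engine):
    meta = MetaData(bind=migrate_engine)
    networks = Table('networks', meta, autoload=True)
    # Drop the (vlan, deleted) unique constraint so duplicate VLAN IDs
    # are allowed.
    UniqueConstraint('vlan', 'deleted', table=networks,
                     name=CONSTRAINT).drop()

def downgrade(migrate_engine):
    meta = MetaData(bind=migrate_engine)
    networks = Table('networks', meta, autoload=True)
    UniqueConstraint('vlan', 'deleted', table=networks,
                     name=CONSTRAINT).create()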

Thank you in advance!
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev