Another thing to consider is how long it takes to download into forwarding
hardware. Forwarding hardware is optimized for forwarding, not programming,
so the programming has to wait for time slots when forwarding is not using
the memory. When you do smart aggregation, a single changed route could
cause a large part of the aggregated set to be recomputed and reprogrammed.
Good news, bad news.
With an inefficient bash script on an inefficient platform, a 120k list
processes in less than 15 minutes.
Thus far, the best I have is less than 10% reduction with barely
acceptable aggressiveness. The distribution is too varied, or the level of
aggressiveness has to be pushed beyond what is acceptable.
> Mark Leonard wrote:
> Your processing time for 5k IPs should be measured in seconds (ie: less than
> one) rather than minutes on any modern core.
I agree. I am surprised by the minutes thing as well.
> Based on your pseudocode (sort -n | uniq) I get the impression that you're
> using BASH, which would explain the slowness.
At this time I am just trying to get an idea of whether the whole exercise
is worth it: whether the processing time is feasible for 5k, 50k, 100k,
200k, and whether the results reduce the count measurably at acceptable
collateral levels.
Because RTBH scaling to 100k is one thing. And from there it could grow.
You could modify a radix tree to include a consolidation function and a
resulting confidence. Then walk the nodes of the tree, check whether each
subtree meets the requirements for consolidation and, if so, prune it and
record the confidence. You would need to re-run the consolidation from the
original data.
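A minimal sketch of that radix-tree idea in Python, using only the standard
library; the names (TrieNode, insert, consolidate) and the min_fill/min_plen
thresholds are illustrative assumptions, not anything from the original post:

# Sketch of the consolidation-with-confidence idea described above.
# Input addresses are assumed to be de-duplicated (e.g. sort -n | uniq).
import ipaddress

class TrieNode:
    __slots__ = ("children", "count")
    def __init__(self):
        self.children = [None, None]   # 0-bit and 1-bit subtrees
        self.count = 0                 # host addresses under this node

def insert(root, addr):
    """Insert one IPv4 host address into the binary trie."""
    node = root
    node.count += 1
    value = int(ipaddress.IPv4Address(addr))
    for bit in range(31, -1, -1):
        b = (value >> bit) & 1
        if node.children[b] is None:
            node.children[b] = TrieNode()
        node = node.children[b]
        node.count += 1

def consolidate(node, prefix=0, plen=0, min_fill=0.625, min_plen=24):
    """Yield (network, confidence) pairs.

    A subtree is pruned into one covering prefix as soon as its fill
    ratio (hosts present / hosts covered) reaches min_fill; that ratio
    is recorded as the confidence of the aggregate.
    """
    size = 1 << (32 - plen)
    fill = node.count / size
    if plen >= min_plen and fill >= min_fill:
        yield ipaddress.IPv4Network((prefix << (32 - plen), plen)), fill
        return
    if plen == 32:
        yield ipaddress.IPv4Network((prefix, 32)), 1.0
        return
    for b in (0, 1):
        if node.children[b] is not None:
            yield from consolidate(node.children[b], (prefix << 1) | b,
                                   plen + 1, min_fill, min_plen)

root = TrieNode()
for ip in ("203.0.113.1", "203.0.113.3", "203.0.113.5",
           "203.0.113.6", "203.0.113.7"):
    insert(root, ip)
for net, conf in consolidate(root, min_plen=29):
    print(net, round(conf, 3))         # 203.0.113.0/29 0.625

As noted above, the trie has to be rebuilt from the original host list each
time the consolidation is re-run.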
So I went back to the drawing board, and I think I have something that
seems to work much better:
- convert input prefixes to single IPs expressed as integers
- sort -n | uniq
- write them into a temporary list file
begin
read sequentially until maxhosts (or minhosts) or the next subnet boundary
if enough single addresses matched, emit the covering subnet
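For illustration, here is one way that scan might look in Python with just
the standard library; the function and parameter names (fuzzy_aggregate,
max_block, min_hosts) are my assumptions, not the script being described:

# Rough rendition of the sequential scan sketched above (illustrative).
import ipaddress

def fuzzy_aggregate(addrs, max_block=8, min_hosts=5):
    """Greedy pass over sorted, de-duplicated host addresses.

    For each aligned block of max_block addresses (a power of two),
    emit one covering prefix when at least min_hosts of its addresses
    are present; otherwise fall back to individual /32s.
    """
    ints = sorted({int(ipaddress.IPv4Address(a)) for a in addrs})
    plen = 32 - (max_block.bit_length() - 1)      # e.g. 8 hosts -> /29
    out, i = [], 0
    while i < len(ints):
        block_start = ints[i] & ~(max_block - 1)  # align to block boundary
        j = i
        while j < len(ints) and ints[j] < block_start + max_block:
            j += 1                                # addresses in this block
        if j - i >= min_hosts:
            out.append(ipaddress.IPv4Network((block_start, plen)))
        else:
            out.extend(ipaddress.IPv4Network((v, 32)) for v in ints[i:j])
        i = j
    return out

hosts = ["203.0.113.1", "203.0.113.3", "203.0.113.5",
         "203.0.113.6", "203.0.113.7"]
print(fuzzy_aggregate(hosts))   # [IPv4Network('203.0.113.0/29')]

With max_block=8 and min_hosts=5 this reproduces the /29-missing-3 example
discussed later in the thread; blocks that never reach min_hosts are emitted
as plain /32s.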
Are you trying to reduce the number of ACL rules that cover a known set of
addresses, while also minimizing the covered addresses that are not part of
the mandatory set?
Tony
> On Oct 27, 2019, at 12:29, Joe Maimon wrote:
Joe Maimon wrote:
> Does anyone have, or seen, any such tool? I have a script that seems to
> work, but it's terribly slow.
It's a logic synthesis problem and should be NP-hard.
Masataka Ohta
On 10/27/19 4:27 PM, Joe Maimon wrote:
> I would be happy to get /29's missing 3, /28's missing 5, etc...
Are you good with rounding up to the next larger network if you have
~62% of the members?
> This is not punitive, it's about scale.
ACK
--
Grant. . . .
unix || die
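As a quick sanity check on the ~62% figure (illustrative arithmetic only,
not from either mail): a /29 with 3 of its 8 addresses missing is 62.5%
full, and a /28 missing 5 of 16 is about 69% full.

def fill_ratio(present, prefixlen):
    # fraction of a prefix's host space that is actually on the list
    return present / 2 ** (32 - prefixlen)

print(fill_ratio(5, 29))     # 0.625  -> "/29 missing 3"
print(fill_ratio(11, 28))    # 0.6875 -> "/28 missing 5"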
> On Sun, Oct 27, 2019 at 3:09 PM Joe Maimon wrote:
>>
>
> your aim is to get to maximum aggregation, with some overage, like
> 90% of a /24?
> so missing like 25 addresses in a whole /24 (for instance)
I would be happy to get /29's missing 3, /28's missing 5, etc...
This is not punitive, it's about scale.
Is this what you are trying to accomplish?
$ python
Python 2.7.15rc1 (default, Nov 12 2018, 14:31:15)
[GCC 7.3.0] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import netaddr
>>> SomeList=netaddr.IPSet()
>>> SomeList.add('203.0.113.0/25')
>>> SomeList.add('20
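The session above is cut off in the archive; a complete run might look like
the following, where the second prefix ('203.0.113.128/25') is an assumed
value chosen only to show that netaddr's IPSet does exact, loss-free merging:

import netaddr

SomeList = netaddr.IPSet()
SomeList.add('203.0.113.0/25')
SomeList.add('203.0.113.128/25')   # assumed value, not from the original mail
print(SomeList.iter_cidrs())       # [IPNetwork('203.0.113.0/24')]

Exact merging like this only collapses fully adjacent blocks, which is why
the reply below says "Not quite" and asks for a lossy (fuzzy) variant.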
On Sun, Oct 27, 2019 at 3:09 PM Joe Maimon wrote:
>
> Does anyone have, or seen, any such tool? I have a script that seems to
> work, but it's terribly slow.
>
> Currently I can produce aggregated subnets that can be missing up to a
> specified number of individual addresses, which can be fed back in.
> On 27. Oct 2019, at 20:36, Joe Maimon wrote:
>
> Not quite.
>
> 203.0.113.1
> 203.0.113.3
> 203.0.113.5
> 203.0.113.6
> 203.0.113.7
>
> Will aggregate to 203.0.113.0/29 if you don't mind the missing 3 addresses
> in the unaggregated list.
>
> Hence, fuzzy aggregation.
Could you describe the problem you are trying to solve in a bit more detail?
Not quite.
203.0.113.1
203.0.113.3
203.0.113.5
203.0.113.6
203.0.113.7
Will aggregate to 203.0.113.0/29 if you don't mind the missing 3 addresses
in the unaggregated list.
Hence, fuzzy aggregation.
Joe
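To make the example above concrete, here is a small standard-library sketch
(the function name and structure are mine, not Joe's script) that finds the
single covering prefix and counts how many of its addresses are missing
from the input:

import ipaddress

def covering_prefix(addrs):
    """Smallest prefix covering every address, plus the miss count."""
    ints = sorted(int(ipaddress.IPv4Address(a)) for a in set(addrs))
    lo, hi = ints[0], ints[-1]
    plen = 32
    # shorten the prefix until the lowest and highest address share it
    while plen > 0 and (lo >> (32 - plen)) != (hi >> (32 - plen)):
        plen -= 1
    net = ipaddress.IPv4Network(((lo >> (32 - plen)) << (32 - plen), plen))
    return net, net.num_addresses - len(ints)

print(covering_prefix(["203.0.113.1", "203.0.113.3", "203.0.113.5",
                       "203.0.113.6", "203.0.113.7"]))
# (IPv4Network('203.0.113.0/29'), 3)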
> Is this what you are trying to accomplish?