[quagga-dev 12549] Re: [PATCH 00/10] preparation for nhrpd

2015-05-28 Thread David Lamparter
As they're rather simple & uncontended, I've applied 2,3,5,6,7:

On Fri, May 22, 2015 at 01:40:54PM +0300, Timo Teräs wrote:
> Timo Teräs (10):
>   privs: fix privilege dropping to use system defined groups
>   lib: allow caller to provide prefix storage in sockunion2hostprefix
>   sockunion: add accessors for sockunion address
>   zebra: simplify redistribution code
>   route table: constify some APIs

Not applied (yet):
>   zebra: redistribute hw_type to connected daemons
>   lib: make stream.h lighter to include
>   zebra: make ZEBRA_FLAG_CHANGED internal status
>   zebra: atomic FIB updates
>   zebra: support FIB override routes

Thanks!



[quagga-dev 12548] [PATCH] Add code to extract.pl.in to prevent further cli function overwrites

2015-05-28 Thread Donald Sharp
Currently extract.pl.in is used to build the vtysh CLI.  When two
different CLI definitions collide on the same command name, the original
is never called, because it is silently dropped.  This change notes each
silent drop and tracks the total number of drops; if that count changes,
the build fails.

If you have added to the problem, the fix is to change your CLI command
so it does not stomp on someone else's.  If you have removed a stomp,
adjust the expected count in extract.pl.in as part of your commit.

Signed-off-by: Donald Sharp 
---
 vtysh/extract.pl.in |   25 +
 1 file changed, 25 insertions(+)

diff --git a/vtysh/extract.pl.in b/vtysh/extract.pl.in
index 99d80ed..8385eb5 100755
--- a/vtysh/extract.pl.in
+++ b/vtysh/extract.pl.in
@@ -59,6 +59,8 @@ $ignore{'"terminal monitor"'} = "ignore";
 $ignore{'"terminal no monitor"'} = "ignore";
 $ignore{'"show history"'} = "ignore";
 
+my $cli_stomp = 0;
+
 foreach (@ARGV) {
 $file = $_;
 
@@ -132,6 +134,12 @@ foreach (@ARGV) {
$defun_body = join (", ", @defun_array);
 
# $cmd -> $str hash for lookup
+   if (exists($cmd2str{$cmd})) {
+   warn "Duplicate CLI Function: $cmd\n";
+   warn "\tFrom cli: $cmd2str{$cmd} to New cli: $str\n";
+   warn "\tOriginal Protocol: $cmd2proto{$cmd} to New Protocol: 
$protocol\n";
+  $cli_stomp++;
+   }
$cmd2str{$cmd} = $str;
$cmd2defun{$cmd} = $defun_body;
$cmd2proto{$cmd} = $protocol;
@@ -165,6 +173,23 @@ foreach (@ARGV) {
 }
 }
 
+my $bad_cli_stomps = 78;
+# Currently we have $bad_cli_stomps.  When we have cli commands
+# that map to the same function name, we can introduce subtle
+# bugs due to code not being called when we think it is.
+#
+# If extract.pl fails with an error message and you've been
+# modifying the cli, then go back and fix your code to
+# not have cli command function collisions.
+#
+# If you've removed a cli overwrite, you can safely subtract
+# one from $bad_cli_stomps.  If you've added to the problem
+# please fix your code before submittal
+if ($cli_stomp != $bad_cli_stomps) {
+warn "Expected $bad_cli_stomps command line stomps, but got $cli_stomp 
instead\n";
+exit $cli_stomp;
+}
+
 # Check finaly alive $cmd;
 foreach (keys %odefun) {
 my ($node, $str) = (split (/,/));
-- 
1.7.10.4




[quagga-dev 12547] Re: [PATCH v3 00/22] VRF support in quagga

2015-05-28 Thread Vincent JARDIN

> In a small experiment that I'm running now Quagga (zebra,
> ospfd, pimd)  uses 9MB of RAM.  if I start adding VRFs I'm going to hit
> a wall really quick.


Right, memory will be an issue. Even worse, we have seen that the time
to start many daemons does not scale:


https://lists.quagga.net/pipermail/quagga-dev/2014-December/011933.html

Here, we are targeting 1000+.

Best regards,
  Vincent



[quagga-dev 12546] Re: [PATCH v3 00/22] VRF support in quagga

2015-05-28 Thread Joachim Nilsson
On 2015-05-28 17:45, Lennart Sorensen wrote:
> On Thu, May 28, 2015 at 03:36:30PM +0100, Paul Jakma wrote:
>> Are we on the same page with that? Or is there disagreement there?
> As someone working on small embedded level boxes without lots of cpu
> cores, the separate process per VRF is not desirable at all.  One process
> with one config interface running in multiple namespaces is ideal for us
> (and in fact what we are currently working on doing).

Amen to that, we¹ need the exact same thing.  Wasting RAM and CPU
on multiple processes is not at all desirable for those of us targeting
limited embedded systems.  Seeing Quagga move towards network
namespaces and a single daemon for this is awesome!

> If someone wants to support multiple processes as well, that's fine,
> as long as it works with a single process too for those that are not
> interested in the multi core scalability and associated overhead of
> multiple processes.

Agreed.  I'm just very happy to see so much activity lately, especially
since we've now got both David and Paul actively working on Quagga
again!  Despite differences and minor hiccups, this is turning out to
be quite a remarkable little community!

Keep up the awesome work guys! <3

Regards
 /Joachim

¹) Westermo



[quagga-dev 12545] Re: [PATCH v3 00/22] VRF support in quagga

2015-05-28 Thread Jafar Al-Gharaibeh


On 5/28/2015 11:35 AM, Paul Jakma wrote:
> On Thu, 28 May 2015, Lennart Sorensen wrote:
>
>> As someone working on small embedded level boxes without lots of cpu
>> cores, the separate process per VRF is not desirable at all.  One
>> process with one config interface running in multiple namespaces is
>> ideal for us (and in fact what we are currently working on doing).
>>
>> If someone wants to support multiple processes as well, that's fine,
>> as long as it works with a single process too for those that are not
>> interested in the multi core scalability and associated overhead of
>> multiple processes.
>
> Exactly how constrained are you that the kernel process descriptors,
> additional page tables and context switches are an issue?

But you are doing this for every daemon for every VRF; things add up
really quick. In a small experiment that I'm running now, Quagga (zebra,
ospfd, pimd) uses 9MB of RAM. If I start adding VRFs I'm going to hit
a wall really quick.

> Even parts of the embedded space are going multi-core now.

Other parts are single-core; both approaches (single and multi daemon)
have their best use cases.


Regards,
Jafar



[quagga-dev 12542] Re: [PATCH v3 00/22] VRF support in quagga

2015-05-28 Thread Jafar Al-Gharaibeh

Lennart, I agree. I kinda made the same points in earlier emails.

I also don't have a strong opinion against moving VRF creation/management
to a new daemon. Unlike the protocol daemons, which need to be replicated
across VRFs and so raise a scalability concern on some platforms, there
would be only one VRF daemon, so its overhead is fixed.

Another point of information/discussion which might affect scalability at
the network level: what is the effect of VRFs on protocol messages? Take
ospfd, for example: does it need to send independent protocol packets for
every VRF it has, or can some of them be combined? Do we need hello
packets per physical router, or per VRF? What about other types of
packets? Etc.


--Jafar

On 5/28/2015 10:45 AM, Lennart Sorensen wrote:

On Thu, May 28, 2015 at 03:36:30PM +0100, Paul Jakma wrote:

Hi Vincent,

I still have some stuff I need addressed, to be "sold" on.

First off, we need to discus the end-destination. As it may be very
difficult, if not impossible, to agree on the next step if we're not
agreed on the destination.

There seemed to wide agreement that in a perfect world we'd have a
RIB for each VRF/netns, and each instance of a protocol, in a
separate process. Indeed, you might want further separation (e.g.
each AFI/SAFI RIB in a different process, as there's next to no data
sharing between them).

The major obstacle to this is that our config/control code and UI
are currently too directly entwined. That needs fixing one day.

The destination then is to have a collection of processes, that
scales out with VRFs and/or protocol instances, peers, etc. I would
imagine there would be /one/ process there to add/destroy/create
VRFs and associated daemons, and that it'd be separate from the RIB
task in the VRF and other protocols.

Are we on the same page with that? Or is there disagreement there?

As someone working on small embedded level boxes without lots of cpu
cores, the separate process per VRF is not desirable at all.  One process
with one config interface running in multiple namespaces is ideal for us
(and in fact what we are currently working on doing).

If someone wants to support multiple processes as well, that's fine,
as long as it works with a single process too for those that are not
interested in the multi core scalability and associated overhead of
multiple processes.


If on the same page:

The next question how this patch set affects the distance to the
desired state. If we're at A and ultimately want to get to B, then
is this getting us directly there (great!)? Or is it perhaps at a
slight angle from the straight path, a slight detour but still
generally decreasing the distance to B? Or is it at right angles or
greater - increasing the distance?


Now we have to try make this specific, to try get some kind of
objective technical handle on how to assess this:

The first specific issue is whether commands added suit the destination:

I've looked through the code and the commands added look like
if/when we fix the config/control/rendezvous issues, and are able to
try split VRF management from 'zebra' into a VRF management daemon,
then the command definition should still be fine.

So that's good, that's getting us closer to the destination.

(Though, I wonder, why not just /start/ by putting the VRF code in a
dedicated VRF management daemon?)

Background: Note that it is already possible today to run
zebra+protocol daemons one-set-per-VRF/netns. You can write a script
to setup the VRFs and inter-VRF interfaces and it works. For each
VRF you launch zebra and the client daemons for that VRF with a
VRF-specific Zserv path.

The next specific issue is how is this going to interact with client
daemons?

Your patch-set does this by making VRF part of ZServ. Every client
must now become VRF-aware, even if to set the VRF-Id.

However, couldn't this also be accomplished by simply having a
separate ZServ socket for each VRF? The socket for the default VRF
having the (existing) default path? No client mods needed?

Doing it inside the 1 ZServ socket can only serve adding multi-VRF
to client daemons. I will have major concerns about taking such
patches though.

We've not been given a use-case for single-daemon/multi-instance.
Note that protocol-daemon per netns _already_ works today. So what
reason will we have in the future to accept multi-VRF-single-daemon
patches, to take on that level of churn and maintenance needs - when
the use-case will already be covered?

Without multi-VRF-single-daemon, there is no need to have ZServ
support VRF Ids.

Indeed, even with multi-VRF-single-daemon, it might still be better
to have 1 ZServ protocol instance/socket per VRF. It would then be
easier later to split up the zebra side into per-VRF processes.

It makes configuration management from another process much more work.
That's why we aren't doing that.


---

So in summary, the issues I have:

* Why not have VRF setup/management separate from zebra? A single VRF
   manage

[quagga-dev 12544] Re: [PATCH v3 00/22] VRF support in quagga

2015-05-28 Thread Jafar Al-Gharaibeh


On 5/28/2015 11:29 AM, Jafar Al-Gharaibeh wrote:

Lennart, I agree. I Kinda made the same statements in earlier emails.

I also don't have a strong opinion against moving VRF 
creation/management to a new daemon.
Unlike other protocol daemons that need to be replicated across 
different VRFs creating a scalability concern on

some platforms,  a VRF daemon is only one so the overhead is fixed.

Another point of information/discussion which might affect scalability 
at the network level,
What is the effect of VRFs on protocol messages? take ospfd for 
example, does it need to
send independent protocol packets for every VRF it has? or can some of 
them be combined?
Do we need hello packets per physical router? or per VRF? wht about 
other types of packets? etc...


Just to be clear here, I was just wondering if the single daemon
approach would have any advantage at the network (overhead) level.




--Jafar

On 5/28/2015 10:45 AM, Lennart Sorensen wrote:

On Thu, May 28, 2015 at 03:36:30PM +0100, Paul Jakma wrote:

Hi Vincent,

I still have some stuff I need addressed, to be "sold" on.

First off, we need to discus the end-destination. As it may be very
difficult, if not impossible, to agree on the next step if we're not
agreed on the destination.

There seemed to wide agreement that in a perfect world we'd have a
RIB for each VRF/netns, and each instance of a protocol, in a
separate process. Indeed, you might want further separation (e.g.
each AFI/SAFI RIB in a different process, as there's next to no data
sharing between them).

The major obstacle to this is that our config/control code and UI
are currently too directly entwined. That needs fixing one day.

The destination then is to have a collection of processes, that
scales out with VRFs and/or protocol instances, peers, etc. I would
imagine there would be /one/ process there to add/destroy/create
VRFs and associated daemons, and that it'd be separate from the RIB
task in the VRF and other protocols.

Are we on the same page with that? Or is there disagreement there?

As someone working on small embedded level boxes without lots of cpu
cores, the separate process per VRF is not desirable at all. One process
with one config interface running in multiple namespaces is ideal for us
(and in fact what we are currently working on doing).

If someone wants to support multiple processes as well, that's fine,
as long as it works with a single process too for those that are not
interested in the multi core scalability and associated overhead of
multiple processes.


If on the same page:

The next question how this patch set affects the distance to the
desired state. If we're at A and ultimately want to get to B, then
is this getting us directly there (great!)? Or is it perhaps at a
slight angle from the straight path, a slight detour but still
generally decreasing the distance to B? Or is it at right angles or
greater - increasing the distance?


Now we have to try make this specific, to try get some kind of
objective technical handle on how to assess this:

The first specific issue is whether commands added suit the 
destination:


I've looked through the code and the commands added look like
if/when we fix the config/control/rendezvous issues, and are able to
try split VRF management from 'zebra' into a VRF management daemon,
then the command definition should still be fine.

So that's good, that's getting us closer to the destination.

(Though, I wonder, why not just /start/ by putting the VRF code in a
dedicated VRF management daemon?)

Background: Note that it is already possible today to run
zebra+protocol daemons one-set-per-VRF/netns. You can write a script
to setup the VRFs and inter-VRF interfaces and it works. For each
VRF you launch zebra and the client daemons for that VRF with a
VRF-specific Zserv path.

The next specific issue is how is this going to interact with client
daemons?

Your patch-set does this by making VRF part of ZServ. Every client
must now become VRF-aware, even if to set the VRF-Id.

However, couldn't this also be accomplished by simply having a
separate ZServ socket for each VRF? The socket for the default VRF
having the (existing) default path? No client mods needed?

Doing it inside the 1 ZServ socket can only serve adding multi-VRF
to client daemons. I will have major concerns about taking such
patches though.

We've not been given a use-case for single-daemon/multi-instance.
Note that protocol-daemon per netns _already_ works today. So what
reason will we have in the future to accept multi-VRF-single-daemon
patches, to take on that level of churn and maintenance needs - when
the use-case will already be covered?

Without multi-VRF-single-daemon, there is no need to have ZServ
support VRF Ids.

Indeed, even with multi-VRF-single-daemon, it might still be better
to have 1 ZServ protocol instance/socket per VRF. It would then be
easier later to split up the zebra side into per-VRF processes.

It makes configuration managem

[quagga-dev 12543] Re: [PATCH v3 00/22] VRF support in quagga

2015-05-28 Thread Paul Jakma

On Thu, 28 May 2015, Lennart Sorensen wrote:

> As someone working on small embedded level boxes without lots of cpu
> cores, the separate process per VRF is not desirable at all.  One
> process with one config interface running in multiple namespaces is
> ideal for us (and in fact what we are currently working on doing).
>
> If someone wants to support multiple processes as well, that's fine, as
> long as it works with a single process too for those that are not
> interested in the multi core scalability and associated overhead of
> multiple processes.

Exactly how constrained are you that the kernel process descriptors,
additional page tables and context switches are an issue?

Even parts of the embedded space are going multi-core now.

> It makes configuration management from another process much more work.
> That's why we aren't doing that.

The destination assumes we fix that.

It's definitely an issue now, agreed. However, does that mean we should
allow our limitations to corner us?


regards,
--
Paul Jakma  p...@jakma.org  @pjakma Key ID: 64A2FF6A
Fortune:
"An ounce of prevention is worth a ton of code."
-- an anonymous programmer



[quagga-dev 12541] Re: [PATCH v3 00/22] VRF support in quagga

2015-05-28 Thread Lennart Sorensen
On Thu, May 28, 2015 at 03:36:30PM +0100, Paul Jakma wrote:
> Hi Vincent,
> 
> I still have some stuff I need addressed, to be "sold" on.
> 
> First off, we need to discus the end-destination. As it may be very
> difficult, if not impossible, to agree on the next step if we're not
> agreed on the destination.
> 
> There seemed to wide agreement that in a perfect world we'd have a
> RIB for each VRF/netns, and each instance of a protocol, in a
> separate process. Indeed, you might want further separation (e.g.
> each AFI/SAFI RIB in a different process, as there's next to no data
> sharing between them).
> 
> The major obstacle to this is that our config/control code and UI
> are currently too directly entwined. That needs fixing one day.
> 
> The destination then is to have a collection of processes, that
> scales out with VRFs and/or protocol instances, peers, etc. I would
> imagine there would be /one/ process there to add/destroy/create
> VRFs and associated daemons, and that it'd be separate from the RIB
> task in the VRF and other protocols.
> 
> Are we on the same page with that? Or is there disagreement there?

As someone working on small embedded level boxes without lots of cpu
cores, the separate process per VRF is not desirable at all.  One process
with one config interface running in multiple namespaces is ideal for us
(and in fact what we are currently working on doing).

If someone wants to support multiple processes as well, that's fine,
as long as it works with a single process too for those that are not
interested in the multi core scalability and associated overhead of
multiple processes.

> If on the same page:
> 
> The next question how this patch set affects the distance to the
> desired state. If we're at A and ultimately want to get to B, then
> is this getting us directly there (great!)? Or is it perhaps at a
> slight angle from the straight path, a slight detour but still
> generally decreasing the distance to B? Or is it at right angles or
> greater - increasing the distance?
> 
> 
> Now we have to try make this specific, to try get some kind of
> objective technical handle on how to assess this:
> 
> The first specific issue is whether commands added suit the destination:
> 
> I've looked through the code and the commands added look like
> if/when we fix the config/control/rendezvous issues, and are able to
> try split VRF management from 'zebra' into a VRF management daemon,
> then the command definition should still be fine.
> 
> So that's good, that's getting us closer to the destination.
> 
> (Though, I wonder, why not just /start/ by putting the VRF code in a
> dedicated VRF management daemon?)
> 
> Background: Note that it is already possible today to run
> zebra+protocol daemons one-set-per-VRF/netns. You can write a script
> to setup the VRFs and inter-VRF interfaces and it works. For each
> VRF you launch zebra and the client daemons for that VRF with a
> VRF-specific Zserv path.
> 
> The next specific issue is how is this going to interact with client
> daemons?
> 
> Your patch-set does this by making VRF part of ZServ. Every client
> must now become VRF-aware, even if to set the VRF-Id.
> 
> However, couldn't this also be accomplished by simply having a
> separate ZServ socket for each VRF? The socket for the default VRF
> having the (existing) default path? No client mods needed?
> 
> Doing it inside the 1 ZServ socket can only serve adding multi-VRF
> to client daemons. I will have major concerns about taking such
> patches though.
> 
> We've not been given a use-case for single-daemon/multi-instance.
> Note that protocol-daemon per netns _already_ works today. So what
> reason will we have in the future to accept multi-VRF-single-daemon
> patches, to take on that level of churn and maintenance needs - when
> the use-case will already be covered?
> 
> Without multi-VRF-single-daemon, there is no need to have ZServ
> support VRF Ids.
> 
> Indeed, even with multi-VRF-single-daemon, it might still be better
> to have 1 ZServ protocol instance/socket per VRF. It would then be
> easier later to split up the zebra side into per-VRF processes.

It makes configuration management from another process much more work.
That's why we aren't doing that.

> ---
> 
> So in summary, the issues I have:
> 
> * Why not have VRF setup/management separate from zebra? A single VRF
>   management daemon, then dumb (zebra+client daemons)-per VRF (already
>   somewhat possible now with scripts)?

Because of the overhead of extra processes and complexity in managing
the config of those processes.  Some of us are not going to go there,
so if quagga as a project decides to go that way, we may be stuck on
our own private branch forever which would suck.

> * Doing it with one ZServ socket assumes that Quagga will want to support
>   single-process (certainly single-peer) multi-VRF daemons. Why is that
>   latter point attractive when we can already cover the VRF use-case with
>   existing VRF-unaw

[quagga-dev 12540] Re: [PATCH 04/10] lib: make stream.h lighter to include

2015-05-28 Thread Paul Jakma

On Thu, 28 May 2015, Timo Teras wrote:

> As an alternative, I would then move the encoding/decoding of all the
> complicated structures to where each type is defined. That is,
> stream_put_prefix() should go in prefix.h (perhaps renamed to
> prefix_something). The problem is that it's not practical to assume
> that stream.h depends on all the data types we want to encode/decode.

You could say prefix shouldn't depend on stream either, though.

The data types stream depends on should be fairly primitive and stable,
though. In Quagga, I guess we accept prefix is that.


regards,
--
Paul Jakma  p...@jakma.org  @pjakma Key ID: 64A2FF6A
Fortune:
Father Fitzpatrick: You left the cyanide capsules next to the Valium, you
old fool. That's just asking for trouble!



[quagga-dev 12539] Re: [PATCH 08/10] zebra: make ZEBRA_FLAG_CHANGED internal status

2015-05-28 Thread Paul Jakma

ack

--
Paul Jakma  p...@jakma.org  @pjakma Key ID: 64A2FF6A
Fortune:
"One's never alone with a rubber duck. "



[quagga-dev 12538] Re: [PATCH 07/10] route table: constify some APIs

2015-05-28 Thread Paul Jakma

Ack

--
Paul Jakma  p...@jakma.org  @pjakma Key ID: 64A2FF6A
Fortune:
What makes the universe so hard to comprehend is that there's nothing
to compare it with.



[quagga-dev 12536] Re: [PATCH 04/10] lib: make stream.h lighter to include

2015-05-28 Thread Timo Teras
On Thu, 28 May 2015 15:53:38 +0100 (BST)
Paul Jakma  wrote:

> On Fri, 22 May 2015, Donald Sharp wrote:
> 
> > I would buy the faster build time if the header wasn't actually
> > needed.  To take the argument to it's logical conclusion just
> > define everything you need right in the c code instead of using
> > headers if we are after faster compile time.
> >
> > If there are namespace collisions from stream.h and other system
> > headers they should be solved via configure.ac changes.  Or further
> > refactoring of headers..  If I need a struct from prefix.h I say we
> > should include it.
> 
> Yeah, I'd agree.

It's not possible in some cases, linux-kernel headers being a notable
example of this.

And really only "struct prefix *" is needed, so a "struct prefix;" is
the better way to do it.
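
For illustration, a minimal sketch of that forward-declaration approach
(the guard name and prototype are illustrative, not copied from the
Quagga tree):

  /* stream.h (sketch): a forward declaration is enough, because the
   * prototypes only ever take a pointer to struct prefix, so prefix.h
   * does not need to be included here. */
  #ifndef _SKETCH_STREAM_H
  #define _SKETCH_STREAM_H

  struct prefix;   /* forward declaration instead of #include "prefix.h" */
  struct stream;

  /* Illustrative prototype: callers that actually pass a prefix will
   * already have included prefix.h themselves. */
  int stream_put_prefix (struct stream *s, struct prefix *p);

  #endif /* _SKETCH_STREAM_H */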

As an alternative, I would then move the encoding/decoding of all the
complicated structures to where each type is defined. That is,
stream_put_prefix() should go in prefix.h (perhaps renamed to
prefix_something). The problem is that it's not practical to assume that
stream.h depends on all the data types we want to encode/decode.
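
And a sketch of that alternative, with the encoder declared next to the
type it encodes ("prefix_put" is a hypothetical name standing in for the
"prefix_something" rename mentioned above):

  /* prefix.h (sketch): the encode helper lives with the type it encodes,
   * so stream.h never has to know about struct prefix at all. */
  struct stream;   /* forward declaration of the stream type */

  struct prefix    /* simplified; the real struct carries an address union */
  {
    unsigned char family;
    unsigned char prefixlen;
  };

  /* Hypothetical replacement for stream_put_prefix(), implemented in
   * prefix.c alongside the other prefix helpers. */
  int prefix_put (struct stream *s, const struct prefix *p);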

But since I'm now using zbuf internally in my code, I don't "have to"
have this.

/Timo



[quagga-dev 12537] Re: [PATCH 06/10] zebra: simplify redistribution code

2015-05-28 Thread Paul Jakma

Ack.

On Fri, 22 May 2015, Timo Teräs wrote:


Merge the conditionals as one to avoid code duplication.

Signed-off-by: Timo Teräs 


regards,
--
Paul Jakma  p...@jakma.org  @pjakma Key ID: 64A2FF6A
Fortune:
About the only thing we have left that actually discriminates in favor of
the plain people is the stork.


[quagga-dev 12535] Re: [PATCH v3 00/22] VRF support in quagga

2015-05-28 Thread Paul Jakma

On Thu, 28 May 2015, Paul Jakma wrote:


daemons, for a use-case that does need the client-daemons to be modified?


s/does need/does not need/

regards,
--
Paul Jakma  p...@jakma.org  @pjakma Key ID: 64A2FF6A
Fortune:
Don't hit the keys so hard, it hurts.



[quagga-dev 12534] Re: [PATCH 04/10] lib: make stream.h lighter to include

2015-05-28 Thread Paul Jakma

On Fri, 22 May 2015, Donald Sharp wrote:

> I would buy the faster build time if the header wasn't actually needed.
> To take the argument to its logical conclusion, just define everything
> you need right in the c code instead of using headers if we are after
> faster compile time.
>
> If there are namespace collisions from stream.h and other system headers
> they should be solved via configure.ac changes.  Or further refactoring
> of headers.  If I need a struct from prefix.h I say we should include it.

Yeah, I'd agree.

regards,
--
Paul Jakma  p...@jakma.org  @pjakma Key ID: 64A2FF6A
Fortune:
Never let someone who says it cannot be done interrupt the person who is
doing it.



[quagga-dev 12533] Re: [PATCH 02/10] privs: fix privilege dropping to use system defined groups

2015-05-28 Thread Paul Jakma

On Fri, 22 May 2015, Timo Teräs wrote:


It may be required for the quagga process to belong to additional
groups. E.g. the nhrp module will need to talk to strongSwan using
vici and may require additional permissions. Initialize groups
from the system group database.

Signed-off-by: Timo Teräs 


Ack.
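
For context, a minimal sketch of what initializing supplementary groups
from the system database before dropping privileges typically looks like
(generic POSIX usage, not the patch itself; the user name and helper
name are assumptions):

  #include <grp.h>
  #include <pwd.h>
  #include <unistd.h>

  /* Sketch only: switch to an unprivileged user while picking up its
   * supplementary groups from the system group database, so access to
   * things like strongSwan's vici socket group is retained. */
  static int
  drop_privs_to (const char *user)     /* e.g. "quagga" -- assumed name */
  {
    struct passwd *pw = getpwnam (user);
    if (pw == NULL)
      return -1;

    /* Load the user's supplementary groups before changing UID. */
    if (initgroups (pw->pw_name, pw->pw_gid) != 0)
      return -1;
    if (setgid (pw->pw_gid) != 0)
      return -1;
    if (setuid (pw->pw_uid) != 0)
      return -1;

    return 0;
  }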

regards,
--
Paul Jakma  p...@jakma.org  @pjakma Key ID: 64A2FF6A
Fortune:
Convention is the ruler of all.
-- Pindar


[quagga-dev 12532] Re: [PATCH v3 00/22] VRF support in quagga

2015-05-28 Thread Paul Jakma

Hi Vincent,

I still have some stuff I need addressed, to be "sold" on.

First off, we need to discuss the end-destination, as it may be very
difficult, if not impossible, to agree on the next step if we're not
agreed on the destination.


There seemed to be wide agreement that in a perfect world we'd have a RIB for 
each VRF/netns, and each instance of a protocol, in a separate process. 
Indeed, you might want further separation (e.g. each AFI/SAFI RIB in a 
different process, as there's next to no data sharing between them).


The major obstacle to this is that our config/control code and UI are 
currently too directly entwined. That needs fixing one day.


The destination then is to have a collection of processes that scales out 
with VRFs and/or protocol instances, peers, etc. I would imagine there 
would be /one/ process there to add/destroy/create VRFs and associated 
daemons, and that it'd be separate from the RIB task in the VRF and other 
protocols.


Are we on the same page with that? Or is there disagreement there?


If on the same page:

The next question is how this patch set affects the distance to the desired 
state. If we're at A and ultimately want to get to B, then is this getting 
us directly there (great!)? Or is it perhaps at a slight angle from the 
straight path, a slight detour but still generally decreasing the distance 
to B? Or is it at right angles or greater - increasing the distance?



Now we have to try to make this specific, to try to get some kind of objective 
technical handle on how to assess this:


The first specific issue is whether commands added suit the destination:

I've looked through the code, and it looks like, if/when we fix the
config/control/rendezvous issues and are able to split VRF management
out of 'zebra' into a VRF management daemon, the command definitions
added here should still be fine.


So that's good, that's getting us closer to the destination.

(Though, I wonder, why not just /start/ by putting the VRF code in a 
dedicated VRF management daemon?)


Background: Note that it is already possible today to run zebra+protocol
daemons one-set-per-VRF/netns. You can write a script to set up the VRFs
and inter-VRF interfaces and it works. For each VRF you launch zebra and
the client daemons for that VRF with a VRF-specific Zserv path.


The next specific issue is how is this going to interact with client 
daemons?


Your patch-set does this by making VRF part of ZServ. Every client must
now become VRF-aware, even if only to set the VRF-Id.


However, couldn't this also be accomplished by simply having a separate 
ZServ socket for each VRF? The socket for the default VRF having the 
(existing) default path? No client mods needed?
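
For illustration, a minimal sketch of that per-VRF rendezvous idea (the
helper name and the default-path value are assumptions for the sketch,
not taken from the tree):

  #include <stdio.h>
  #include <string.h>

  #define ZSERV_DEFAULT_PATH "/var/run/quagga/zserv.api"  /* assumed default */

  /* Sketch: the default VRF keeps the existing socket path (no client
   * modifications needed); every other VRF gets its own suffixed path. */
  static void
  zserv_path_for_vrf (const char *vrf, char *buf, size_t len)
  {
    if (vrf == NULL || strcmp (vrf, "default") == 0)
      snprintf (buf, len, "%s", ZSERV_DEFAULT_PATH);
    else
      snprintf (buf, len, "%s.%s", ZSERV_DEFAULT_PATH, vrf);  /* e.g. ...api.red */
  }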


Doing it inside the 1 ZServ socket can only serve adding multi-VRF to 
client daemons. I will have major concerns about taking such patches 
though.


We've not been given a use-case for single-daemon/multi-instance. Note 
that protocol-daemon per netns _already_ works today. So what reason will 
we have in the future to accept multi-VRF-single-daemon patches, to take 
on that level of churn and maintenance needs - when the use-case will 
already be covered?


Without multi-VRF-single-daemon, there is no need to have ZServ support 
VRF Ids.


Indeed, even with multi-VRF-single-daemon, it might still be better to 
have 1 ZServ protocol instance/socket per VRF. It would then be easier 
later to split up the zebra side into per-VRF processes.


---

So in summary, the issues I have:

* Why not have VRF setup/management separate from zebra? A single VRF
  management daemon, then dumb (zebra+client daemons)-per VRF (already
  somewhat possible now with scripts)?

* Doing it with one ZServ socket assumes that Quagga will want to support
  single-process (certainly single-peer) multi-VRF daemons. Why is that
  latter point attractive when we can already cover the VRF use-case with
  existing VRF-unaware-daemons-per-VRF?

Basically, my problem is that if I sign up to this zebra multi-VRF patch, 
am I signing up for later immense amounts of churn in the routing client 
daemons, for a use-case that does need the client-daemons to be modified?


Note: I'd sign up for a simple VRF management daemon, with VRF-specific 
ZServ paths, in a jiffy.


On Wed, 27 May 2015, Vincent JARDIN wrote:


David, Paul,

Until now, I did prefer to stay quiet to avoid confusion.

1- I understand that there is a consensus on this contribution (both 
approaches Single Daemon and Multi Daemon make sense and can coexist),

 2- my review of the code is OK too.

So, I am ready for a merge of this v3 of the patch, unless there is another 
concern on the code.


Best regards,
  Vincent

PS: Sorry, I cannot connect on IRC during dayworks.

On 27/05/2015 10:42, Nicolas Dichtel wrote:

 I also agree.


 Thank you,
 Nicolas

 On 26/05/2015 22:33, Donald Sharp wrote:
>  Jafar -
> 
>  Thanks for saying this so eloquently.  This is exactly what we have been

>  discus