"Dr. Michael Weller" wrote:

> On Wed, 5 Apr 2000, Christopher W. Curtis wrote:
> 
> > Hello,
> >
> > I'm wondering if I've run across a routing bug or simply my own
> > misunderstanding:

[two default routes, one metric 1, other metric 2, metric 1 dies]

> There is a specific networking list...

Thanks for the clue.  ;-)

> This is a common misunderstanding (also on configuring dedicated routing
> hardware): A static route is a static route is a static route...
> 
> The routing statement just tells where to forward the packets with this
> destination to. The routing of TCP/IP works completely distributed. Each
> host only knows about the next hop and so the route taken between hosts
> might even differ for packets going in the different directions. It has no
> clue about what the destination does or does not with the packets.
> 
> So what you want to achieve can only be done by a routing protocol.

Well, this isn't really true.  I used to have both gateways installed
with a metric of 1, and when one went down, the other would take over.
I could tell the difference by running tracepath and seeing a different
gateway being used, without any change in my routing tables.

Now, this works for me, but the problem is that which one Linux chooses
is determined by order of definition (a minor nit), and there is no way
to move traffic back to the other gateway when I bring it back up,
except to delete the working route and then re-add it.  I thought that
I could do this by specifying a metric.
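For concreteness, here is a sketch of the setup I'm describing, using
hypothetical gateway addresses (192.168.1.1 and 192.168.1.2 stand in
for the real ones):

```shell
# Two default routes, the backup with a higher metric:
route add default gw 192.168.1.1 metric 1
route add default gw 192.168.1.2 metric 2

# Moving traffic back to a restored gateway currently means deleting
# the route that took over and re-adding it:
route del default gw 192.168.1.2
route add default gw 192.168.1.2 metric 2
```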

If this is not possible, and dynamic routes are better managed by
routed/gated, then can I safely say that the metric value is completely
useless if you're not running these?

> Static routes, however, are static. The fastest route is taken and that's
> it. Maybe the kernel should simply reject any duplicate routes with the
> same destination even with different metrics. If it doesn't, the IMHO only
> sensible reason could be that it will distribute traffic to this
> destination over the routes with the inverse ratio of the metrics.

Well, I don't like that at all, because I was using two default routes
with the same metric and it was working more closely to how I wanted it
to.  Just not as automatic as I wanted, which is what I thought the
metric would fix.

I don't know much about TCP/IP, but I know that it reports 'Host
Unreachable' messages (via ICMP), and I would think that using these it
could fall back to the next route with a higher metric, since it
already will choose a different route with the same metric.  I haven't
thought it out much, but if a route were marked unreachable, it
wouldn't seem unreasonable for Linux to poll that route intermittently
(once a minute?) to see whether a lower-metric route that went awry had
come back.  (Perhaps "route" is the wrong term, but host and gateway
entries should be able to work like this.)
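The polling idea above could be approximated in user space today; here
is a minimal sketch, again with hypothetical gateway addresses, and not
a substitute for a real routing protocol:

```shell
#!/bin/sh
# Watch the preferred gateway; fail the default route over to a backup
# when it stops answering, and fail back when it returns.
# Addresses are hypothetical placeholders.
PRIMARY=192.168.1.1     # preferred gateway
BACKUP=192.168.1.2      # fallback gateway

while true; do
    if ping -c 1 -w 2 "$PRIMARY" >/dev/null 2>&1; then
        # Primary answers: make sure it carries the default route.
        route del default gw "$BACKUP"  2>/dev/null
        route add default gw "$PRIMARY" 2>/dev/null
    else
        # Primary is down: switch the default route to the backup.
        route del default gw "$PRIMARY" 2>/dev/null
        route add default gw "$BACKUP"  2>/dev/null
    fi
    sleep 60    # the "once a minute" poll suggested above
done
```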

> > ipmasqadm and an NCR controller on the Alpha, still looking...) Either
> > way, the machine went down, and Linux would not go to the default router
> > with a metric of 2.  I couldn't find any information regarding this in
> 
> The point is.. how do you think Linux would have learned about the other
> machine going down? Plain TCP/IP has no means to achieve that. A routing
> protocol might have guessed this after getting no response from a router
> daemon on styx for a while.

styx is going to be a firewall, so I don't want many services running on
it.  We're not multi-homed; I'm just trying to debug the crashes, but I
don't want to lose access while I'm doing it.  (You might think that
auto-failover would be bad if I'm trying to see whether my gateway has
crashed, but it seems the cause is port-forwarding, which I hand off to
another host, so other services will die but downloads, etc., will
continue.)

Thanks for the reply,
Christopher
-
To unsubscribe from this list: send the line "unsubscribe linux-net" in
the body of a message to [EMAIL PROTECTED]
