On 2021/12/04 15:32, Krzysztof Kanas wrote:
> I tried the settings:
> 
> inet 0.0.0.0 255.255.255.255 0.0.0.1 \
> pppoedev em0 authproto chap \
> authname 'testcaller' authkey 'secret' up
> !/sbin/route add default -ifp pppoe0 0.0.0.1
> 
> But that didn't fix my IPCP negotiation problem.
> 
> If I change the first line to (using a bogus remote IP):
> inet 0.0.0.0 255.255.255.255 10.64.64.33 \
> ...
> 
> Then it works around the IPCP problem, but now routing becomes a
> problem, as
> !/sbin/route add default -ifp pppoe0 0.0.0.1
> no longer works.
> 
> So
> 
> inet 0.0.0.0 255.255.255.255 10.64.64.33 \
> pppoedev em0 authproto chap \
> authname 'testcaller' authkey 'secret' up
> !/sbin/route add default -ifp pppoe0 10.64.64.33
> 
> Seems to be working.
> 
> I will test it more, but does this sound correct?
> And if so, is it worthwhile adding this to the BUGS section of the
> man page?

This is expected. With the current routing table code, even with -ifp set
to a point-to-point interface, the route can only be added if the route
destination matches the remote address on the interface at the time.

"0.0.0.1" can only be added as a route destination while the remote
address is still set to the 0.0.0.1 'magic' value. So there's also a
race condition here: if the interface comes up quickly and IPCP
completes before the route command is executed, the route won't be
added. In practice I haven't hit this when the whole thing is run from
netstart, but when commands are entered by hand you usually need to
check and use the actual address.

And of course if you configure a value other than 0.0.0.1, you need to
use that value in the route instead.
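A sketch of what that would look like in /etc/hostname.pppoe0, assuming
a configured remote address of 192.0.2.1 (a documentation placeholder,
not a value from this thread) and the same credentials as above:

```shell
# Remote address configured explicitly instead of the 0.0.0.1 magic
# value; the route destination must then match it.
inet 0.0.0.0 255.255.255.255 192.0.2.1 \
        pppoedev em0 authproto chap \
        authname 'testcaller' authkey 'secret' up
!/sbin/route add default -ifp pppoe0 192.0.2.1
```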

So, while it's a bit annoying, and worse than the older routing table
code, I do think it's working as expected, i.e. not a bug.
