Following up on the little diary of my experiments, here is what I
believe I have found:

Because I'm doing traffic shaping at the output of my NIC (HTB with hard
capping), I believe that when there is a lot of outbound traffic in the
StrongSwan tunnel, it also clogs the available bandwidth for the whole
link. Could it be the DPD mechanism that can't get through fast enough
anymore, triggering a restart? That's when bad things happen.
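
Something like this is what I have in mind for the shaping side (the
interface name, class IDs and rates below are only illustrative, not my
actual numbers): a small, high-priority child class that keeps room for
IKE/DPD even when the bulk class is saturated.

    # Hard-capped HTB root, with a small priority class for IKE/DPD
    # and a default bulk class for everything else (illustrative values).
    tc qdisc add dev eth0 root handle 1: htb default 20
    tc class add dev eth0 parent 1:  classid 1:1  htb rate 10mbit ceil 10mbit
    tc class add dev eth0 parent 1:1 classid 1:10 htb rate 512kbit ceil 10mbit prio 0
    tc class add dev eth0 parent 1:1 classid 1:20 htb rate 9mbit   ceil 10mbit prio 1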

So I guess I should be able to define a specific priority for the packets
responsible for DPD / IKE. For that, I'm using the value ikedscp=101110
in my ipsec.conf, together with an iptables rule that pushes the DSCP 0x2e
packets through a priority lane. But am I doing it right?
Is there a better way? What could I match on to be sure that all the
signalling packets have a higher priority than the data packets?
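
In case it helps to see it spelled out, here is roughly what I have /
what I'm considering (interface name, class IDs and the conn name are
placeholders; DSCP 46 decimal is the 0x2e / 101110 value mentioned
above):

    # ipsec.conf: set DSCP EF on outgoing IKE (and therefore DPD) packets
    conn mytunnel
            ikedscp=101110

    # iptables: steer DSCP 46 (0x2e) packets into the priority HTB class 1:10
    iptables -t mangle -A POSTROUTING -o eth0 -m dscp --dscp 46 \
             -j CLASSIFY --set-class 1:10

    # Alternative: match IKE directly on UDP 500/4500 instead of relying
    # on the DSCP value; DPD travels inside IKE informational exchanges,
    # so this catches it as well.
    iptables -t mangle -A POSTROUTING -o eth0 -p udp -m multiport \
             --dports 500,4500 -j CLASSIFY --set-class 1:10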

Thanks!

    Hoggins!

On 20/01/2018 at 18:24, Hoggins! wrote:
> I think I'm getting closer to what I'm looking for.
>
> So this event happens because I have dpdaction=restart. At least that's
> what I found.
> The problem is that with auto=route, if there is any connection drop, the
> tunnel is never reestablished, hence the dpdaction=restart, which was
> obviously a workaround for me.
>
> So I guess I need to understand:
>     - why, when there is excessive latency on the link, the tunnel fails
> (should I work on the replay_window parameter? The StrongSwan "server"
> audit daemon complains with a "MAC_IPSEC_EVENT op=SA-replayed-pkt")
>     - why, after it has failed, the tunnel is not reestablished and the
> traps are not catching anything
>
>     Hoggins!

