Re: ipsecctl setting up multiple SAs
On 24 nov 2006, at 22.44, Brian Candler wrote:
> Is this 60 second timeout a tunable? Or can you point me to where it's
> defined in the kernel? I'd like to try increasing it.

sysctl net.inet.ip.ipsec-invalid-life=60

(If you're curious, look at reserve_spi() in /usr/src/sys/netinet/ip_ipsp.c)

/H
Re: ipsecctl setting up multiple SAs
On Sat, Nov 25, 2006 at 02:29:46PM +, Brian Candler wrote:
> So now I need to establish whether those original 1,000 sent packets were
> actually arriving at the Cisco or not, which perhaps careful use of
> interface counters might reveal, or else I need to dig out a switch with
> port mirroring.

Interface counters are inconclusive. Making a measurement 3 seconds after I
start isakmpd, I get:

OpenBSD: [netstat -nI rl0, taking the difference]
  Out: 1161  In: 162

Cisco: [clear counters g0/0; wait 3 secs; show int g0/0]
  763 packets input, 124762 bytes, 40 no buffer
  Received 0 broadcasts, 0 runts, 0 giants, 2 throttles
  0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
  0 watchdog, 0 multicast, 0 pause input
  0 input packets with dribble condition detected
  161 packets output, 35536 bytes, 0 underruns
  0 output errors, 0 collisions, 0 interface resets
  0 babbles, 0 late collision, 0 deferred
  0 lost carrier, 0 no carrier, 0 pause output
  0 output buffer failures, 0 output buffers swapped out

So it looks like some packets have been lost on reception, but it's not
clear whether all the packets made it out of the OpenBSD box.

However, with "debug crypto isakmp errors" enabled on the Cisco, I get lots
of messages like this:

*Aug 14 12:20:28.402: ISAKMP:(13124): starving for SPIs...
*Aug 14 12:20:28.410: ISAKMP:(13124): starving for SPIs...
*Aug 14 12:20:28.418: ISAKMP:(13124): starving for SPIs...
*Aug 14 12:20:28.422: ISAKMP:(13124): starving for SPIs...
*Aug 14 12:20:28.422: ISAKMP:(13124): starving for SPIs...

Google doesn't give any hits for this message. But I guess the Cisco can
only allocate SPIs so fast, and that's probably the main thing throttling
this.

Anyway, the box I'm testing against has to be shipped out on Monday, but if
I manage to find out anything more I'll let you know.

Regards, Brian.
Re: ipsecctl setting up multiple SAs
> I can think of several possibilities as to why some negotiations are taking
> more than 60 seconds. For instance:
>
> (1) The Cisco 7301 may be slow to respond. It does have a VAM2+ crypto
> accelerator installed, but I don't know if it's used for isakmp exchanges,
> or just for symmetric encryption/decryption. (However, 'show proc cpu
> history' suggests CPU load is no more than about 25%)
>
> (2) There may be packet loss and retransmissions, maybe due to some network
> buffer overflowing, either on OpenBSD or Cisco.
>
> The OpenBSD box is using a nasty rl0 card, because that's the only spare
> interface I had available to go into the test LAN. Having said that,
> watching with 'top' I don't see the interrupt load go above 10%.
>
> I'm not sure how to probe deeper to get a handle on what's actually
> happening though. Perhaps isakmpd -L logging might shed some light, although
> I don't fancy decoding QM exchanges by hand :-(

As a starting point, I tried capturing packets with both isakmpd -L and
tcpdump at the same time. This is for a config which creates 1000 SAs. Then
I wrote a little Perl script to take the output from tcpdump -r and count
the number of outbound packets (10.1.1.6.500 > 10.1.1.1.500) and inbound
(the reverse) in each second.
Here are the counts from tcpdump:

          (out)  (in)
09:47:37      3     3
09:47:38   1043    43
09:47:39    107   107
09:47:45    878    28
09:47:46    136   136
09:47:47     35    35
09:47:54    671    20
09:47:55    109   109
09:47:56     90    90
09:48:05    445    13
09:48:06     95    95
09:48:07     50    50
09:48:08      1     2
09:48:15      1     2
09:48:18    282     8
09:48:19     78    78
09:48:20     90    90
09:48:21      5     5
09:48:24      2     4
09:48:33     98     5
09:48:34     49    49
09:48:35      2     4
09:48:37     10     5
09:48:38      0     1
09:48:45      0     1
09:48:48      1     2
09:48:50     44     5
09:48:51     34    34
09:48:54      0     2
09:49:03      1     2
09:49:05      0     2
09:49:07      0     3
09:49:18      0     1
09:49:27   1004     0
09:49:33      0     1
TOTAL      5364  1035

And here are the counts from the pcap file generated by isakmpd -L:

09:47:37    192     3
09:47:38    853    43
09:47:39    107   107
09:47:45     28    28
09:47:46    136   136
09:47:47     35    35
09:47:54     20    20
09:47:55    109   109
09:47:56     90    90
09:48:05     13    13
09:48:06     95    95
09:48:07     50    50
09:48:08      1     2
09:48:15      1     2
09:48:18      8     8
09:48:19     78    78
09:48:20     90    90
09:48:21      5     5
09:48:24      2     4
09:48:33      5     5
09:48:34     49    49
09:48:35      2     4
09:48:37     10     5
09:48:38      0     1
09:48:45      0     1
09:48:48      1     2
09:48:50      5     5
09:48:51     34    34
09:48:54      0     2
09:49:03      1     2
09:49:05      0     2
09:49:07      0     3
09:49:18      0     1
09:49:27   1004     0
TOTAL      3024  1034

Now, whilst the numbers of input packets tally almost exactly, strangely the
number of outbound packets recorded by isakmpd -L is much lower than the
actual number of packets seen going out by tcpdump. I can only guess that
packet retransmits are not being recorded by isakmpd -L?

Anyway, looking at the packet capture on the wire, I interpret it as follows:

09:47:37      3     3   >> main mode exchange (3 out, 3 in)
09:47:38   1043    43   >> 1000 QM requests, plus 43 QM responses
                           triggering 43 QM completions
09:47:39    107   107   >> 107 more QM responses triggering 107 QM completions

So 150 QM exchanges have completed. Then nothing more happens until:

09:47:45    878    28

That's 850 QM retransmits, plus 28 QM responses, triggering 28 QM
completions. Looking at this, it seems like either 850 of the first burst of
1000 packets were not delivered, or they arrived at the Cisco and could not
be handled; or they arrived successfully, but the responses didn't arrive.
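(The per-second tallies above came from a small Perl script run over the
`tcpdump -r` text output; the attachment didn't survive, so here is a rough,
hypothetical Python re-sketch of the same idea. The two direction strings
are from this thread; the exact tcpdump line format is an assumption.)

```python
# Count outbound/inbound ISAKMP packets per second from `tcpdump -r` text
# output.  Hypothetical re-sketch of the Perl script mentioned above;
# assumes lines starting "HH:MM:SS.usec" followed by "src.port > dst.port".
from collections import defaultdict

OUT = "10.1.1.6.500 > 10.1.1.1.500"   # outbound direction (from the thread)
IN  = "10.1.1.1.500 > 10.1.1.6.500"   # inbound direction

def count_per_second(lines):
    counts = defaultdict(lambda: [0, 0])   # "HH:MM:SS" -> [out, in]
    for line in lines:
        second = line.split(".", 1)[0]     # strip sub-second timestamp part
        if OUT in line:
            counts[second][0] += 1
        elif IN in line:
            counts[second][1] += 1
    return dict(counts)

if __name__ == "__main__":
    sample = [
        "09:47:37.100000 10.1.1.6.500 > 10.1.1.1.500: isakmp: phase 1 I ident",
        "09:47:37.200000 10.1.1.1.500 > 10.1.1.6.500: isakmp: phase 1 R ident",
        "09:47:38.000000 10.1.1.6.500 > 10.1.1.1.500: isakmp: phase 2/others I oakley-quick",
    ]
    for sec, (o, i) in sorted(count_per_second(sample).items()):
        print(sec, o, i)
```

Feeding it `tcpdump -r capture.pcap` output and the isakmpd -L pcap in turn
would reproduce the two tables above.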
So now I need to establish whether those original 1,000 sent packets were
actually arriving at the Cisco or not, which perhaps careful use of
interface counters might reveal, or else I need to dig out a switch with
port mirroring.

However, I see that isakmpd checks the result of the sendmsg() call, and it
wasn't failing. Also netstat -ni on the OpenBSD side shows no IErrs or OErrs
(although I don't see counters specifically for buffer overflows).

Ah, but show int g0/0 on the Cisco says:

...
Input queue: 1/75/17872/12785 (size/max/drops/flushes); Total output drops: 0
Queueing strategy: fifo
Output queue: 0/40 (size/max)
5 minute input rate 1000 bits/sec, 1 packets/sec
5 minute output r
Re: ipsecctl setting up multiple SAs
On Fri, Nov 24, 2006 at 05:22:05PM +0100, Håkan Olsson wrote:
> 5. the selected SPI (or "larval" SA state) on the local system is
>    updated with the keying material, timeouts etc - i.e. the "real" SA is
>    finalized
>
> This continues until all negotiations are complete -- however there
> is a limit on how long this "larval" SA lives in the kernel... as you
> may guess it's 60 seconds. (The idea being if a negotiation has not
> completed in 60 seconds something has probably failed.)
>
> Since the hosts seem to be a bit slow in running IKE negotiations,
> you hit the 60 second limit before all negotiations are complete, all
> remaining "larval" SAs are dropped and when isakmpd tries to "update"
> them into real SAs this of course fails. ("No such process" approx
> means "no SA found" here.)

Thank you for that very clear description.

Is this 60 second timeout a tunable? Or can you point me to where it's
defined in the kernel? I'd like to try increasing it.

However, at this stage I don't really understand why setting -D 5=99, which
generates copious logs, makes it work. In fact I can get to 3,000 tunnels
(6,000 flows) within a couple of minutes with this flag set. Perhaps the
extra logging delays the starts of some of the negotiations, somehow
spreading the workload. (Maybe a workload-spreading option, so that no more
than N outstanding exchanges are present at once, would be a useful control
anyway.)

> PS
> When I tried between two ~700MHz P-III machines a while back, setting
> up 4096 (or was it 8k) SAs was no problem. Another developer had a
> scenario setting up 40960 SAs over loopback on his laptop -- mainly a
> test of kernel memory usage, but he did not hit the 60s larval-SA
> time limit there either.

I can think of several possibilities as to why some negotiations are taking
more than 60 seconds. For instance:

(1) The Cisco 7301 may be slow to respond. It does have a VAM2+ crypto
accelerator installed, but I don't know if it's used for isakmp exchanges,
or just for symmetric encryption/decryption. (However, 'show proc cpu
history' suggests CPU load is no more than about 25%.)

(2) There may be packet loss and retransmissions, maybe due to some network
buffer overflowing, either on OpenBSD or Cisco. The OpenBSD box is using a
nasty rl0 card, because that's the only spare interface I had available to
go into the test LAN. Having said that, watching with 'top' I don't see the
interrupt load go above 10%.

I'm not sure how to probe deeper to get a handle on what's actually
happening though. Perhaps isakmpd -L logging might shed some light, although
I don't fancy decoding QM exchanges by hand :-(

Regards, Brian.
Re: ipsecctl setting up multiple SAs
On 24 nov 2006, at 13.12, Brian Candler wrote:
> ...
> Time(s)  Num flows
> -------  ---------
> 10       606
> 20       976
> 30       1286
> 40       1384
> 50       1768
> 60       1946
> 70       1946
> ..
>
> And there it stops, never reaching 2000 (in+out). But I find the following
> in /var/log/messages:
>
> Nov 24 11:12:45 gw isakmpd[32720]: pf_key_v2_set_spi: UPDATE: No such process
> Nov 24 11:12:45 gw last message repeated 26 times
>
> 1946 + 27*2 = 2000, so that's where the missing flows have gone. For some
> reason some of them are not known; maybe some earlier messages to the
> kernel were silently dropped?

This particular part of the IKE implementation works something like the
following:

1. start a negotiation (here: start 1000 negotiations simultaneously)

2. each negotiation starts by reserving a SPI value (for every proposal) in
   the kernel; the SPI is a 32-bit number but has to be unique. If you do
   debug logging, you'll see a number of "GETSPI" calls here.

3. isakmpd starts to send out IKE packets to the other host(s)

4. the other side responds, and assuming everything is ok one proposal (per
   negotiation) is selected

5. the selected SPI (or "larval" SA state) on the local system is updated
   with the keying material, timeouts etc - i.e. the "real" SA is finalized

This continues until all negotiations are complete -- however there is a
limit on how long this "larval" SA lives in the kernel... as you may guess
it's 60 seconds. (The idea being if a negotiation has not completed in 60
seconds something has probably failed.)

Since the hosts seem to be a bit slow in running IKE negotiations, you hit
the 60 second limit before all negotiations are complete, all remaining
"larval" SAs are dropped, and when isakmpd tries to "update" them into real
SAs this of course fails. ("No such process" approximately means "no SA
found" here.)

/H

PS
When I tried between two ~700MHz P-III machines a while back, setting up
4096 (or was it 8k) SAs was no problem. Another developer had a scenario
setting up 40960 SAs over loopback on his laptop -- mainly a test of kernel
memory usage, but he did not hit the 60s larval-SA time limit there either.
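(A toy Python sketch of the larval-SA expiry described above. The 60-second
lifetime, the 1946 flows and the 27 "No such process" errors come from this
thread; the completion-time distribution is made up for illustration.)

```python
# Toy model of the larval-SA timeout: a SPI is reserved when the negotiation
# starts, and the kernel drops it if the SA is not updated ("finalized")
# within the lifetime.  Any negotiation finishing later then fails with
# "pf_key_v2_set_spi: UPDATE: No such process".
LARVAL_LIFETIME = 60  # seconds (net.inet.ip.ipsec-invalid-life)

def failed_updates(completion_times, lifetime=LARVAL_LIFETIME):
    """Count negotiations whose QM exchange finishes after expiry."""
    return sum(1 for t in completion_times if t > lifetime)

# Hypothetical schedule: 973 of 1000 negotiations complete within a minute,
# the last 27 afterwards -- matching the thread's numbers
# (973 SAs * 2 flows = 1946 flows, plus 27 failed updates = 2000).
completion_times = [60 * i / 1000 for i in range(973)] + \
                   [61 + i for i in range(27)]
print(failed_updates(completion_times))   # -> 27
```

Raising the sysctl (or spreading the negotiation workload) moves the
lifetime past the slowest completion, which is why the count drops to zero.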
Re: ipsecctl setting up multiple SAs
Hans-Joerg Hoexer wrote:
> more correct diff:

Cool. It occurs to me that the protocol ought to be included as well though,
e.g.

  [IPsec-10.1.1.6:1-10.1.1.1:1701-17]

That's because (in theory) you might have one SA for UDP and another SA for
TCP. Other possibilities would be:

  [IPsec-10.1.1.6-10.1.1.1-17] or
  [IPsec-10.1.1.6:0-10.1.1.1:0-17]   # protocol specified but ports not specified

  [IPsec-10.1.1.6-10.1.1.1] or
  [IPsec-10.1.1.6:0-10.1.1.1:0-0]    # no protocol specified

Regards, Brian.
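(For illustration, a hypothetical little Python helper, not ipsecctl code,
showing the proposed naming scheme with the protocol number appended and
port/protocol 0 meaning "unspecified":)

```python
# Sketch of the proposed section-naming scheme: IPsec-src:sport-dst:dport,
# with "-proto" appended when a protocol is specified (hypothetical helper).
def ipsec_section(src, sport, dst, dport, proto=0):
    name = "IPsec-%s:%d-%s:%d" % (src, sport, dst, dport)
    if proto:
        name += "-%d" % proto   # e.g. 17 = UDP, 6 = TCP
    return name

print(ipsec_section("10.1.1.6", 1, "10.1.1.1", 1701, 17))
# -> IPsec-10.1.1.6:1-10.1.1.1:1701-17
```

With the protocol in the name, a UDP SA and a TCP SA between the same
address/port pairs get distinct sections instead of colliding.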
Re: ipsecctl setting up multiple SAs
On Fri, Nov 24, 2006 at 10:22:26AM +, Brian Candler wrote:
> To answer my own question: inspired by the output of ipsecctl, I wrote a
> perl program (attached) to generate a suitable isakmpd.conf (also
> attached), and this appears to work just fine.

And now I seem to have hit some sort of scalability problem. Generating
1,000 transport mode SAs, and monitoring them with

  ipsecctl -s flow | wc -l

gives the following after isakmpd has been started:

Time(s)  Num flows
-------  ---------
10       606
20       976
30       1286
40       1384
50       1768
60       1946
70       1946
..

And there it stops, never reaching 2000 (in+out). But I find the following
in /var/log/messages:

Nov 24 11:12:45 gw isakmpd[32720]: pf_key_v2_set_spi: UPDATE: No such process
Nov 24 11:12:45 gw last message repeated 26 times

1946 + 27*2 = 2000, so that's where the missing flows have gone. For some
reason some of them are not known; maybe some earlier messages to the kernel
were silently dropped?

A bit more background:

* The OpenBSD machine is an HP/Compaq desktop, single 2.8GHz processor,
  512MB, rl0 interface
* The Cisco is a 7301 with VAM2+ crypto accelerator. It barely breaks a
  sweat (peak CPU usage around 25% with all these SAs coming in)
* Connected via a cheap 100M switch

I'm using isakmpd.conf as generated by the Perl script posted before,
setting up separate SAs for UDP ports 1 to 10999 inclusive. I've also added

[General]
Exchange-max-time=180
Retransmits=10

at the top.

OK, so next I tried

# isakmpd -c /etc/isakmpd/isakmpd.conf.1000 -K -4 -v -d -D 5=99 >log.out 2>&1

but that actually made the problem go away - all 2000 flows were set up
correctly :-( I think that the extra work of writing debug info slowed it
down sufficiently that whatever was overflowing before is not overflowing
now. About 16MB of logs were generated.

Next I tried less debugging, with

# isakmpd -c /etc/isakmpd/isakmpd.conf.1000 -K -4 -v -d -D 5=50 >log.out3 2>&1

With this the number of flows maxed out at 1840. The logs include things
like:

...
113517.306433 Sdep 50 pf_key_v2_get_spi: spi:
113517.306449 Sdep 50 856af6c7
...
113630.081939 Sdep 40 pf_key_v2_convert_id: IPv4 address 10.1.1.6/32
113630.081951 Sdep 40 pf_key_v2_convert_id: IPv4 address 10.1.1.1/32
113630.081966 Sdep 10 pf_key_v2_set_spi: satype 2 dst 10.1.1.6 SPI 0x856af6c7
113630.082048 Default pf_key_v2_set_spi: UPDATE: No such process

I can upload this whole log file if anyone wants to see it (~1MB
uncompressed). But perhaps an IPSEC guru can suggest a better way to pin
this down.

Finally, I thought I'd give it a go with 10,000 SAs, which is the sort of
scale I wanted to test the Cisco with anyway.

# isakmpd -c /etc/isakmpd/isakmpd.conf.1 -K -4 -v -d -D 5=50 >log.out4 2>&1

It takes a few minutes for isakmpd to get going (although the OpenBSD box
remains responsive throughout). Its size grows to 133MB, after which it
starts to shrink, and then grow again. The number of flows is low: after 5
minutes it shows

# ipsecctl -s flow | wc -l
492
# grep "No such process" log.out4 | wc -l
33

After 10 minutes:

# ipsecctl -s flow | wc -l
2568

After 20 minutes:

# ipsecctl -s flow | wc -l
2992

The machine isn't swapping, and remains responsive although isakmpd is using
100% CPU. But the rate of successful SA setups is much lower than it was
with 1,000.

Anyway, I think OpenBSD acquits itself pretty well, and I'm not too worried
about it being able to set up 10,000 SAs, but with 1,000 SAs I think it
would be worth trying to nail down the pf_key UPDATE problem.

Regards, Brian Candler.
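(Rather than reading a ~1MB debug log by hand, a small script can tally the
failed pf_key UPDATE calls and collect the SPIs they mention. A hypothetical
Python sketch, with the regex matched against the log excerpts above:)

```python
import re

# Scan `isakmpd -D 5=50` debug output: collect SPIs from pf_key_v2_set_spi
# lines and count "UPDATE: No such process" failures.  Hypothetical helper;
# the line formats are taken from the excerpts quoted in this thread.
SPI_RE = re.compile(r"pf_key_v2_set_spi: satype \d+ dst \S+ SPI (0x[0-9a-f]+)")

def scan_log(lines):
    spis, failures = [], 0
    for line in lines:
        m = SPI_RE.search(line)
        if m:
            spis.append(m.group(1))
        if "UPDATE: No such process" in line:
            failures += 1
    return spis, failures
```

Cross-referencing the failing SPIs against their earlier GETSPI timestamps
would show how long each larval SA sat waiting before the update failed.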
Re: ipsecctl setting up multiple SAs
more correct diff:

Index: ike.c
===
RCS file: /cvs/src/sbin/ipsecctl/ike.c,v
retrieving revision 1.54
diff -u -p -r1.54 ike.c
--- ike.c	24 Nov 2006 08:07:18 -	1.54
+++ ike.c	24 Nov 2006 10:46:19 -
@@ -38,17 +38,18 @@ static void ike_section_peer(struct ipse
 static void	ike_section_ids(struct ipsec_addr_wrap *, struct
		    ipsec_auth *, FILE *, u_int8_t);
 static int	ike_get_id_type(char *);
-static void	ike_section_ipsec(struct ipsec_addr_wrap *, struct
-		    ipsec_addr_wrap *, struct ipsec_addr_wrap *, FILE *);
+static void	ike_section_ipsec(struct ipsec_addr_wrap *, u_int16_t, struct
+		    ipsec_addr_wrap *, u_int16_t, struct ipsec_addr_wrap *,
+		    char *, FILE *);
 static int	ike_section_p1(struct ipsec_addr_wrap *, struct
		    ipsec_transforms *, FILE *, struct ike_auth *, u_int8_t);
-static int	ike_section_p2(struct ipsec_addr_wrap *, struct
-		    ipsec_addr_wrap *, u_int8_t, u_int8_t, struct
+static int	ike_section_p2(struct ipsec_addr_wrap *, u_int16_t, struct
+		    ipsec_addr_wrap *, u_int16_t, u_int8_t, u_int8_t, struct
		    ipsec_transforms *, FILE *, u_int8_t);
 static void	ike_section_p2ids(u_int8_t, struct ipsec_addr_wrap *,
		    u_int16_t, struct ipsec_addr_wrap *, u_int16_t, FILE *);
-static int	ike_connect(u_int8_t, struct ipsec_addr_wrap *, struct
-		    ipsec_addr_wrap *, FILE *);
+static int	ike_connect(u_int8_t, struct ipsec_addr_wrap *, u_int16_t,
+		    struct ipsec_addr_wrap *, u_int16_t, FILE *);
 static int	ike_gen_config(struct ipsec_rule *, FILE *);
 static int	ike_delete_config(struct ipsec_rule *, FILE *);
@@ -174,33 +175,45 @@ ike_get_id_type(char *string)
 }
 
 static void
-ike_section_ipsec(struct ipsec_addr_wrap *src, struct ipsec_addr_wrap *dst,
-    struct ipsec_addr_wrap *peer, FILE *fd)
+ike_section_ipsec(struct ipsec_addr_wrap *src, u_int16_t sport,
+    struct ipsec_addr_wrap *dst, u_int16_t dport, struct ipsec_addr_wrap *peer,
+    char *tag, FILE *fd)
 {
-	fprintf(fd, SET "[IPsec-%s-%s]:Phase=2 force\n", src->name, dst->name);
+	char	*p;
+
+	if (asprintf(&p, "%s:%d-%s:%d", src->name, ntohs(sport), dst->name,
+	    ntohs(dport)) == -1)
+		err(1, "ike_section_ipsec");
+
+	fprintf(fd, SET "[IPsec-%s]:Phase=2 force\n", p);
 
 	if (peer)
-		fprintf(fd, SET "[IPsec-%s-%s]:ISAKMP-peer=peer-%s force\n",
-		    src->name, dst->name, peer->name);
+		fprintf(fd, SET "[IPsec-%s]:ISAKMP-peer=peer-%s force\n", p,
+		    peer->name);
 	else
 		fprintf(fd, SET
-		    "[IPsec-%s-%s]:ISAKMP-peer=peer-default force\n",
-		    src->name, dst->name);
+		    "[IPsec-%s]:ISAKMP-peer=peer-default force\n", p);
 
-	fprintf(fd, SET "[IPsec-%s-%s]:Configuration=qm-%s-%s force\n",
-	    src->name, dst->name, src->name, dst->name);
-	fprintf(fd, SET "[IPsec-%s-%s]:Local-ID=lid-%s force\n", src->name,
-	    dst->name, src->name);
-	fprintf(fd, SET "[IPsec-%s-%s]:Remote-ID=rid-%s force\n", src->name,
-	    dst->name, dst->name);
+	fprintf(fd, SET "[IPsec-%s]:Configuration=qm-%s force\n", p, p);
+	fprintf(fd, SET "[IPsec-%s]:Local-ID=lid-%s force\n", p, src->name);
+	fprintf(fd, SET "[IPsec-%s]:Remote-ID=rid-%s force\n", p, dst->name);
+
+	if (tag)
+		fprintf(fd, SET "[IPsec-%s]:PF-Tag=%s force\n", p, tag);
+
+	free(p);
 }
 
 static int
-ike_section_p2(struct ipsec_addr_wrap *src, struct ipsec_addr_wrap *dst,
-    u_int8_t satype, u_int8_t tmode, struct ipsec_transforms *qmxfs, FILE *fd,
-    u_int8_t ike_exch)
-{
-	char *tag, *exchange_type, *sprefix;
+ike_section_p2(struct ipsec_addr_wrap *src, u_int16_t sport,
+    struct ipsec_addr_wrap *dst, u_int16_t dport, u_int8_t satype,
+    u_int8_t tmode, struct ipsec_transforms *qmxfs, FILE *fd, u_int8_t ike_exch)
+{
+	char	*p, *tag, *exchange_type, *sprefix;
+
+	if (asprintf(&p, "%s:%d-%s:%d", src->name, ntohs(sport), dst->name,
+	    ntohs(dport)) == -1)
+		err(1, "ike_section_p2");
 
 	switch (ike_exch) {
 	case IKE_QM:
@@ -213,10 +226,9 @@ ike_section_p2(struct ipsec_addr_wrap *s
 		return (-1);
 	}
 
-	fprintf(fd, SET "[%s-%s-%s]:EXCHANGE_TYPE=%s force\n",
-	    tag, src->name, dst->name, exchange_type);
-	fprintf(fd, SET "[%s-%s-%s]:Suites=%s-", tag, src->name,
-	    dst->name, sprefix);
+	fprintf(fd, SET "[%s-%s]:EXCHANGE_TYPE=%s force\n", tag, p,
+	    exchange_type);
+	fprintf(fd, SET "[%s-%s]:Suites=%s-", tag, p, sprefix);
 
 	switch (satype) {
 	case IPSEC_ESP:
@@ -339,6 +354,8 @@ ike_sectio
Re: ipsecctl setting up multiple SAs
Hi,

On Fri, Nov 24, 2006 at 09:45:45AM +, Brian Candler wrote:
> I'm trying to set up multiple transport mode SAs between an OpenBSD 4.0 box
> and a Cisco 7301 running IOS [ultimate reason is to load test multiple L2TP
> over IPSEC tunnels].
>
> Each SA is between the same two IP endpoints but specifies a different UDP
> port pair.
>
> I was able to get a single SA up using ipsecctl, after making this small fix:
>
> --- sbin/ipsecctl/ike.c.orig	Thu Nov 23 22:48:23 2006
> +++ sbin/ipsecctl/ike.c	Thu Nov 23 22:48:37 2006
> @@ -526,7 +526,7 @@
>  		fprintf(fd, SET "[lid-%s]:Port=%d force\n", src->name,
>  		    ntohs(sport));
>  	if (dport)
> -		fprintf(fd, SET "[rid-%s]:Port=%d force\n", src->name,
> +		fprintf(fd, SET "[rid-%s]:Port=%d force\n", dst->name,
>  		    ntohs(dport));
>  }

This has already been committed, thanks!

Could you please try the diff below? It's just a quick hack but might solve
that problem.

HJ.

Index: ike.c
===
RCS file: /cvs/src/sbin/ipsecctl/ike.c,v
retrieving revision 1.54
diff -u -p -r1.54 ike.c
--- ike.c	24 Nov 2006 08:07:18 -	1.54
+++ ike.c	24 Nov 2006 10:28:33 -
@@ -38,12 +38,13 @@ static void ike_section_peer(struct ipse
 static void	ike_section_ids(struct ipsec_addr_wrap *, struct
		    ipsec_auth *, FILE *, u_int8_t);
 static int	ike_get_id_type(char *);
-static void	ike_section_ipsec(struct ipsec_addr_wrap *, struct
-		    ipsec_addr_wrap *, struct ipsec_addr_wrap *, FILE *);
+static void	ike_section_ipsec(struct ipsec_addr_wrap *, u_int16_t, struct
+		    ipsec_addr_wrap *, u_int16_t, struct ipsec_addr_wrap *,
+		    char *, FILE *);
 static int	ike_section_p1(struct ipsec_addr_wrap *, struct
		    ipsec_transforms *, FILE *, struct ike_auth *, u_int8_t);
-static int	ike_section_p2(struct ipsec_addr_wrap *, struct
-		    ipsec_addr_wrap *, u_int8_t, u_int8_t, struct
+static int	ike_section_p2(struct ipsec_addr_wrap *, u_int16_t, struct
+		    ipsec_addr_wrap *, u_int16_t, u_int8_t, u_int8_t, struct
		    ipsec_transforms *, FILE *, u_int8_t);
 static void	ike_section_p2ids(u_int8_t, struct ipsec_addr_wrap *,
		    u_int16_t, struct ipsec_addr_wrap *, u_int16_t, FILE *);
@@ -174,33 +175,45 @@ ike_get_id_type(char *string)
 }
 
 static void
-ike_section_ipsec(struct ipsec_addr_wrap *src, struct ipsec_addr_wrap *dst,
-    struct ipsec_addr_wrap *peer, FILE *fd)
+ike_section_ipsec(struct ipsec_addr_wrap *src, u_int16_t sport,
+    struct ipsec_addr_wrap *dst, u_int16_t dport, struct ipsec_addr_wrap *peer,
+    char *tag, FILE *fd)
 {
-	fprintf(fd, SET "[IPsec-%s-%s]:Phase=2 force\n", src->name, dst->name);
+	char	*p;
+
+	if (asprintf(&p, "%s:%d-%s:%d", src->name, ntohs(sport), dst->name,
+	    ntohs(dport)) == -1)
+		err(1, "ike_section_ipsec");
+
+	fprintf(fd, SET "[IPsec-%s]:Phase=2 force\n", p);
 
 	if (peer)
-		fprintf(fd, SET "[IPsec-%s-%s]:ISAKMP-peer=peer-%s force\n",
-		    src->name, dst->name, peer->name);
+		fprintf(fd, SET "[IPsec-%s]:ISAKMP-peer=peer-%s force\n", p,
+		    peer->name);
 	else
 		fprintf(fd, SET
-		    "[IPsec-%s-%s]:ISAKMP-peer=peer-default force\n",
-		    src->name, dst->name);
+		    "[IPsec-%s]:ISAKMP-peer=peer-default force\n", p);
+
+	fprintf(fd, SET "[IPsec-%s]:Configuration=qm-%s force\n", p, p);
+	fprintf(fd, SET "[IPsec-%s]:Local-ID=lid-%s force\n", p, src->name);
+	fprintf(fd, SET "[IPsec-%s]:Remote-ID=rid-%s force\n", p, dst->name);
 
-	fprintf(fd, SET "[IPsec-%s-%s]:Configuration=qm-%s-%s force\n",
-	    src->name, dst->name, src->name, dst->name);
-	fprintf(fd, SET "[IPsec-%s-%s]:Local-ID=lid-%s force\n", src->name,
-	    dst->name, src->name);
-	fprintf(fd, SET "[IPsec-%s-%s]:Remote-ID=rid-%s force\n", src->name,
-	    dst->name, dst->name);
+	if (tag)
+		fprintf(fd, SET "[IPsec-%s]:PF-Tag=%s force\n", p, tag);
+
+	free(p);
 }
 
 static int
-ike_section_p2(struct ipsec_addr_wrap *src, struct ipsec_addr_wrap *dst,
-    u_int8_t satype, u_int8_t tmode, struct ipsec_transforms *qmxfs, FILE *fd,
-    u_int8_t ike_exch)
+ike_section_p2(struct ipsec_addr_wrap *src, u_int16_t sport,
+    struct ipsec_addr_wrap *dst, u_int16_t dport, u_int8_t satype,
+    u_int8_t tmode, struct ipsec_transforms *qmxfs, FILE *fd, u_int8_t ike_exch)
 {
-	char *tag, *exchange_type, *sprefix;
+	char	*p, *tag, *exchange_type, *sprefix;
+
+	if (asprintf(&p, "%s:%d-%s:%d", src->name, ntohs(sport), dst->name,
+	    ntohs(dport)) == -1)
+		err(1, "ike_section_p2");
Re: ipsecctl setting up multiple SAs
On Fri, Nov 24, 2006 at 09:45:45AM +, Brian Candler wrote:
> Looking at this, it seems that the last entry in /etc/ipsec.conf has taken
> precedence over the others.
>
> Is there a way to achieve what I'm trying to do, either using ipsecctl, or
> manually configuring isakmpd?

To answer my own question: inspired by the output of ipsecctl, I wrote a
perl program (attached) to generate a suitable isakmpd.conf (also attached),
and this appears to work just fine.

It would be nice if ipsecctl could do this too. It could easily generate the
lid-addr-port and rid-addr-port sections; the only slightly awkward part is
having to generate the Connections list, i.e.

[phase 2]
Connections=IPsec-addr-port-addr-port,IPsec-addr-port-addr-port,...

Regards, Brian.

[demime 1.01d removed an attachment of type text/x-perl]

[Phase 1]
10.1.1.1=peer-10.1.1.1

[peer-10.1.1.1]
Phase=1
Address=10.1.1.1
Authentication=mypresharedkey
Configuration=mm-10.1.1.1

[mm-10.1.1.1]
EXCHANGE_TYPE=ID_PROT
Transforms=3DES-MD5-GRP2

[qm-10.1.1.6-10.1.1.1]
EXCHANGE_TYPE=QUICK_MODE
Suites=QM-ESP-TRP-3DES-MD5-SUITE

[Phase 2]
Connections=\
	IPsec-10.1.1.6-1-10.1.1.1-1701,\
	IPsec-10.1.1.6-10001-10.1.1.1-1701,\
	IPsec-10.1.1.6-10002-10.1.1.1-1701,\
	IPsec-10.1.1.6-10003-10.1.1.1-1701

[IPsec-10.1.1.6-1-10.1.1.1-1701]
Phase=2
ISAKMP-peer=peer-10.1.1.1
Configuration=qm-10.1.1.6-10.1.1.1
Local-ID=lid-10.1.1.6-1
Remote-ID=rid-10.1.1.1-1701

[IPsec-10.1.1.6-10001-10.1.1.1-1701]
Phase=2
ISAKMP-peer=peer-10.1.1.1
Configuration=qm-10.1.1.6-10.1.1.1
Local-ID=lid-10.1.1.6-10001
Remote-ID=rid-10.1.1.1-1701

[IPsec-10.1.1.6-10002-10.1.1.1-1701]
Phase=2
ISAKMP-peer=peer-10.1.1.1
Configuration=qm-10.1.1.6-10.1.1.1
Local-ID=lid-10.1.1.6-10002
Remote-ID=rid-10.1.1.1-1701

[IPsec-10.1.1.6-10003-10.1.1.1-1701]
Phase=2
ISAKMP-peer=peer-10.1.1.1
Configuration=qm-10.1.1.6-10.1.1.1
Local-ID=lid-10.1.1.6-10003
Remote-ID=rid-10.1.1.1-1701

[lid-10.1.1.6-1]
ID-type=IPV4_ADDR
Address=10.1.1.6
Protocol=17
Port=1

[lid-10.1.1.6-10001]
ID-type=IPV4_ADDR
Address=10.1.1.6
Protocol=17
Port=10001

[lid-10.1.1.6-10002]
ID-type=IPV4_ADDR
Address=10.1.1.6
Protocol=17
Port=10002

[lid-10.1.1.6-10003]
ID-type=IPV4_ADDR
Address=10.1.1.6
Protocol=17
Port=10003

[rid-10.1.1.1-1701]
ID-type=IPV4_ADDR
Address=10.1.1.1
Protocol=17
Port=1701
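(The Perl generator itself was stripped by demime; here is a hypothetical
Python re-sketch of the idea, with section names and fields copied from the
sample config above. The function name and its arguments are illustrative,
not from the original script.)

```python
# Emit an isakmpd.conf like the sample above: one quick-mode connection per
# local UDP source port, sharing one Phase 1 peer and one qm suite section.
def gen_isakmpd_conf(local, peer, rport, lports, psk="mypresharedkey"):
    conns = ["IPsec-%s-%d-%s-%d" % (local, p, peer, rport) for p in lports]
    out = [
        "[Phase 1]", "%s=peer-%s" % (peer, peer), "",
        "[peer-%s]" % peer, "Phase=1", "Address=%s" % peer,
        "Authentication=%s" % psk, "Configuration=mm-%s" % peer, "",
        "[mm-%s]" % peer, "EXCHANGE_TYPE=ID_PROT", "Transforms=3DES-MD5-GRP2", "",
        "[qm-%s-%s]" % (local, peer), "EXCHANGE_TYPE=QUICK_MODE",
        "Suites=QM-ESP-TRP-3DES-MD5-SUITE", "",
        # the "slightly awkward part": the Connections list
        "[Phase 2]", "Connections=" + ",".join(conns), "",
    ]
    for p, conn in zip(lports, conns):
        out += ["[%s]" % conn, "Phase=2", "ISAKMP-peer=peer-%s" % peer,
                "Configuration=qm-%s-%s" % (local, peer),
                "Local-ID=lid-%s-%d" % (local, p),
                "Remote-ID=rid-%s-%d" % (peer, rport), ""]
    for p in lports:                       # one lid section per local port
        out += ["[lid-%s-%d]" % (local, p), "ID-type=IPV4_ADDR",
                "Address=%s" % local, "Protocol=17", "Port=%d" % p, ""]
    out += ["[rid-%s-%d]" % (peer, rport), "ID-type=IPV4_ADDR",  # shared rid
            "Address=%s" % peer, "Protocol=17", "Port=%d" % rport]
    return "\n".join(out)

print(gen_isakmpd_conf("10.1.1.6", "10.1.1.1", 1701, [1, 10001, 10002, 10003]))
```

Scaling the port list to a thousand entries reproduces the 1,000-SA test
configuration discussed earlier in the thread.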