Re: LibreSSL 2.0.1 released - installation extra_mode
On OpenBSD, FreeBSD, Debian, and Ubuntu, setting a library executable means you can run it directly, and since ./libressl.so won't work, it shouldn't be 755. Ten minutes of research reveals that Red Hat sets the execute bit on all shared libraries, and while its ldd script complains if it's not set, there are numerous unanswered questions from their community as to why this is required. Solaris seems to set the execute bit, but I couldn't find any mention of why (in 10 minutes or less). IF http://unix.stackexchange.com/questions/40587/why-are-shared-libraries-executable is true, then HPUX requires it. Perhaps Solaris does something similar? On 64 bit hardware OpenVMS adds the execute bits of mmaped files to the entropy file. In any case, it would seem using 644 is the safest, if not most portable, thing to do, since 755 might mean something to the OS which LibreSSL doesn't know about.

On Mon, Jul 14, 2014 at 1:37 PM, Vadim Zhukov <persg...@gmail.com> wrote:
> 2014-07-14 21:30 GMT+02:00 Jan Engelhardt <jeng...@inai.de>:
>> On Monday 2014-07-14 20:34, Bob Beck wrote:
>>> What problem are you trying to solve here?
>>
>> Pristine libtool does not pass -m 644, and default (GNU) install
>> defaults to mode 755 when not specifying anything else. I am trying
>> to figure out why OpenBSD would be patching libtool and adding
>>
>> +case $install_prog in
>> +*[\\\ /]cp\ *) extra_mode=;;
>> +*) extra_mode='-m 644';;
>> +esac
>>
>> (in http://www.openbsd.org/cgi-bin/cvsweb/ports/devel/libtool/patches/patch-libltdl_config_ltmain_sh?rev=1.2;content-type=text/plain )
>
> The real question you should ask is why other OSes have the executable
> bit set for shared libraries:
> http://unix.stackexchange.com/questions/40587/why-are-shared-libraries-executable
>
> --
> WBR, Vadim Zhukov
iked dynamic address configuration
This is a new and improved patch to add dynamic address allocation support to iked. It includes an update to the man page and a simpler implementation than my previous work.

- An address pool is globally defined as "pool name cidr_block", and referenced from a policy with "pool name". (If this patch finally gets traction, I'd like to make setting up a typical roadwarrior vpn even simpler by adding a feature that creates a pool mirroring the policy's first flow's "to" range.)
- A request for a specific address that is available in the pool will be honoured (ie a Win7 static IP).
- There is a hard limit of 65536 addresses (8kb) per pool, which should be plenty, and makes the code simpler.
- Configuration changes to live address pools work. Allocated addresses stay allocated so long as they fall inside the updated range, but not if they fall into the range of some other pool.
- "ikectl reset all" will drop all pools. If there is a use case for it, I can add a "reset pool" to drop just the pools.

Testing has thus far been limited to Windows 7.
--Ryan Slack

Index: config.c
===================================================================
RCS file: /cvs/src/sbin/iked/config.c,v
retrieving revision 1.22
diff -u -p -r1.22 config.c
--- config.c	28 Nov 2013 20:28:34 -0000	1.22
+++ config.c	30 Apr 2014 02:57:33 -0000
@@ -22,6 +22,8 @@
 #include <sys/socket.h>
 #include <sys/uio.h>
 
+#include <netinet/in.h>
+
 #include <stdlib.h>
 #include <stdio.h>
 #include <unistd.h>
@@ -89,6 +91,10 @@ config_free_sa(struct iked *env, struct 
 	if (sa->sa_policy) {
 		(void)RB_REMOVE(iked_sapeers, &sa->sa_policy->pol_sapeers, sa);
 		policy_unref(env, sa->sa_policy);
+
+		if (sa->sa_policy->pol_addr_pool[0] != '\0')
+			addr_pool_release(env, sa->sa_policy->pol_addr_pool,
+			    &sa->sa_cfg_peer.cfg.address.addr);
 	}
 
 	ikev2_msg_flushqueue(env, &sa->sa_requests);
@@ -376,6 +382,87 @@ config_new_user(struct iked *env, struct 
 	return (usr);
 }
 
+struct iked_addr_pool *
+config_new_addr_pool(struct iked *env, struct iked_addr_pool *new)
+{
+	struct iked_addr_pool	*pool, *old;
+	struct iked_sa		*sa;
+	int			 i;
+	struct sockaddr_storage	*ss_addr;
+	struct in_addr		*a4;
+	struct in6_addr		*a6;
+
+	if (!(new->pool_size && new->pool_size <= IKED_ADDR_POOL_SIZE_MAX &&
+	    new->pool_name[0] && new->pool_af)) {
+		log_warnx("%s: required values missing", __func__);
+		return (NULL);
+	}
+
+	if ((pool = calloc(1, sizeof(*pool))) == NULL)
+		return (NULL);
+
+	memcpy(pool, new, sizeof(*pool));
+
+	if ((pool->pool_used = bit_alloc(pool->pool_size)) == NULL)
+		goto fail;
+	bit_set(pool->pool_used, 0);
+
+	if ((old = RB_INSERT(iked_addr_pools, &env->sc_pools, pool)) != NULL) {
+		log_debug("%s: updating address pool '%s'", __func__,
+		    pool->pool_name);
+
+		if (old->pool_size)
+			free(old->pool_used);
+		free(old);
+	} else {
+		log_debug("%s: inserting new address pool '%s'", __func__,
+		    pool->pool_name);
+	}
+
+	RB_FOREACH(sa, iked_sas, &env->sc_sas) {
+		ss_addr = &sa->sa_cfg_peer.cfg.address.addr;
+		if (ss_addr->ss_family != pool->pool_af ||
+		    sa->sa_cfg_peer.cfg_action != IKEV2_CP_REPLY ||
+		    strcmp(sa->sa_policy->pol_addr_pool, pool->pool_name))
+			continue;
+
+		switch (pool->pool_af) {
+		case AF_INET:
+			a4 = &((struct sockaddr_in *)ss_addr)->sin_addr;
+			i = betoh32(a4->s_addr) - betoh32(pool->pool_start[3]);
+			break;
+		case AF_INET6:
+			a6 = &((struct sockaddr_in6 *)ss_addr)->sin6_addr;
+			if (memcmp(a6->s6_addr, pool->pool_start, 12))
+				i = -1;
+			else
+				i = betoh32(a6->s6_addr[3]) -
+				    betoh32(pool->pool_start[3]);
+			break;
+		default:
+			log_warnx("%s: invalid address family", __func__);
+			goto fail;
+		}
+
+		if (0 <= i && (uint32_t)i < pool->pool_size)
+			bit_set(pool->pool_used, i);
+	}
+
+	return (pool);
+ fail:
+	if (pool->pool_size)
+		free(pool->pool_used);
+	free(pool);
+	return (NULL);
+}
+
+int
+config_free_addr_pool(struct iked *env, struct iked_addr_pool *pool)
+{
+	RB_REMOVE(iked_addr_pools, &env->sc_pools, pool);
+	if (pool->pool_size)
+		free(pool->pool_used);
+	free(pool);
+	return (0);
+}
+
 /*
  * Inter-process communication of configuration items.
  */
@@ -442,6 +529,7 @@ config_getreset(struct iked *env, struct 
 	struct iked_policy	*pol, *nextpol;
 	struct iked_sa		*sa, *nextsa;
 	struct iked_user	*usr, *nextusr;
+	struct iked_addr_pool	*pool, *nextpool;
 	u_int			 mode;
 
 	IMSG_SIZE_CHECK(imsg, &mode);
@@ -475,6 +563,16 @@ config_getreset(struct iked *env, struct 
 		}
 	}
 
+	if (mode == RESET_ALL || mode == RESET_ADDR_POOL) {
+		log_debug("%s: flushing address pools", __func__);
+		for (pool = RB_MIN(iked_addr_pools, &env->sc_pools);
+		    pool != NULL; pool = nextpool) {
+			nextpool = RB_NEXT(iked_addr_pools, &env->sc_pools,
+			    pool);
+			config_free_addr_pool(env, pool);
+		}
+	}
+
 	return (0);
 }
@@ -577,6 +675,35 @@ config_getuser(struct iked *env, struct 
 		return (-1);
 	print_user(usr);
+
+	return (0);
+}
+
+int
+config_setaddrpool(struct iked *env, struct iked_addr_pool *pool,
+    enum privsep_procid id)
+{
+	if (env->sc_opts & IKED_OPT_NOACTION
Re: udp route-to without to clause
On Mon, Jun 17, 2013 at 3:22 PM, Ryan Slack <r...@evine.ca> wrote:
> Hosting a voip server behind OpenBSD with the following pf.conf led to
> some surprising behaviour:
>
> voice_if = "em0"
> data_if = "vr0"
> ext_if = "vr3"
> PBX = "192.168.234.200"
> voip_ports = "1:4"
>
> table <remote_phones> persist { }
>
> match out on $ext_if from { $voice_if:network, $data_if:network } \
>     to any nat-to $ext_if static-port
> pass out allow-opts flags S/SA modulate state
> pass in proto udp on $ext_if from <remote_phones> \
>     port { sip, $voip_ports } rdr-to $PBX
>
> Notice the last rule does NOT include a "to" clause, as seen in the
> pools FAQ, http://www.openbsd.org/faq/pf/pools.html. The surprise was
> when udp traffic on ports 1:4 was not coming through and tcpdump on
> $ext_if showed icmp port unreachable being sent back. Adding "to
> $ext_if" to the last rule fixed it immediately:
>
> pass in proto udp on $ext_if from <remote_phones> \
>     to $ext_if port { sip, $voip_ports } rdr-to $PBX
>
> If this is by design, please explain! If the "to" clause is always
> required with rdr-to, then the man page should be updated, the parser
> should throw an error, and perhaps the pools FAQ updated (possibly by
> me).
>
> --Ryan Slack

Sigh. As per usual with pf, the problem was in the chair, not in the computer (the port was applied to the "from" address). Sorry for the noise.

--Ryan Slack
udp route-to without to clause
Hosting a voip server behind OpenBSD with the following pf.conf led to some surprising behaviour:

voice_if = "em0"
data_if = "vr0"
ext_if = "vr3"
PBX = "192.168.234.200"
voip_ports = "1:4"

table <remote_phones> persist { }

match out on $ext_if from { $voice_if:network, $data_if:network } \
    to any nat-to $ext_if static-port
pass out allow-opts flags S/SA modulate state
pass in proto udp on $ext_if from <remote_phones> \
    port { sip, $voip_ports } rdr-to $PBX

Notice the last rule does NOT include a "to" clause, as seen in the pools FAQ, http://www.openbsd.org/faq/pf/pools.html. The surprise was when udp traffic on ports 1:4 was not coming through and tcpdump on $ext_if showed icmp port unreachable being sent back. Adding "to $ext_if" to the last rule fixed it immediately:

pass in proto udp on $ext_if from <remote_phones> \
    to $ext_if port { sip, $voip_ports } rdr-to $PBX

If this is by design, please explain! If the "to" clause is always required with rdr-to, then the man page should be updated, the parser should throw an error, and perhaps the pools FAQ updated (possibly by me).

--Ryan Slack
[PATCH] iked address pools
The following provides address pools for iked. It's nothing fancy, but it seems to work, at least in the cursory testing done against the Windows 7 client. Each policy gets its own pool, configured by "config address start_ip - end_ip". There is a hard limit of 65536 addresses (8kb) per pool, which should be plenty. There is NO IPv6 support, partly because I'm not really sure how or why it would be needed. A request for a specific IP that is available in the pool will be honoured. Comments please!

--Ryan Slack

Index: addr_pool.c
===================================================================
RCS file: addr_pool.c
diff -N addr_pool.c
--- /dev/null	1 Jan 1970 00:00:00 -0000
+++ addr_pool.c	7 Jun 2013 03:05:46 -0000
@@ -0,0 +1,188 @@
+
+#include <sys/socket.h>
+
+#include <netinet/in.h>
+
+#include <stdlib.h>
+#include <string.h>
+#include <errno.h>
+#include <err.h>
+
+#include "iked.h"
+
+int
+addr_pool_init_mask(struct addr_pool *pool,
+    struct sockaddr_storage *address, u_int8_t prefixlengh)
+{
+	struct sockaddr_in	*a4;
+	/* struct sockaddr_in6	*a6; */
+
+	if (pool == NULL)
+		return (-1);
+
+	memcpy(&pool->addr, address, sizeof(pool->addr));
+
+	switch (address->ss_family) {
+	case AF_INET:
+		if (!(16 <= prefixlengh && prefixlengh <= 30)) {
+			log_warnx("%s: prefixlengh (%d) out of range [16:30]",
+			    __func__, prefixlengh);
+			return (-1);
+		}
+		//pool->mask = prefixlen2mask(prefixlengh);
+		pool->size = (1 << (32 - prefixlengh)) - 2;
+
+		a4 = (struct sockaddr_in *)&pool->addr;
+		a4->sin_addr.s_addr = ntohl(a4->sin_addr.s_addr) + 1;
+		a4->sin_addr.s_addr = htonl(a4->sin_addr.s_addr);
+
+		break;
+	case AF_INET6:
+	default:
+		errno = EAFNOSUPPORT;
+		log_warn("%s: ", __func__);
+		return (-1);
+	}
+
+	return (0);
+}
+
+int
+addr_pool_init_range(struct addr_pool *pool,
+    struct sockaddr_storage *start,
+    struct sockaddr_storage *end)
+{
+	struct sockaddr_in	*a4, *b4;
+	//struct sockaddr_in6	*a6, *b6;
+	uint32_t		 size_l;
+
+	if (pool == NULL || start == NULL || end == NULL)
+		return (-1);
+
+	if (start->ss_family != end->ss_family) {
+		log_debug("%s: address family mismatch", __func__);
+		return (-1);
+	}
+
+	switch (start->ss_family) {
+	case AF_INET:
+		a4 = (struct sockaddr_in *)start;
+		b4 = (struct sockaddr_in *)end;
+
+		size_l = ntohl(b4->sin_addr.s_addr) -
+		    ntohl(a4->sin_addr.s_addr) + 1;
+
+		if (size_l > ADDR_POOL_SIZE_MAX) {
+			log_warnx("%s: size (%d) out of range (%d)", __func__,
+			    size_l, ADDR_POOL_SIZE_MAX);
+			return (-1);
+		}
+		pool->size = size_l;
+
+		break;
+	case AF_INET6:
+	default:
+		errno = EAFNOSUPPORT;
+		log_warn("%s: ", __func__);
+		return (-1);
+	}
+
+	memcpy(&pool->addr, start, sizeof(pool->addr));
+
+	return (0);
+}
+
+int
+addr_pool_alloc(struct addr_pool *pool)
+{
+	if ((pool->pool = bit_alloc(pool->size)) == NULL)
+		return (-1);
+
+	log_debug("%s: %s+%d", __func__,
+	    print_host(&pool->addr, NULL, 0), pool->size);
+
+	return (0);
+}
+
+int
+addr_pool_free(struct addr_pool *pool)
+{
+	if (pool->size)
+		free(pool->pool);
+	return (0);
+}
+
+int
+addr_pool_reqest(struct addr_pool *pool, struct sockaddr_storage *address)
+{
+	struct sockaddr_in	*a4;
+	//struct sockaddr_in6	*sa6, *pa6;
+	u_int32_t		 h_addr;
+	int32_t			 i;
+
+	if (pool->addr.ss_family != address->ss_family) {
+		log_debug("%s: address family mismatch", __func__);
+		return (-1);
+	}
+	switch (pool->addr.ss_family) {
+	case AF_INET:
+		a4 = (struct sockaddr_in *)address;
+		h_addr = ntohl(((struct sockaddr_in *)&pool->addr)->sin_addr.s_addr);
+		i = ntohl(a4->sin_addr.s_addr) - h_addr;
+		if (0 <= i && (uint32_t)i < pool->size &&
+		    !bit_test(pool->pool, i)) {
+			bit_set(pool->pool, i);
+		} else {
+			bit_ffc(pool->pool, pool->size, &i);
+			bit_set(pool->pool, i);
+			a4->sin_addr.s_addr = htonl(h_addr + i);
+		}
+		log_debug("%s: assigned %s [%d]", __func__,
+		    print_host(address, NULL, 0), i);
+
+		break;
+	case AF_INET6:
+	default:
+		errno = EAFNOSUPPORT;
+		log_warn("%s: ", __func__);
+		return (-1);
+	}
+
+	return (0);
+}
+
+int
+addr_pool_release(struct addr_pool *pool, struct sockaddr_storage
iked address pools
I wish to submit a working implementation of address pools for iked; however, as it's my first real code contribution and has 643 lines (mostly patch context), I'm wondering if posting here is the correct channel. Also, what is the preferred/normal way to include new files in a patch? --Ryan Slack
[PATCH] iked protected-subnet support
Perhaps there was a reason it was never implemented, but in case it just got missed:

Index: ikev2.c
===================================================================
RCS file: /cvs/src/sbin/iked/ikev2.c,v
retrieving revision 1.82
diff -u -p -r1.82 ikev2.c
--- ikev2.c	21 Mar 2013 04:30:14 -0000	1.82
+++ ikev2.c	25 May 2013 19:49:12 -0000
@@ -1437,6 +1437,17 @@ ikev2_add_cp(struct iked *env, struct ik
 			return (-1);
 		len += 4;
 		break;
+	case IKEV2_CFG_INTERNAL_IP4_SUBNET:
+		/* 4 bytes IPv4 address + 4 bytes IPv4 mask */
+		in4 = (struct sockaddr_in *)&ikecfg->cfg.address.addr;
+		mask = prefixlen2mask(ikecfg->cfg.address.addr_mask);
+		cfg->cfg_length = htobe16(8);
+		if (ibuf_add(buf, &in4->sin_addr.s_addr, 4) == -1)
+			return (-1);
+		if (ibuf_add(buf, &mask, 4) == -1)
+			return (-1);
+		len += 8;
+		break;
 	case IKEV2_CFG_INTERNAL_IP6_DNS:
 	case IKEV2_CFG_INTERNAL_IP6_NBNS:
 	case IKEV2_CFG_INTERNAL_IP6_DHCP:
@@ -1449,6 +1460,7 @@ ikev2_add_cp(struct iked *env, struct ik
 		len += 16;
 		break;
 	case IKEV2_CFG_INTERNAL_IP6_ADDRESS:
+	case IKEV2_CFG_INTERNAL_IP6_SUBNET:
 		/* 16 bytes IPv6 address + 1 byte prefix length */
 		in6 = (struct sockaddr_in6 *)&ikecfg->cfg.address.addr;
 		cfg->cfg_length = htobe16(17);