I wonder if there's anyone here interested in helping me debug a problem
with ptys and pppd.

The background: I am porting rp-l2tp to work on OpenBSD. It's turned into
a bit of a nasty hack, because rp-l2tp relies on a Linux-specific line
discipline (N_HDLC) and the 'sync' flag to pppd, neither of which OpenBSD
has.

So what I've done is get rp-l2tp to talk asynchronous PPP, which means
adding an FCS (CRC) to each frame and escaping characters according to the
ACCM. It's not going to be the most efficient solution in the world, but at
least it doesn't involve changing the OpenBSD kernel or utilities.
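
To make the framing concrete, the escaping and FCS are just the standard
async-HDLC rules from RFC 1662. Roughly like this (a sketch only; the
function names and buffer handling are illustrative, the real code is in
the patch linked below):

#include <stddef.h>
#include <stdint.h>

#define PPP_FLAG 0x7e   /* frame delimiter */
#define PPP_ESC  0x7d   /* escape character */

/* RFC 1662 FCS-16: start from 0xffff, feed each byte, complement at the end */
static uint16_t
pppfcs16(uint16_t fcs, const uint8_t *p, size_t len)
{
    while (len--) {
        fcs ^= *p++;
        for (int i = 0; i < 8; i++)
            fcs = (fcs & 1) ? (fcs >> 1) ^ 0x8408 : (fcs >> 1);
    }
    return fcs;
}

/* Escape one frame (with the two FCS bytes already appended) for the async
 * link.  'accm' is the 32-bit async control character map negotiated by LCP;
 * 'out' must have room for 2*len + 2 bytes in the worst case. */
static size_t
async_escape(const uint8_t *in, size_t len, uint32_t accm, uint8_t *out)
{
    size_t o = 0;

    out[o++] = PPP_FLAG;
    for (size_t i = 0; i < len; i++) {
        uint8_t c = in[i];

        if (c == PPP_FLAG || c == PPP_ESC ||
            (c < 0x20 && (accm & (1U << c)))) {
            out[o++] = PPP_ESC;
            out[o++] = c ^ 0x20;
        } else
            out[o++] = c;
    }
    out[o++] = PPP_FLAG;
    return o;
}

The FCS is computed over the unescaped frame starting from 0xffff,
complemented, and appended low byte first; on receive you unescape, run the
same calculation over frame plus trailer, and expect the constant 0xf0b8.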

The patch in its current state is at
http://pobox.com/~b.candler/software/rp-l2tp-0.4-obsd-20061206.diff

Now, this is more or less working; I can bring up an L2TP tunnel
successfully, talking to a Cisco IOS box. (In fact I'm testing with L2TP
over IPsec.)

However, there is a problem when large PPP packets come along which try to
push more than 1024 bytes through the pty. What seems to happen is that the
read() returns only 1024 bytes, and then even if I select() or sleep(), no
further bytes become available until some later point when another chunk of
data is written to the pty (such as an LCP keepalive), which seems to kick
the rest of the data through.
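
To be clear about the debug output format below: "read(8192): 1024" means a
read() with an 8192-byte buffer returned 1024 bytes, and "select(9): 0"
means a select() covering the pty master returned 0, i.e. nothing became
readable before the timeout. The logging is roughly this shape (sketch
only; the real code is in the patch):

#include <stdio.h>
#include <sys/select.h>
#include <sys/time.h>
#include <unistd.h>

/* Log what each read() from the pty master returns, and log when a
 * select() on it says nothing is readable (return value 0 = timed out). */
static ssize_t
traced_read(int fd, void *buf, size_t len, struct timeval *tmo)
{
    fd_set rfds;
    ssize_t n;
    int r;

    FD_ZERO(&rfds);
    FD_SET(fd, &rfds);
    r = select(fd + 1, &rfds, NULL, NULL, tmo);
    if (r <= 0) {
        fprintf(stderr, "select(%d): %d\n", fd + 1, r);
        return 0;
    }
    n = read(fd, buf, len);
    fprintf(stderr, "read(%zu): %zd\n", len, n);
    return n;
}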

What this means in practice is:

(1) If I send a single large ping, the response isn't seen.

# ping -c1 -s1000 10.71.0.1
PING 10.71.0.1 (10.71.0.1): 1000 data bytes
--- 10.71.0.1 ping statistics ---
1 packets transmitted, 0 packets received, 100.0% packet loss

I've added some debugging to l2tpd, so if you run it as "l2tpd -f" you see
the following:

# l2tpd -f
read(8192): 27          << LCP/IPCP negotiations
write(32): 32
write(26): 26
read(8192): 34
read(8192): 23
write(44): 44
read(8192): 46
write(20): 20
write(16): 16
write(27): 27
read(8192): 30
read(8192): 29
write(29): 29
read(8192): 28
write(27): 27
read(8192): 1024        << first part of the ping
select(9): 0            << nothing further is available
  << pause >>
read(7169): 183         << rest of the ping plus an incoming LCP echo req
write(1183): 1183       << the ping response (far too late by now)
write(20): 20           << the LCP echo response
read(8192): 23
write(20): 20

(2) If I send a sequence of large pings, the responses appear to be delayed
by about 1 second:

# ping -c5 -s1000 10.71.0.1
PING 10.71.0.1 (10.71.0.1): 1000 data bytes
1008 bytes from 10.71.0.1: icmp_seq=0 ttl=255 time=1072.620 ms
1008 bytes from 10.71.0.1: icmp_seq=1 ttl=255 time=1077.472 ms
1008 bytes from 10.71.0.1: icmp_seq=2 ttl=255 time=1060.853 ms
1008 bytes from 10.71.0.1: icmp_seq=3 ttl=255 time=1095.591 ms
--- 10.71.0.1 ping statistics ---
5 packets transmitted, 4 packets received, 20.0% packet loss
round-trip min/avg/max/std-dev = 1060.853/1076.634/1095.591/12.502 ms

The corresponding l2tpd -f output is:

read(8192): 1024        << start of ping 1
select(9): 0
read(7169): 1024        << rest of ping 1 plus start of ping 2
select(9): 0
write(1184): 1184       << response to ping 1
read(7329): 1024        << end ping 2, start ping 3
select(9): 0
write(1185): 1185       << response to ping 2
read(7489): 483         << rest of ping 3
read(8192): 1024        << start of ping 4
select(9): 0
write(1187): 1187       << response to ping 3
read(7169): 1024        << rest of ping 4 plus start of ping 5
select(9): 0
write(1183): 1183       << response to ping 4

(3) If I send pings smaller than -s840, fewer than 1024 bytes at a time go
through the pty and everything works smoothly.

I've had a look through the kernel source and I see the constant TTYHOG is
1024 bytes, which may or may not be related; beyond that I don't understand
enough about how ptys work to find out what's happening.

I've also tried to replicate the problem in a small program using forkpty,
but haven't been able to. Maybe it's specific to the ppp(4) driver sending
data over a pty.
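
For what it's worth, that test was along these lines (a sketch of the
approach, not the exact program): the child does one large write to the pty
slave, and the parent watches how the data comes out of the master.

/* On OpenBSD, compile with: cc -o ptytest ptytest.c -lutil */
#include <sys/types.h>
#include <sys/select.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <util.h>               /* forkpty(3); <pty.h> on Linux */

int
main(void)
{
    int master;
    char buf[8192];
    pid_t pid = forkpty(&master, NULL, NULL, NULL);

    if (pid == -1) {
        perror("forkpty");
        return 1;
    }
    if (pid == 0) {             /* child: one big write to the slave */
        char big[4000];

        memset(big, 'x', sizeof(big));
        write(STDOUT_FILENO, big, sizeof(big));
        sleep(5);
        _exit(0);
    }
    for (;;) {                  /* parent: watch how the data arrives */
        fd_set rfds;
        ssize_t n;

        FD_ZERO(&rfds);
        FD_SET(master, &rfds);
        if (select(master + 1, &rfds, NULL, NULL, NULL) == -1) {
            perror("select");
            break;
        }
        n = read(master, buf, sizeof(buf));
        printf("read(%zu): %zd\n", sizeof(buf), n);
        if (n <= 0)
            break;
    }
    return 0;
}

In that test the data comes through without the stall, which is why I
suspect the interaction with ppp(4) specifically; a closer test would
probably need the slave put into raw mode (cfmakeraw/tcsetattr), as pppd
does.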

Anyway, if there's anyone on this list who is intimate with the internals
of pty(4) and ppp(4), knows enough about rp-l2tp to set up a test rig, and
would like to see the OpenBSD port working, I'd be very grateful for your
assistance.

Many thanks,

Brian Candler.
