< said:
> On Fri, May 03, 2019 at 12:55:54PM -0400, Garrett Wollman wrote:
>> Does anyone have an easy patch to keep mce(4) from trying to use 9k
>> jumbo mbuf clusters? I think I went down this road once before but
>> the fix wasn't as obvious as it is for the I
Does anyone have an easy patch to keep mce(4) from trying to use 9k
jumbo mbuf clusters? I think I went down this road once before but
the fix wasn't as obvious as it is for the Intel drivers. (I assume
the hardware is not so broken that it requires packets to be stored in
contiguous physical
In article <20180729011153.gd2...@funkthat.com> j...@funkthat.com
writes:
>And I know you know the problem is that over time memory is fragmented,
>so if suddenly you need more jumbo frames than you already have, you're
>SOL...
This problem instantly disappears if you preallocate several
In article
r...@ixsystems.com writes:
>I have seen some work in the direction of avoiding larger than page size
>jumbo clusters in 12-CURRENT. Many existing drivers avoid the 9k cluster
>size already. The code for larger cluster sizes in iflib is #ifdef'd out
>so it maxes out at the page size
I'm commissioning a new NFS server with an Intel dual-40G XL710
interface, running 11.1. I have a few other servers with this
adapter, although not running 40G, and they work fine so long as you
disable TSO. This one ... not so much. On the receive side, it gets
about 600 Mbit/s with lots of
< said:
> Pretty sure these problems have been addressed by now, given the amount
> of computers, smart phones, tablets, etc. running with privacy
> extensions enabled.
They've been "fixed" mostly by hiding big networks behind NATs and
leaving them IPv4-only. And in some enterprises by
In article <1497408664.2220.3.ca...@me.com>, rpa...@me.com writes:
>I don't see any reason why we shouldn't have privacy addresses enabled
>by default. In fact, back in 2008 no one voiced their concerns.
Back in 2008 most people hadn't had their networks fall over as a
result of MLD listener
In article you write:
>Eg, I don't see why we need another tool for some of this missing
>"ethtool" functionality; it seems like most of it would naturally fit
>into ifconfig.
From the end-user perspective, I agree with Drew. Most of this stuff
In article
you write:
># ifconfig -m cxgbe0
>cxgbe0: flags=8943
># ifconfig cxgbe0 mtu 9000
>ifconfig: ioctl SIOCSIFMTU (set mtu): Invalid argument
I believe this device, like
I noticed that a large number -- but by no means all -- of the packets
captured using libpcap on a netmap'ified ixl(4) interface show up as
truncated -- usually by exactly four bytes. They show up in tcpdump
like this:
18:10:05.348735 IP truncated-ip - 4 bytes missing! 128.30.xxx.xxx.443 >
< said:
> i think it was committed to HEAD but never integrated in the
> stable/10.x branch. I wrote the code in jan/feb 2015.
> I think you can simply backport the driver from head.
So it turned out that this was merged -- along with an Intel driver
update that I needed anyway -- to stable/10
I see from various searches that netmap support was added to ixl(4) --
*but* the code isn't there in 10.2. I'd like to be able to use it for
packet capture, because regular BPF on this interface (XL710) isn't
even able to keep up with 2 Gbit/s, never mind 20 Gbit/s. Can anyone
explain what
< said:
>> 2) Stopping jails with virtual network stacks generates warnings from
>> UMA about memory being leaked.
> I'm given to understand that's Known, and presumably Not Quite Trivial
> To Fix. Since I'm not starting/stopping jails repeatedly as a normal
> runtime thing, I'm ignoring it.
I'm a bit new to managing jails, and one of the things I'm finding I
need is a way for jails to have their own private loopback interfaces
-- so that things like sendmail and local DNS resolvers actually work
right without explicit configuration. Is there any way of making this
work short of
On Tue, 28 Apr 2015 23:37:22 -0700, Adrian Chadd adr...@freebsd.org said:
- as you said, like ~ 64k), and allocate that way. That way there's no
fragmentation to worry about - everything's just using a custom slab
allocator for these large allocation sizes.
It's kind of tempting to suggest
On Wed, 29 Apr 2015 09:30:34 +0200, Mark Schouten m...@tuxis.nl said:
I'm not really (or really not) comfortable with hacking and recompiling
stuff. I'd rather not change anything in the kernel. So would it help in
my case to lower my MTU from 9000 to 4000? If I understand correctly,
this
On Tue, 28 Apr 2015 17:06:02 -0400 (EDT), Rick Macklem rmack...@uoguelph.ca
said:
There have been email list threads discussing how allocating 9K jumbo
mbufs will fragment the KVM (kernel virtual memory) used for mbuf
cluster allocation and cause grief.
The problem is not KVA fragmentation
Here's the scenario:
1) A small number of (Linux) clients run a large number of processes
(compute jobs) that read large files sequentially out of an NFS
filesystem. Each process is reading from a different file.
2) The clients are behind a network bottleneck.
3) The Linux NFS client will
On Wed, 25 Feb 2015 18:29:45 -0500, Alfred Perlstein alf...@freebsd.org
said:
I think your other suggestions are fine, however the problem is that:
1) they seem complex for an edge case
2) turning them on may tank performance for no good reason if the
heuristic is met but we're not in the
In article
388835013.10159778.1424820357923.javamail.r...@uoguelph.ca,
rmack...@uoguelph.ca writes:
I tend to think that a bias towards doing Getattr/Lookup over Read/Write
may help performance (the old shortest job first principal), I'm not
sure you'll have a big enough queue of outstanding RPCs
So is anyone working on an RFC 7217 (Stable and Opaque IIDs with
SLAAC) implementation for FreeBSD yet?
-GAWollman
___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to
In article 201407151034.54681@freebsd.org, j...@freebsd.org writes:
Hmm, I am surprised by the m_pullup() behavior that it doesn't just
notice that the first mbuf with a cluster has the desired data already
and returns without doing anything.
The specification of m_pullup() is that it
I recently put a new server running 9.2 (with a local patches for NFS)
into production, and it's immediately started to fail in an odd way.
Since I pounded this server pretty heavily and never saw the error in
testing, I'm more than a little bit taken aback. We have identical
hardware in
In article
cab2_nwaomptzjb03pdditk2ovqgqk-tyf83jq4ukt9jnza8...@mail.gmail.com,
csforge...@gmail.com writes:
50/27433/0 requests for jumbo clusters denied (4k/9k/16k)
This is going to screw you. You need to make sure that no NIC driver
ever allocates 9k jumbo pages -- unless you are using one of
In article cage5ycpojnenzw+6sn9wyee5ruzpuicke8db8r0zgrjgbj2...@mail.gmail.com,
Peter Wemm quotes some advice about ZFS filesystem vdev layout:
1. Virtual Devices Determine IOPS
IOPS (I/O per second) are mostly a factor of the number of virtual
devices (vdevs) in a zpool. They are not a factor of
On Tue, 12 Mar 2013 23:48:00 -0400 (EDT), Rick Macklem rmack...@uoguelph.ca
said:
The patch includes a lot of drc2.patch and drc3.patch, so don't try
and apply it to a patched kernel. Hopefully it will apply cleanly to
vanilla sources.
The patch has been minimally tested.
Well, it's taken
On Tue, 12 Mar 2013 23:48:00 -0400 (EDT), Rick Macklem rmack...@uoguelph.ca
said:
I've attached a patch that has assorted changes.
So I've done some preliminary testing on a slightly modified form of
this patch, and it appears to have no major issues. However, I'm
still waiting for my user
On Tue, 12 Mar 2013 23:48:00 -0400 (EDT), Rick Macklem rmack...@uoguelph.ca
said:
Basically, this patch:
- allows setting of the tcp timeout via vfs.nfsd.tcpcachetimeo
(I'd suggest you go down to a few minutes instead of 12hrs)
- allows TCP caching to be disabled by setting
In article 513db550.5010...@freebsd.org, an...@freebsd.org writes:
Garrett's problem is receive side specific and NFS can't do much about it.
Unless, of course, NFS is holding on to received mbufs for a longer time.
Well, I have two problems: one is running out of mbufs (caused, we
think, by
In article
cafoybck-m+71ma7w2ixqnrffn55pe6sotgknzm1atahqe5s...@mail.gmail.com,
jfvo...@gmail.com writes:
How large are you configuring your rings Garrett? Maybe if you tried
reducing them?
I'm not configuring them at all. (Well, hmmm, I did limit the number
of queues to 6 (per interface, it
In article 513e3d75.7010...@freebsd.org, an...@freebsd.org writes:
On 11.03.2013 17:05, Garrett Wollman wrote:
Well, I have two problems: one is running out of mbufs (caused, we
think, by ixgbe requiring 9k clusters when it doesn't actually need
them), and one is livelock. Allowing potentially
On Mon, 11 Mar 2013 21:25:45 -0400 (EDT), Rick Macklem rmack...@uoguelph.ca
said:
To be honest, I'd consider seeing a lot of non-empty receive queues
for TCP connections to the NFS server to be an indication that it is
near/at its load limit. (Sure, if you do netstat a lot, you will
On Sat, 9 Mar 2013 11:50:30 -0500 (EST), Rick Macklem rmack...@uoguelph.ca
said:
I suspect this indicates that it isn't mutex contention, since the
threads would block waiting for the mutex for that case, I think?
No, because our mutexes are adaptive, so each thread spins for a while
before
On Sat, 9 Mar 2013 11:27:32 -0500 (EST), Rick Macklem rmack...@uoguelph.ca
said:
around the highwater mark basically indicates this is working. If it wasn't
throwing away replies where the receipt has been ack'd at the TCP
level, the cache would grow very large, since they would only be
In article 20795.29370.194678.963...@hergotha.csail.mit.edu, I wrote:
On Sat, 9 Mar 2013 11:50:30 -0500 (EST), Rick Macklem
rmack...@uoguelph.ca said:
I've thought about this. My concern is that the separate thread might
not keep up with the trimming demand. If that occurred, the cache would
On Fri, 8 Mar 2013 12:13:28 -0800, Jack Vogel jfvo...@gmail.com said:
Yes, in the past the code was in this form, it should work fine Garrett,
just make sure
the 4K pool is large enough.
[Andre Oppermann's patch:]
if (adapter->max_frame_size <= 2048)
        adapter->rx_mbuf_sz = MCLBYTES;
-
On Fri, 8 Mar 2013 00:31:18 -0800, Jack Vogel jfvo...@gmail.com said:
I am not strongly opposed to trying the 4k mbuf pool for all larger sizes,
Garrett maybe if you would try that on your system and see if that helps
you, I could envision making this a tunable at some point perhaps?
If you
On Fri, 08 Mar 2013 08:54:14 +0100, Andre Oppermann an...@freebsd.org said:
[stuff I wrote deleted]
You have an amd64 kernel running HEAD or 9.x?
Yes, these are 9.1 with some patches to reduce mutex contention on the
NFS server's replay cache.
Jumbo pages come directly from the kernel_map
On Fri, 8 Mar 2013 12:13:28 -0800, Jack Vogel jfvo...@gmail.com said:
Yes, in the past the code was in this form, it should work fine Garrett,
just make sure
the 4K pool is large enough.
I take it then that the hardware works in the traditional way, and
just keeps on using buffers until the
On Fri, 8 Mar 2013 19:47:13 -0500 (EST), Rick Macklem rmack...@uoguelph.ca
said:
If reducing the size to 4K doesn't fix the problem, you might want to
consider shrinking the tunable vfs.nfsd.tcphighwater and suffering
the increased CPU overhead (and some increased mutex contention) of
On Fri, 8 Mar 2013 19:47:13 -0500 (EST), Rick Macklem rmack...@uoguelph.ca
said:
The cached replies are copies of the mbuf list done via m_copym().
As such, the clusters in these replies won't be free'd (ref cnt -> 0)
until the cache is trimmed (nfsrv_trimcache() gets called after the
TCP
I have a machine (actually six of them) with an Intel dual-10G NIC on
the motherboard. Two of them (so far) are connected to a network
using jumbo frames, with an MTU a little under 9k, so the ixgbe driver
allocates 32,000 9k clusters for its receive rings. I have noticed,
on the machine that is
I'm working on (of all things) a Puppet module to configure NFS
servers, and I'm wondering if anyone expects to implement NFS over
SCTP on FreeBSD.
-GAWollman
In article 4a3bf2df.6080...@freebsd.org, Andre writes:
2) in old T/TCP (RFC1644) which we supported in our TCP code the SYN/FIN
combination was a valid one, though not directly intended for SYN/ACK/FIN.
It still is valid, and should be possible to generate using sendmsg()
and MSG_EOF.
In article 41d96b7f-f76d-4f35-ba1d-0edf810e6...@young-alumni.com,
Chris writes:
True OR False
1) NDIS only works with XP drivers.
Can't answer that as I've never needed to try a Vista driver.
2) NDIS only works with 32-bit drivers and won't work on amd64.
False, unless someone has broken it
In article [EMAIL PROTECTED],
[EMAIL PROTECTED] writes:
static int
mpls_attach(struct socket *so)
The prototype for a protocol attach functions is
int (*pru_attach)(struct socket *so, int proto, struct thread *td);
(see sys/protosw.h). You don't have to use these arguments, but
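The shape of the corrected hook can be illustrated in userland. This is a sketch only: `struct socket` and `struct thread` below are stand-ins for the kernel's types, not the real definitions from sys/protosw.h.

```c
#include <stddef.h>

/* Userland stand-ins for the kernel types; sketch only. */
struct socket { int so_state; };
struct thread { int td_dummy; };

/*
 * An attach function matching the pru_attach prototype quoted above.
 * The extra arguments may be ignored, but the signature must match
 * what the protocol switch expects.
 */
static int
mpls_attach(struct socket *so, int proto, struct thread *td)
{
	(void)proto;
	(void)td;
	if (so == NULL)
		return (22);		/* EINVAL-style failure */
	so->so_state = 1;		/* pretend to set up per-socket state */
	return (0);
}
```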
On Fri, 13 Jun 2008 13:04:08 +0200, Kris Kennaway [EMAIL PROTECTED] said:
Garrett Wollman wrote:
Am I the only one who would be happier if openssh were not in the base
system at all?
Quite possibly :)
I don't think it's at all viable to ship FreeBSD without an ssh client
in this day
In article [EMAIL PROTECTED], Brooks
Davis writes:
On Thu, Jun 12, 2008 at 06:30:05PM -0700, Peter Losher wrote:
FYI - HPN is already a build option in the openssh-portable port.
I do think we should strongly consider adding the rest of it to the base.
Am I the only one who would be happier if
In article [EMAIL PROTECTED],
Jeff Davis [EMAIL PROTECTED] wrote:
You should see something like write failed: host is down and the
session will terminate. Of course, when ssh exits, the TCP connection
closes. The only way to see that it's still open and active is by
writing (or using) an
On Sun, 16 Oct 2005 14:06:32 +1000 (EST), Bruce Evans [EMAIL PROTECTED]
said:
Probably the problem is largest for latency, especially in benchmarks.
Latency benchmarks probably have to start cold, so they have no chance
of queue lengths > 1, so there must be a context switch per packet and
On Wed, 12 Oct 2005 17:17:12 -0400 (EDT), Andrew Gallatin [EMAIL PROTECTED]
said:
Right now, at least, it seems to work OK. I haven't tried witness,
but a non-debug kernel shows a big speedup from enabling it. Do
you think there is a chance that it could be made to work in FreeBSD?
I did
On Fri, 11 Feb 2005 21:19:16 +0100, Andre Oppermann [EMAIL PROTECTED] said:
Li, Qing wrote:
Ran the packet tests against FreeBSD 5.3 and 6-CURRENT and both
respond to the SYN+FIN packets with SYN+ACK.
This is expected behaviour because FreeBSD used to implement T/TCP
according to
On Fri, 22 Oct 2004 11:01:30 -0700, Ronald F. Guilmette [EMAIL PROTECTED] said:
Signal numbers are typically represented as ints. Is there anything in
the kernel that prevents me from, say, calling kill(2) with a second
argument of, say, 0xdeadbeef, in other words any old random int value
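Nothing in the calling convention stops you from passing an arbitrary int, but POSIX requires the kernel to validate the signal number and reject anything outside the supported range with EINVAL before acting on it. A quick userland check (the cast just makes the int conversion explicit):

```c
#include <errno.h>
#include <signal.h>
#include <unistd.h>

/*
 * Pass an arbitrary int as the signal number: the kernel validates it
 * and rejects out-of-range values with EINVAL rather than doing
 * anything dangerous.
 */
int
try_bogus_signal(void)
{
	errno = 0;
	if (kill(getpid(), (int)0xdeadbeef) == -1)
		return (errno);		/* expected: EINVAL */
	return (0);			/* would mean the kernel accepted it */
}
```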
On Thu, 21 Oct 2004 10:24:08 -0500 (CDT), Mike Silbersack [EMAIL PROTECTED] said:
I think that it would have to be slightly more complex than that for it to
be secure. Instead of using syncookie/RFC1948-like generation,
[...]
HIP! HIP! HIP!!!
-GAWollman
On Thu, 21 Oct 2004 11:51:37 -0700, David O'Brien [EMAIL PROTECTED] said:
I'm not so happy with a FreeBSD-only proprietary thing. Is there any
proposed RFC work that provides the qualities you want? The advantage
with T/TCP is that there was a published standard.
T/TCP was a published
On Tue, 19 Oct 2004 17:19:03 -0700, Ronald F. Guilmette [EMAIL PROTECTED] said:
That's it for now... just aio_connect() and aio_accept(). If I think of
something else, I'll let you know.
[lots of Big Picture(R) stuff elided]
This is certainly an interesting model of program design. However,
On Sun, 17 Oct 2004 13:19:45 -0700, Ronald F. Guilmette [EMAIL PROTECTED] said:
I'm sitting here looking at that man pages for aio_read and aio_write,
and the question occurs to me: ``How come there is no such thing as
an aio_connect function?''
Mostly because there is no need, since
On Wed, 06 Oct 2004 17:57:14 +0200, Waldemar Kornewald [EMAIL PROTECTED] said:
Yes, something in that direction, plus: protocols:
IPv4, IPv6, TCP, UDP, ICMP, IPX, etc.
Just about everything as modules.
It is not generally regarded as a good idea to make artificial
boundaries between (e.g.) IP
On Fri, 10 Sep 2004 22:05:02 +0200, Andre Oppermann [EMAIL PROTECTED] said:
Brooks Davis wrote:
I'm considering adding an ifconfig -v option that would imply -m and add
more details like index, epoch, dname, dunit, etc.
That would be great!
A particularly relevant feature would give
On Wed, 19 May 2004 09:59:53 +0100, kwl02r [EMAIL PROTECTED] said:
1. Is the delayed-ACK timer still checked every 200 ms? Which function does
this job? If not, can anybody help describe the details of the
delayed-ACK handling in the FreeBSD source code.
The TCP timer code has been completely
On Mon, 8 Mar 2004 15:38:04 -0800 (PST), Julian Elischer [EMAIL PROTECTED] said:
I believe that some of the patches were considered experimental and
just lacked someone to make them production quality. In other cases they
were not against 'current' and porting them to -current was left as an
On Tue, 2 Mar 2004 09:28:25 +, Bruce M Simpson [EMAIL PROTECTED] said:
routed we support largely out of nostalgia, I guess.
Modern routed does more than just RIP; it's responsible for all sorts
of routing-table management tasks that we mostly just pretend don't
exist (e.g., responding to
On Thu, 19 Feb 2004 01:34:34 +0100, Andre Oppermann [EMAIL PROTECTED] said:
- there seems to be no boundary on how many segments we keep in the
tcp reassembly queue
I'm not aware of any TCP implementation which ever had such a
limitation. Perhaps all the others implemented something like
On Wed, 28 Jan 2004 20:49:02 -0500, [EMAIL PROTECTED] said:
Can different MTUs be mixed on the same wire
No.
-GAWollman
On Wed, 07 Jan 2004 23:48:30 +0100, Andre Oppermann [EMAIL PROTECTED] said:
1. Do you think it is necessary to do a htons() on the randomized
ip_id too? I'd say yes if there is a case where it has to
monotonically increase afterwards. Does it?
IP IDs are nonces. The only
On Mon, 15 Dec 2003 22:17:53 +0100 (CET), Barry Bouwsma [EMAIL PROTECTED] said:
If I were to tweak the sysctl net.inet.ip.intr_queue_maxlen from its
default of 50 up, would that possibly help named?
No, it will not have any effect on your problem. The IP input queue
is only on receive, and
On Tue, 21 Oct 2003 10:00:13 -0400, Mark Allman [EMAIL PROTECTED] said:
Are there any plans to incorporate SACK in FreeBSD?
We plan to add SACK to FreeBSD when a compatible implementation is
available.
-GAWollman
On 30 Sep 2003 18:25:38 +0100, Doug Rabson [EMAIL PROTECTED] said:
The internals of struct device are not contained in sys/bus.h
Unfortunately, the internals of `device_t' are. That's why style(9)
discourages such types.
-GAWollman
On Wed, 27 Aug 2003 11:43:03 -0400 (EDT), Robert Watson [EMAIL PROTECTED] said:
There are a number of situations in which the mbuf allocator is used to
allocate non-mbufs -- for example, we use mbufs to hold IP fragment
queues, as well as some static packet prototype mbufs, socket options,
On Mon, 28 Jul 2003 23:45:28 +0200, Vincent Jardin [EMAIL PROTECTED] said:
I agree, then... Isn't it already the purpose of RTF_CLONING ?
When should RTF_PRCLONIG be set ?
RTF_PRCLONING is set automatically by the protocol to cause host
routes to be generated on every unique lookup.
On Sun, 20 Jul 2003 12:21:59 -0700 (PDT), [EMAIL PROTECTED] (Bill Paul) said:
I don't think you ran out of mbufs (you would have noticed) so that
rules out case #1. Checking cases #2 and #3 requires adding a little
instrumentation to the driver. If the XL_RXSTAT_UP_ERROR bit is being
detected
On Tue, 17 Jun 2003 20:35:23 -0400, [EMAIL PROTECTED] [EMAIL PROTECTED] said:
What is the BSD equivalent of this Linux call:
sock=socket(AF_INET,SOCK_PACKET,htons(ETH_P_RARP));
man libpcap
-GAWollman
On Wed, 28 May 2003 17:05:59 +0400 (MSD), Igor Sysoev [EMAIL PROTECTED] said:
always calls tcp_output() when TCP_NOPUSH is turned off. I think
tcp_output() should be called only if data in the send buffer is less
than MSS:
I believe that this is intentional. The application had to
On Wed, 28 May 2003 17:43:56 +0200, Brad du Plessis [EMAIL PROTECTED] said:
Where can I get a list of USB modems supported by BSD
You can't. FreeBSD supports any USB modem that (1) claims in the USB
control protocol to be a modem and (2) doesn't require a firmware
download to make it work. It
On Wed, 28 May 2003 22:22:14 +0400 (MSD), Igor Sysoev [EMAIL PROTECTED] said:
As I understand if the data in the send buffer is bigger than MSS it means
that TCP stack has some reason not to send it and this reason is not
TF_NOPUSH flag. Am I wrong ?
If TCP is for some reason prohibited from
On Mon, 26 May 2003 14:04:19 -0700 (PDT), Mikko Työläjärvi
[EMAIL PROTECTED] said:
A proper BSD port could use something like the trick in Stevens[1] and
keep retrying the call with a larger buffer until the length of the
result is the same as in the previous call.
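The Stevens-style retry pattern can be sketched as follows. `fetch_list` here is a hypothetical stand-in for a kernel query such as sysctl(3) with NET_RT_IFLIST, which reports the needed size through its length pointer and fails with ENOMEM when the caller's buffer is too small; the growth factor is an arbitrary choice.

```c
#include <errno.h>
#include <stdlib.h>
#include <string.h>

/*
 * Stand-in for a kernel query: reports the required size via *lenp and
 * fails with ENOMEM when the caller's buffer is too small.  The real
 * interface this mimics is sysctl(3) fetching the interface list.
 */
static const char fake_data[] = "interface-list-bytes";

static int
fetch_list(char *buf, size_t *lenp)
{
	if (buf == NULL || *lenp < sizeof(fake_data)) {
		*lenp = sizeof(fake_data);
		return (ENOMEM);
	}
	memcpy(buf, fake_data, sizeof(fake_data));
	*lenp = sizeof(fake_data);
	return (0);
}

/*
 * Retry with a larger buffer until a fetch succeeds; the extra
 * headroom covers the case where the list grows between the size
 * probe and the actual fetch.
 */
char *
fetch_list_alloc(size_t *lenp)
{
	char *buf = NULL;
	size_t len = 0;

	for (;;) {
		if (fetch_list(buf, &len) == 0)
			break;
		free(buf);
		len += len / 2;		/* headroom in case the list grows */
		if ((buf = malloc(len)) == NULL)
			return (NULL);
	}
	*lenp = len;
	return (buf);
}
```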
On Tue, 27 May 2003 13:44:35 -0400, Don Bowman [EMAIL PROTECTED] said:
Actually, a proper BSD port would use the net.route.iflist sysctl
instead.
$ uname -sr
FreeBSD 4.6-RC
$ sysctl net.route
sysctl: unknown oid 'net.route'
Irrelevant. sysctl(8) is not equipped to handle the contents of
On Tue, 4 Mar 2003 04:04:34 +0200, Alexey Zelkin [EMAIL PROTECTED] said:
Wrong.
BZZZT!
As I stated originally, it's impossible to use 'maxsockbuf' value.
That does not change the fact that an unprivileged user can use up to
`maxsockbuf' bytes of wired kernel memory per socket. That's why
On Sat, 1 Mar 2003 15:41:18 +0200, Ruslan Ermilov [EMAIL PROTECTED] said:
Seriously, you didn't give any alternative. How does one
know the maximum allowed limit? By just blindly trying?
Ask for however much you think you actually need, and bleat to the
administrator (or limp along) if you
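The ask-and-check approach looks like this in practice: request what you need and inspect what the kernel actually granted, rather than probing for the ceiling. The 64 KB figure in the usage below is arbitrary; on FreeBSD the hard limit is kern.ipc.maxsockbuf.

```c
#include <sys/socket.h>

/*
 * Request a send-buffer size and report what the kernel actually
 * granted.  setsockopt() fails outright only past the hard limit,
 * so checking the result beats guessing at the ceiling.
 */
int
request_sndbuf(int fd, int want, int *got)
{
	socklen_t len = sizeof(*got);

	if (setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &want, sizeof(want)) == -1)
		return (-1);
	if (getsockopt(fd, SOL_SOCKET, SO_SNDBUF, got, &len) == -1)
		return (-1);
	return (0);
}
```

Note that some systems grant more than requested (Linux doubles the value for bookkeeping overhead), which is another reason to read the result back rather than assume.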
On Fri, 28 Feb 2003 13:06:21 +0200, Alexey Zelkin [EMAIL PROTECTED] said:
Working with Sun JDK network code I have realized a need to provide some
range checking wrapper for setsockopt() in SO_{SND,RCV}BUF cases. Short
walk over documentation shown that maximum buffer size is exported via
On Wed, 8 Jan 2003 23:22:22 +0100, Vincent Jardin [EMAIL PROTECTED] said:
Why is rt_refnt decreased so early and not later ?
So long as the route is marked RTF_UP, it cannot be deleted. In a
single-threaded kernel, it is not possible for this code to be
preempted, so there is no means by which
On Wed, 4 Dec 2002 10:31:12 -0800 (PST), randall ehren [EMAIL PROTECTED] said:
root@heat[~]% sysctl -a | grep ipf | grep bridge
net.link.ether.bridge_ipfw: 0
net.link.ether.bridge_ipf: 0
Grrr... Who's responsible for creating non-protocol nodes under
net.link.ether?
-GAWollman
On Wed, 13 Nov 2002 08:09:32 +0100, Michael Bretterklieber [EMAIL PROTECTED] said:
My question is do I really need to fill this? Or is it there just for
future use?
That depends on what you will be using the length for. Some
interfaces require that it be present; other interfaces (e.g., those
On Wed, 16 Oct 2002 00:17:13 +0200, Poul-Henning Kamp [EMAIL PROTECTED] said:
In the meantime absolutely no code has picked up on this idea,
It was copied in spirit from OSF/1.
The side effect of having some source-files using the _IP_VHL hack and
some not is that sizeof(struct ip) varies
On Wed, 16 Oct 2002 00:53:46 +0300, Petri Helenius [EMAIL PROTECTED] said:
My processes writing to SOCK_DGRAM sockets are getting ENOBUFS
Probably means that your outgoing interface queue is filling up.
ENOBUFS is the only way the kernel has to tell you ``slow down!''.
-GAWollman
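A sender that takes the hint treats ENOBUFS as transient back-pressure rather than a hard error. A minimal sketch, using a Unix-domain datagram pair so it is self-contained; the retry count and back-off schedule are arbitrary choices:

```c
#include <errno.h>
#include <sys/socket.h>
#include <unistd.h>

/*
 * Send a datagram, treating ENOBUFS (and EAGAIN) as the kernel's
 * "slow down!" signal: back off briefly and retry instead of failing.
 */
ssize_t
send_dgram(int fd, const void *buf, size_t len)
{
	int tries;

	for (tries = 0; tries < 10; tries++) {
		ssize_t n = send(fd, buf, len, 0);
		if (n >= 0)
			return (n);
		if (errno != ENOBUFS && errno != EAGAIN)
			return (-1);		/* a real error */
		usleep(1000 << tries);		/* exponential back-off */
	}
	errno = ENOBUFS;
	return (-1);
}
```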
On Wed, 09 Oct 2002 18:18:41 -0700, Lars Eggert [EMAIL PROTECTED] said:
anyone know of an in-kernel traffic generator similar to UDPgen
(http://www.fokus.gmd.de/research/cc/glone/employees/sebastian.zander/private/udpgen/)
for Linux? Userland traffic generators have high overheads with
On Fri, 4 Oct 2002 10:22:53 -0700 (PDT), John Polstra [EMAIL PROTECTED] said:
Accepting incoming T/TCP creates a pretty serious DoS vulnerability,
doesn't it? The very first packet contains the request, which the
server must act upon and reply to without further delay. There is no
3-way
On Wed, 2 Oct 2002 14:26:49 -0400 (EDT), Robert Watson [EMAIL PROTECTED] said:
protocols have the option of implementing pru_sosend() using the central
sosend(), or providing their own optimized implementation. However, the
exception to this appears to be in the nfsclient code, where sosend
[Trying desperately to move this discussion to the correct list]
I spent a few minutes talking to Dave Clark about this question this
afternoon. Here's my paraphrase of his opinion:
- He disclaims completely up-to-date knowledge of the current research
results.
- He feels that 1000 ms is
On Wed, 17 Jul 2002 10:58:12 -0700 (PDT), Bill Baumann [EMAIL PROTECTED] said:
Why bother with a if_softc field when the interface and softc pointer are
supposed to be the same? Also, the very old Lance driver (lnc) has this
problem. It makes me wonder how true we are to TCP/IP
On Sun, 7 Jul 2002 01:37:10 -0700, Alfred Perlstein [EMAIL PROTECTED] said:
Some time ago I noticed that there appeared to be several members
of struct socket that were either only used by listen sockets or
only used by data sockets.
You can't do that. Self-connect is a valid operation on a
On Sat, 08 Jun 2002 23:51:46 -0400, Andy Sparrow [EMAIL PROTECTED] said:
datamib[5] = IFDATA_GENERAL;
*ip = drvdata->ifmd_data.ifi_ipackets;
*op = drvdata->ifmd_data.ifi_opackets;
*ib = drvdata->ifmd_data.ifi_ibytes;
*ob = drvdata->ifmd_data.ifi_obytes;
The ``general'' part of the
On Tue, 04 Jun 2002 00:05:51 +0200, Andre Oppermann [EMAIL PROTECTED] said:
A bug is that host routes created by redirect are never being purged.
But that one has been present for a long (?) time.
You are expected to be running a routing process (like `routed' in
router-discovery mode) which
On Wed, 22 May 2002 17:42:56 -0400 (EDT), John Baldwin [EMAIL PROTECTED] said:
out of the box. Ideally, I would like applications sending packets to the
interface to block when the outgoing queue is full.
No Can Do. The network stack is not prepared to block at all, ever.
-GAWollman
Currently, FreeBSD's implementation of RFC 1323 uses the contents of
the `ticks' variable verbatim in the TCP timestamp options that it
generates. This is perhaps undesirable, in that it allows the system
at the other end to determine how long the system has been up.
(Current versions of `nmap'
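One mitigation, which several stacks (FreeBSD included, later on) adopted, is a per-connection random offset added to the tick count: PAWS only needs the timestamp to be monotonic within a connection, so the offset hides uptime without breaking anything. A userland sketch; `rand()` stands in for a proper kernel RNG such as arc4random():

```c
#include <stdint.h>
#include <stdlib.h>

/*
 * Per-connection timestamp state: a random offset hides system uptime
 * from the peer while keeping the value monotonic within the
 * connection, which is all PAWS requires.
 */
struct ts_state {
	uint32_t ts_offset;
};

void
ts_init(struct ts_state *ts)
{
	/* rand() stands in for arc4random(); sketch only. */
	ts->ts_offset = ((uint32_t)rand() << 16) | (uint32_t)rand();
}

/* The value that would go in the timestamp option, given `ticks'. */
uint32_t
ts_value(const struct ts_state *ts, uint32_t ticks)
{
	return (ticks + ts->ts_offset);	/* wraps mod 2^32, as TCP expects */
}
```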
On Mon, 6 May 2002 17:26:20 -0500 (CDT), Mike Silbersack [EMAIL PROTECTED] said:
Is doing this wise? I have this nagging feeling that randomizing (or
zeroing on each new connection) the timestamp would degrade its usefulness
for PAWS checks and the like. (Don't ask me how, I haven't thought
On Fri, 19 Apr 2002 13:19:42 -0700 (PDT), Julian Elischer [EMAIL PROTECTED] said:
I don't know, but it may have problems setting promiscuous mode..
is there such a thing in vlan mode?
Certainly -- but the other VLANs configured on the same interface have
to be prepared to appropriately ignore
On Wed, 20 Mar 2002 14:18:31 -0600 (CST), Mike Silbersack [EMAIL PROTECTED] said:
We still need to cap the number of sockets somehow, as it would be bad for
sockets to consume all memory.
There's already a cap: maxfiles.
-GAWollman
On Wed, 20 Mar 2002 15:01:01 -0600 (CST), Mike Silbersack [EMAIL PROTECTED] said:
That would end up being a reduction below the current value; right now
sockets maxfiles with large maxuser values. Whether or not this is a
necessary differential, I'm not sure. (With TIME_WAIT and FIN_WAIT_2
On Sun, 03 Mar 2002 18:10:36 -0800, George V. Neville-Neil [EMAIL PROTECTED]
said:
This is an issue with the routing system design. Many routers
allow duplicate routes (same netmask) that have different priorities.
This makes it quicker to switch routes during a failure.
FreeBSD permits