ipfw port lookup table patch for review

2008-09-24 Thread Ganbold

Hi,

I thought it might be useful to have a port lookup table in ipfw, similar
to the existing IP lookup table, and I have made a patch for that.

The downsides of the patch so far, as I see them, are that the port
entries are kept in a linked list (no size limit yet, so there is memory
overhead), that the list is not sorted, and that matching uses a linear
search (which could be slow with many entries).
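
For illustration, the match is a classic O(n) walk over a singly-linked
list, something like this (a rough sketch with made-up names, not the
actual code from the patch):

    #include <stdint.h>     /* uint16_t; sys/types.h in the kernel */

    struct port_entry {
            uint16_t                port;   /* port number, host byte order */
            struct port_entry       *next;  /* next entry, unsorted */
    };

    /* Return non-zero if 'port' is in the table; linear in table size. */
    static int
    port_table_match(const struct port_entry *head, uint16_t port)
    {
            const struct port_entry *pe;

            for (pe = head; pe != NULL; pe = pe->next)
                    if (pe->port == port)
                            return (1);
            return (0);
    }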

Just after I made the patch I saw
http://www.freebsd.org/cgi/query-pr.cgi?pr=121807&cat= . :(

I agree with the PR's reply; however, for a small number of port entries
I think this functionality is quite useful. It has the benefit that there
is no need to modify an existing rule, and adding/deleting port entries
is easy.

I did some small tests and it seems to work.

Patches are at:
http://people.freebsd.org/~ganbold/ipfw_port_table/

The output of some usage samples is at:
http://people.freebsd.org/~ganbold/ipfw_port_table/ipfw_port_table_usage_sample.txt

The patches apply cleanly to CURRENT. I didn't test RELENG_7 since I
don't have a RELENG_7 PC. :)
Please let me know your thoughts. I'm happy to discuss how to improve
the patch. Correct me if I'm doing something wrong here.

thanks,

Ganbold



Re: ACE on FreeBSD?

2008-09-24 Thread Karim Fodil-Lemelin

Mungyung Ryu wrote:

Hi FreeBSD users,

I've developed a couple of server applications on the Windows platform
with ACE Proactor, and it worked quite well. But because Windows Server
is expensive, I want to move to Linux or FreeBSD.

Recently I have been considering building a server application on
FreeBSD, but the important issue is whether FreeBSD supports the ACE
Proactor framework. I googled about it: Linux doesn't support it well,
because Linux doesn't support AIO (asynchronous I/O) on sockets.
Moreover, most ACE professionals recommend using the Reactor framework
on Linux.

My questions are:

1. Does FreeBSD support AIO on sockets?

Yes

2. Can I use the ACE Proactor framework on FreeBSD 7.0 without any
problem? Is it stable?

It works, although the same recommendation as for Linux stands for
FreeBSD: the Reactor has had much more love than the Proactor on Unices.



Re: ACE on FreeBSD?

2008-09-24 Thread Bruce M. Simpson

Hi,

I looked at ACE years and years ago (~1997) when Doug Schmidt was first 
promoting the ideas behind it. The whole Reactor/Proactor split pretty 
much hangs on the event dispatch which your particular OS supports.


The key observation is whether your target OS implements events in an 
edge-triggered or level-triggered way; I am borrowing definitions from 
electronic engineering here.


You could do a straight port with Proactor, but performance will 
probably suck, because both FreeBSD and (I believe) Linux need to 
emulate POSIX asynchronous I/O operations.


Reactor will generally fare better on UNIX derived systems such as 
FreeBSD and Linux, because its event handling primitives are geared 
towards the level-triggered facilities provided by select().
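
For concreteness, the level-triggered shape a Reactor dispatch reduces to
on UNIX is just the classic select() loop (a minimal sketch, single
descriptor, error handling trimmed):

    #include <sys/types.h>
    #include <sys/select.h>
    #include <unistd.h>

    void
    reactor_loop(int fd)
    {
            char buf[2048];
            fd_set rfds;
            ssize_t n;

            for (;;) {
                    FD_ZERO(&rfds);
                    FD_SET(fd, &rfds);
                    /* Level-triggered: select() keeps reporting "readable"
                     * until the pending data has been drained. */
                    if (select(fd + 1, &rfds, NULL, NULL, NULL) < 0)
                            break;
                    if (FD_ISSET(fd, &rfds)) {
                            n = read(fd, buf, sizeof(buf));
                            if (n <= 0)
                                    break;  /* EOF or error */
                            /* hand buf[0..n) to the registered handler */
                    }
            }
    }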


In Windows, Winsock events use asynchronous notifications which may be 
tied to Win32 EVENT objects, and the usual Kernel32.DLL thread 
primitives are used around this. This makes Proactor more appropriate in 
that environment.


XORP does some similar stuff to ACE under the hood to support the native 
socket facilities of both Windows and FreeBSD/Linux. It's hybridized, but 
it behaves more like Reactor because we run in a single thread, and you 
have to force Winsock's helper thread to run (it runs by preempting you) 
using some file handle and socket tricks.


I don't currently know about stability of ACE on FreeBSD.

cheers
BMS


Re: Proposed patch, convert IFQ_MAXLEN to kernel tunable...

2008-09-24 Thread Bruce M. Simpson

Hi,

I agree with the intent of the change, namely that the IPv4 and IPv6 
input queues should have a tunable queue length. However, the change as 
provided makes the definition of IFQ_MAXLEN global and dependent upon a 
variable.


[EMAIL PROTECTED] wrote:

Hi,

It turns out that the last time anyone looked at this constant was
before 1994, and it's very likely time to turn it into a kernel
tunable.  On hosts that have a high rate of packet transmission,
packets can be dropped at the interface queue because this value is
too small.  Rather than make a sweeping code change, I propose the
following change to the macro, plus updates to a couple of places in
the IP and IPv6 stacks that were using this macro to set their own
global variables.
  


This isn't appropriate for many uses of ifqs, which might be internal to 
a given driver or subsystem and which may use IFQ_MAXLEN for convenience, 
as Ruslan has pointed out. I have code elsewhere which does this.


Can you please do this on a per-protocol-stack basis? I.e., give IPv4 and 
IPv6 their own TUNABLE queue length.
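
For concreteness, the per-protocol shape would be something like the
following in sys/netinet/ip_input.c (a sketch only, reusing the existing
ipqmaxlen/ipintrq plumbing; IPv6 would get the analogous knob under
net.inet6):

    static int ipqmaxlen = IFQ_MAXLEN;      /* default unchanged */

    /* New: a loader tunable, settable from loader.conf at boot. */
    TUNABLE_INT("net.inet.ip.intr_queue_maxlen", &ipqmaxlen);

    /* The existing run-time sysctl stays as it is. */
    SYSCTL_INT(_net_inet_ip, IPCTL_INTRQMAXLEN, intr_queue_maxlen,
        CTLFLAG_RW, &ipintrq.ifq_maxlen, 0,
        "Maximum size of the IP input queue");

    /* ip_init() already seeds the queue: ipintrq.ifq_maxlen = ipqmaxlen; */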


thanks
BMS


Re: Proposed patch, convert IFQ_MAXLEN to kernel tunable...

2008-09-24 Thread Bruce M. Simpson

[EMAIL PROTECTED] wrote:

...
I found no occurrences of the above in our code base.  I used cscope
to search all of src/sys.  Are you aware of any occurrences of this?
  


I have been using IFQ_MAXLEN to size buffer queues internal to some 
IGMPv3 stuff.


I don't feel comfortable with a change which sizes the queues for both 
the IPv4 and IPv6 stacks from a variable which is obscured by a macro.


thanks
BMS


Re: lost routes

2008-09-24 Thread Bruce M. Simpson

Giulio Ferro wrote:

There are no messages in the logs, and no interface has been touched.
Anyway, since there are a lot of routes and only one gets deleted, I
don't think it depends on an interface change (that would delete them
all, wouldn't it?)


Normally static routes only get touched if the state of the underlying 
ifp/ifa changes. There are paths in netinet which will cause routes to 
be deleted in this situation.


Occasionally the idea of a floating static route resurfaces... search the 
PR database for this term for possibly related reports.


cheers
BMS



Re: Proposed patch, convert IFQ_MAXLEN to kernel tunable...

2008-09-24 Thread gnn
At Wed, 24 Sep 2008 15:50:32 +0100,
Bruce M. Simpson wrote:
 
 Hi,
 
 I agree with the intent of the change, namely that the IPv4 and IPv6 
 input queues should have a tunable queue length. However, the change as 
 provided makes the definition of IFQ_MAXLEN global and dependent upon a 
 variable.
 
 [EMAIL PROTECTED] wrote:
  Hi,
 
  It turns out that the last time anyone looked at this constant was
  before 1994, and it's very likely time to turn it into a kernel
  tunable.  On hosts that have a high rate of packet transmission,
  packets can be dropped at the interface queue because this value is
  too small.  Rather than make a sweeping code change, I propose the
  following change to the macro, plus updates to a couple of places in
  the IP and IPv6 stacks that were using this macro to set their own
  global variables.

 
 This isn't appropriate for many uses of ifqs, which might be internal to 
 a given driver or subsystem and which may use IFQ_MAXLEN for 
 convenience, as Ruslan has pointed out. I have code elsewhere which does 
 this.
 
 Can you please do this on a per-protocol-stack basis? I.e., give IPv4 
 and IPv6 their own TUNABLE queue length.
 

Actually, what we'd need is N of these, since my target is the send
queue, not the input queue.  Let me look at this some more.

Best,
George


Re: Proposed patch, convert IFQ_MAXLEN to kernel tunable...

2008-09-24 Thread John-Mark Gurney
George V. Neville-Neil wrote this message on Tue, Sep 23, 2008 at 15:29 -0400:
 It turns out that the last time anyone looked at this constant was
 before 1994, and it's very likely time to turn it into a kernel
 tunable.  On hosts that have a high rate of packet transmission,
 packets can be dropped at the interface queue because this value is
 too small.  Rather than make a sweeping code change, I propose the
 following change to the macro, plus updates to a couple of places in
 the IP and IPv6 stacks that were using this macro to set their own
 global variables.

The better solution is to resurrect rwatson's patch that eliminates the
interface queue and does direct dispatch to the ethernet driver.
Usually the driver already has a queue of 512 or more packets, so putting
them into a second queue doesn't provide much benefit besides increasing
the amount of locking necessary to deliver packets.

-- 
  John-Mark Gurney  Voice: +1 415 225 5579

 All that I will do, has been done, All that I have, has not.


Re: ACE on FreeBSD?

2008-09-24 Thread Julian Elischer

Bruce M. Simpson wrote:

Hi,

I looked at ACE years and years ago (~1997) when Doug Schmidt was first 
promoting the ideas behind it. The whole Reactor/Proactor split pretty 
much hangs on the event dispatch which your particular OS supports.


The key observation is whether your target OS implements events in an 
edge-triggered or level-triggered way; I am borrowing definitions from 
electronic engineering here.


You could do a straight port with Proactor, but performance will 
probably suck, because both FreeBSD and (I believe) Linux need to 
emulate POSIX asynchronous I/O operations.


Reactor will generally fare better on UNIX derived systems such as 
FreeBSD and Linux, because its event handling primitives are geared 
towards the level-triggered facilities provided by select().


A true FreeBSD port would use kevent with AIO.
At Cisco/Ironport we use AIO with the built-in kevent triggering to
great effect. Certainly for sockets it works VERY well.
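
For the archive, the idiom is roughly the following (a sketch from
memory, error handling omitted; not our production code):

    #include <sys/types.h>
    #include <sys/event.h>
    #include <aio.h>
    #include <string.h>

    /* Post an aio_read() whose completion is delivered to a kqueue
     * as an EVFILT_AIO event instead of a signal. */
    static void
    post_read(int kq, int fd, void *buf, size_t len, struct aiocb *iocb)
    {
            memset(iocb, 0, sizeof(*iocb));
            iocb->aio_fildes = fd;
            iocb->aio_buf = buf;
            iocb->aio_nbytes = len;
            iocb->aio_sigevent.sigev_notify = SIGEV_KEVENT;
            iocb->aio_sigevent.sigev_notify_kqueue = kq;
            iocb->aio_sigevent.sigev_value.sival_ptr = iocb;
            (void)aio_read(iocb);
    }

    /* Reap one completion from the kqueue. */
    static void
    reap_one(int kq)
    {
            struct kevent ev;
            struct aiocb *done;

            if (kevent(kq, NULL, 0, &ev, 1, NULL) == 1) {
                    done = ev.udata;        /* sival_ptr set above */
                    /* aio_return() yields the byte count, as read() would */
                    (void)aio_return(done);
            }
    }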



In Windows, Winsock events use asynchronous notifications which may be 
tied to Win32 EVENT objects, and the usual Kernel32.DLL thread 
primitives are used around this. This makes Proactor more appropriate in 
that environment.


XORP does some similar stuff to ACE under the hood to support the native 
socket facilities of both Windows and FreeBSD/Linux. It's hybridized, but 
it behaves more like Reactor because we run in a single thread, and you 
have to force Winsock's helper thread to run (it runs by preempting you) 
using some file handle and socket tricks.


I don't currently know about stability of ACE on FreeBSD.

cheers
BMS


Re: ACE on FreeBSD?

2008-09-24 Thread Julian Elischer

Julian Elischer wrote:

Bruce M. Simpson wrote:

Hi,

I looked at ACE years and years ago (~1997) when Doug Schmidt was 
first promoting the ideas behind it. The whole Reactor/Proactor split 
pretty much hangs on the event dispatch which your particular OS 
supports.


The key observation is whether your target OS implements events in an 
edge-triggered or level-triggered way; I am borrowing definitions from 
electronic engineering here.


You could do a straight port with Proactor, but performance will 
probably suck, because both FreeBSD and (I believe) Linux need to 
emulate POSIX asynchronous I/O operations.


Reactor will generally fare better on UNIX derived systems such as 
FreeBSD and Linux, because its event handling primitives are geared 
towards the level-triggered facilities provided by select().


A true FreeBSD port would use kevent with AIO.
At Cisco/Ironport we use AIO with the built-in kevent triggering to
great effect. Certainly for sockets it works VERY well.


Sorry, I meant for sockets and raw devices (what we use them for).
Sockets don't need AIO to work well with kevent, but raw devices do.
Luckily it works as advertised.






In Windows, Winsock events use asynchronous notifications which may be 
tied to Win32 EVENT objects, and the usual Kernel32.DLL thread 
primitives are used around this. This makes Proactor more appropriate 
in that environment.


XORP does some similar stuff to ACE under the hood to support the 
native socket facilities of both Windows and FreeBSD/Linux. It's 
hybridized, but it behaves more like Reactor because we run in a single 
thread, and you have to force Winsock's helper thread to run (it runs 
by preempting you) using some file handle and socket tricks.


I don't currently know about stability of ACE on FreeBSD.

cheers
BMS


Re: [X-POST] Anyone porting NetworkManager to FreeBSD ?

2008-09-24 Thread Debarshi Ray
 I was thinking about porting it, because I really need this thing on
 my laptop and to have some programming experience. I just wanted to
 have a companion, because I'm not sure I can handle this by myself and
 because I'm pretty lazy these days, so I need to feel responsibility

Ashish and I are working on a library (libroute) that basically
abstracts out the various interfaces offered by different kernels for
interacting with routing tables, etc. Currently NetworkManager uses
libnl [1], which is a wrapper over the Linux kernel's PF_NETLINK
socket interface, but it's entirely Linux-specific. So libroute will
have a backend for PF_NETLINK (using libnl), one for PF_ROUTE, and so
on.
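
For reference, the PF_ROUTE side is just rt_msghdr messages exchanged on
a raw routing socket, along these lines (a sketch, no error handling;
this is the kernel interface underneath, not libroute's API):

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <net/if.h>
    #include <net/route.h>
    #include <unistd.h>

    /* Watch the kernel routing table for changes on a BSD system. */
    static void
    watch_routes(void)
    {
            char buf[2048];
            struct rt_msghdr *rtm;
            ssize_t n;
            int s;

            s = socket(PF_ROUTE, SOCK_RAW, 0);
            for (;;) {
                    n = read(s, buf, sizeof(buf));
                    if (n <= 0)
                            break;
                    rtm = (struct rt_msghdr *)buf;
                    if (rtm->rtm_type == RTM_ADD ||
                        rtm->rtm_type == RTM_DELETE) {
                            /* sockaddrs (dst, gateway, netmask, ...)
                             * follow the header, per the rtm_addrs bits */
                    }
            }
            close(s);
    }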

Once we have the required functionality, we intend to modify
NetworkManager so that it calls libroute instead of libnl.

The initial code is available here:
git://bombadil.infradead.org/~rishi/inetutils.git (see libroute/ and
route/). I must say that libroute is still in the initial stages of
development. :-)

Interested?

Happy hacking,
Debarshi
[1] http://people.suug.ch/~tgr/libnl


Re: [X-POST] Anyone porting NetworkManager to FreeBSD ?

2008-09-24 Thread Debarshi Ray
 Yep, I'm interested. :)

Awesome. Clone the Git tree and start hacking.

Ashish is working on a patch to add IPv6 support to BSD's show
function (see bsd_show.c), while I am reworking the Linux backend to
use libnl instead of mucking with PF_NETLINK directly.

The immediate TODO items are to implement 'add' and 'delete' support for BSD.
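
On BSD the 'add' side boils down to writing an rt_msghdr plus trailing
sockaddrs to the routing socket, roughly like this (a sketch, IPv4 host
route, no error handling; not libroute's eventual API):

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <net/if.h>
    #include <net/route.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <unistd.h>

    /* Add a host route to 'dst' via gateway 'gw'. */
    static void
    route_add(struct in_addr dst, struct in_addr gw)
    {
            struct {
                    struct rt_msghdr        rtm;
                    struct sockaddr_in      dst;
                    struct sockaddr_in      gw;
            } msg;
            int s = socket(PF_ROUTE, SOCK_RAW, 0);

            memset(&msg, 0, sizeof(msg));
            msg.rtm.rtm_version = RTM_VERSION;
            msg.rtm.rtm_type = RTM_ADD;
            msg.rtm.rtm_flags = RTF_UP | RTF_GATEWAY | RTF_HOST;
            msg.rtm.rtm_addrs = RTA_DST | RTA_GATEWAY;
            msg.rtm.rtm_seq = 1;
            msg.rtm.rtm_msglen = sizeof(msg);

            msg.dst.sin_len = sizeof(msg.dst);
            msg.dst.sin_family = AF_INET;
            msg.dst.sin_addr = dst;

            msg.gw.sin_len = sizeof(msg.gw);
            msg.gw.sin_family = AF_INET;
            msg.gw.sin_addr = gw;

            write(s, &msg, sizeof(msg));    /* kernel replies on 's' */
            close(s);
    }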

Happy hacking,
Debarshi


Re: Proposed patch, convert IFQ_MAXLEN to kernel tunable...

2008-09-24 Thread gnn
At Wed, 24 Sep 2008 12:53:31 -0700,
John-Mark Gurney wrote:
 
 George V. Neville-Neil wrote this message on Tue, Sep 23, 2008 at 15:29 -0400:
  It turns out that the last time anyone looked at this constant was
  before 1994, and it's very likely time to turn it into a kernel
  tunable.  On hosts that have a high rate of packet transmission,
  packets can be dropped at the interface queue because this value is
  too small.  Rather than make a sweeping code change, I propose the
  following change to the macro, plus updates to a couple of places in
  the IP and IPv6 stacks that were using this macro to set their own
  global variables.
 
 The better solution is to resurrect rwatson's patch that eliminates the
 interface queue and does direct dispatch to the ethernet driver.
 Usually the driver already has a queue of 512 or more packets, so
 putting them into a second queue doesn't provide much benefit besides
 increasing the amount of locking necessary to deliver packets.

Actually, I am making this change because I found that on 10G hardware
the queue is too small.  Also, there are many systems where you might
want to increase this, usually ones that are heavily biased towards
transmit only, like a multicast repeater of some sort.

Best,
George


Re: kern/126742: [panic] kernel panic when sending file via ng_ubt(4)

2008-09-24 Thread emax
Synopsis: [panic] kernel panic when sending file via ng_ubt(4)

Responsible-Changed-From-To: freebsd-net->emax
Responsible-Changed-By: emax
Responsible-Changed-When: Wed Sep 24 23:45:40 UTC 2008
Responsible-Changed-Why: 
over to me

http://www.freebsd.org/cgi/query-pr.cgi?pr=126742


Re: Question regarding NFS

2008-09-24 Thread Alfred Perlstein
* Adam Stylinski [EMAIL PROTECTED] [080918 17:15] wrote:
 Hello,
   I am running an IPCop firewall for my entire network.  I have a
 wireless network device on the blue subnet which must access a freebsd NFS
 server.  In order to do this, I need to open a DMZ pinhole on a few select
 ports.  It's my understanding that NFS chooses random ports and I was
 wondering if there was a way I could fix this.  There is a good reason that
 the subnet for the wireless is separate from the wired and I'd rather not
 configure this thing over a VPN.  The client connecting to the NFS server is
 a Voyage computer (pretty much a small Debian).  Also, if at all possible,
 I'd like to keep performance reasonably high when large volumes of clients
 are connecting to the NFS server, I'm not sure if binding to one port may or
 may not make this impossible.  I apologize for my stupidity and lack of
 understanding when it comes to NFS.  Any help would be gladly appreciated,
 guys.

_usually_ NFS uses port 2049 on the server side.  I think the client may
bind to a random low port; this would be annoying to change, but could
be done with a kernel hack relatively easily.  Look at the code in
src/sys/nfsclient/nfs_socket.c; there's some code there that deals with
binding sockets that you can play with.
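
The userland shape of the idea is just an explicit bind() to a fixed
source port before connecting (illustrative only; the in-kernel code in
nfs_socket.c does not look like this):

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <stdint.h>
    #include <string.h>

    /* Pin a socket's source port so a firewall pinhole can match it.
     * Ports below 1024 need root, as the NFS client's reserved-port
     * binding does. */
    static int
    bind_fixed_port(int s, uint16_t port)
    {
            struct sockaddr_in sin;

            memset(&sin, 0, sizeof(sin));
            sin.sin_family = AF_INET;
            sin.sin_len = sizeof(sin);      /* BSD-specific field */
            sin.sin_port = htons(port);
            sin.sin_addr.s_addr = htonl(INADDR_ANY);
            return (bind(s, (struct sockaddr *)&sin, sizeof(sin)));
    }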

-- 
- Alfred Perlstein


Re: Proposed patch, convert IFQ_MAXLEN to kernel tunable...

2008-09-24 Thread Julian Elischer

[EMAIL PROTECTED] wrote:

At Wed, 24 Sep 2008 12:53:31 -0700,
John-Mark Gurney wrote:

George V. Neville-Neil wrote this message on Tue, Sep 23, 2008 at 15:29 -0400:

It turns out that the last time anyone looked at this constant was
before 1994, and it's very likely time to turn it into a kernel
tunable.  On hosts that have a high rate of packet transmission,
packets can be dropped at the interface queue because this value is
too small.  Rather than make a sweeping code change, I propose the
following change to the macro, plus updates to a couple of places in
the IP and IPv6 stacks that were using this macro to set their own
global variables.

The better solution is to resurrect rwatson's patch that eliminates the
interface queue and does direct dispatch to the ethernet driver.
Usually the driver already has a queue of 512 or more packets, so putting
them into a second queue doesn't provide much benefit besides increasing
the amount of locking necessary to deliver packets.


Actually, I am making this change because I found that on 10G hardware
the queue is too small.  Also, there are many systems where you might
want to increase this, usually ones that are heavily biased towards
transmit only, like a multicast repeater of some sort.



One system I have seen, which I thought made sense, defined the queue
length globally in milliseconds, and each interface interpreted that as
a different packet count (for example, a 1 ms budget is roughly 800
full-size 1500-byte frames at 10 Gb/s, but less than one frame at
10 Mb/s).



Best,
George