Re: possible zfs bug? lost all pools

2008-05-18 Thread Claus Guttesen
>> Did you run '/etc/rc.d/hostid start' first?
>> IIRC, it is needed before zfs will mount in single-user mode.
>
> Just curious, as I've been wanting to fiddle around with ZFS in my spare
> time...  what is the solution here if you have failed hardware and you want
> to move your ZFS disks to another system (with a different host id)?

'zpool import', possibly with -f.  See zpool(1M).
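
For example, on the new machine, something along these lines should do it
(the pool name here is only a placeholder):

  zpool import          # list pools found on the attached disks
  zpool import -f tank  # force-import a pool last used by another host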

-- 
regards
Claus

When lenity and cruelty play for a kingdom,
the gentlest gamester is the soonest winner.

Shakespeare


Re: Disk access/MPT under ESX3.5

2008-05-18 Thread Shunsuke SHINOMIYA

> da0 at mpt0 bus 0 target 0 lun 0
> da0:  Fixed Direct Access SCSI-2 device
> da0: 3.300MB/s transfers
> da0: Command Queueing Enabled
> da0: 34816MB (71303168 512 byte sectors: 255H 63S/T 4438C)

 Can you try re-negotiating the transfer rate using camcontrol?
 `camcontrol negotiate 0:0 -W 16' might improve the transfer settings.

 camcontrol needs pass(4) passthrough device support in the kernel.
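
 A rough sketch of the steps (GENERIC kernels should already include pass(4);
 the device name da0 is taken from the dmesg above):

   device  pass                        # kernel config line for passthrough devices

   camcontrol negotiate da0            # show the current negotiation settings
   camcontrol negotiate 0:0 -W 16 -a   # request a 16-bit wide bus and apply it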


-- 
Shunsuke SHINOMIYA <[EMAIL PROTECTED]>



Re: possible zfs bug? lost all pools

2008-05-18 Thread Charles Sprickman

On Sun, 18 May 2008, Torfinn Ingolfsen wrote:

> On Sun, 18 May 2008 09:56:17 -0300
> JoaoBR <[EMAIL PROTECTED]> wrote:
>
>> after trying to mount my zfs pools in single user mode I got the
>> following message for each:
>>
>> May 18 09:09:36 gw kernel: ZFS: WARNING: pool 'cache1' could not be
>> loaded as it was last accessed by another system (host:
>> gw.bb1.matik.com.br hostid: 0xbefb4a0f).  See:
>> http://www.sun.com/msg/ZFS-8000-EY
>
> Did you run '/etc/rc.d/hostid start' first?
> IIRC, it is needed before zfs will mount in single-user mode.

Just curious, as I've been wanting to fiddle around with ZFS in my spare
time...  what is the solution here if you have failed hardware and you
want to move your ZFS disks to another system (with a different host id)?

Thanks,

Charles

> HTH
> --
> Regards,
> Torfinn Ingolfsen



Re: Auto bridge for qemu network

2008-05-18 Thread Maho NAKATA
From: bazzoola <[EMAIL PROTECTED]>
Subject: Auto bridge for qemu network [was: kqemu support: not compiled]
Date: Thu, 15 May 2008 03:06:25 -0400

> Also, is it possible to update this page, it has some outdated info:
> http://people.freebsd.org/~maho/qemu/qemu.html
> *It is the 1st answer from google when asked "freebsd qemu"

Sorry, I have been very lazy about this ...
I'll update it (migrate it to a wiki), hopefully soon ...

-- Nakata Maho http://accc.riken.jp/maho/




Re: Status of ZFS in -stable?

2008-05-18 Thread Zaphod Beeblebrox
On Wed, May 14, 2008 at 4:35 PM, Marc UBM Bocklet <[EMAIL PROTECTED]> wrote:

> On Tue, 13 May 2008 00:26:49 -0400
> Pierre-Luc Drouin <[EMAIL PROTECTED]> wrote:
>
> > I would like to know if the memory allocation problem with zfs has
> > been fixed in -stable? Is zfs considered to be more "stable" now?
> >
> > Thanks!
> > Pierre-Luc Drouin
>
> We just set up a zfs based fileserver in our home. It's accessed via
> samba and ftp, connected via an em 1gb card.
> FreeBSD is installed on an 80GB ufs2 disk, the zpool consists of two
> 750GB disks, set up as raidz (my mistake, mirror would probably have
> been the better choice).
> We've been using it for about 2 weeks now and there have been no
> problems (transferred lots of big and small files off/on it, maxing out
> disk speed).


For standard filestore, Samba/NFS has worked fine.  However, when using
Norton Ghost to make backup snapshots, the files (on ZFS) come out
corrupt.  They are not corrupt on a UFS-backed Samba service.


Re: Packet-corruption with re(4)

2008-05-18 Thread Peter Ankerstål


On Apr 29, 2008, at 2:08 PM, Jeremy Chadwick wrote:

> I'd recommend staying away from Realtek NICs.  Pick up an Intel Pro/1000
> GT or PT.  Realtek has a well-known history of issues.

Just wanted to tell you guys that so far an em(4) seems to have fixed
the problem.

--
Peter Ankerstål
[EMAIL PROTECTED]




Re: Disk access/MPT under ESX3.5

2008-05-18 Thread Daniel Ponticello



Clifton Royston wrote:

> If you are accessing a software emulation of a SCSI disk, I would
> offhand expect the CPU load to go up substantially when you are reading
> or writing it at the maximum achievable bandwidth.  You can't expect
> normal relative load results under an emulator, and while most
> application or kernel code runs natively, I/O under VMWare will zoom in
> and out of the emulator a lot.  I'm afraid I can't give you a
> definitive answer as I have VMWare but haven't set up FreeBSD under it
> yet.
>
>   -- Clifton

Thanks Clifton,
my problem is that the system (console) becomes very unresponsive while I/O
is writing at maximum bandwidth.
The system does become more responsive when using the ULE scheduler instead
of 4BSD during I/O.

Is there a way to limit the maximum I/O bandwidth used by the controller?

Daniel

--

WBR,
Cordiali Saluti,

Daniel Ponticello, VP of Engineering

Network Coordination Centre of Skytek

---
- For further information about our services: 
- Please visit our website at http://www.Skytek.it

---



Re: Apache seg faults -- Possible problem with libc? [solved]

2008-05-18 Thread Norbert Papke
On May 17, 2008, Norbert Papke wrote:
> Environment:  FreeBSD 7.0 Stable (as of Apr 30), apache-2.0.63
>
> I am experiencing Apache crashes on a fairly consistent and frequent basis.
> The crash occurs in strncmp().  To help with the diagnosis, I have rebuilt
> libc with debug symbols.  Here is a typical stack dump:
>
>   #0  strncmp () at /usr/src/lib/libc/i386/string/strncmp.S:69
>   #1  0x2832558c in getenv (name=0x28338648 "TZ")
>  at /usr/src/lib/libc/stdlib/getenv.c:144
>   #2  0x2830ce3a in tzset_basic (rdlocked=0)
>  at /usr/src/lib/libc/stdtime/localtime.c:1013
>   #3  0x2830d42f in localtime (timep=0xbfbfc1d4)
>  at /usr/src/lib/libc/stdtime/localtime.c:1158

The problem is not in libc.  Instead it is caused by Apache's PHP5 module.  
Under certain circumstances, the module will allocate memory for an 
environment variable, pass this variable to putenv(), and then immediately 
free the memory.  putenv(), of course, requires the environment variable to 
remain valid.  The seg fault occurs at a subsequent getenv() invocation.
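
In other words, the broken pattern boils down to something like the following
(a simplified C sketch for illustration only, not the actual PHP code):

  #include <stdio.h>
  #include <stdlib.h>

  int
  main(void)
  {
      char *env = malloc(32);

      snprintf(env, 32, "TZ=%s", "UTC");
      putenv(env);                  /* environ now points into this buffer */
      free(env);                    /* ... which is freed right away       */
      printf("%s\n", getenv("TZ")); /* later getenv() walks freed memory   */
      return (0);
  }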

I have contacted the PHP5 maintainer with this information.

Best,

-- Norbert.


Re: Disk access/MPT under ESX3.5

2008-05-18 Thread Clifton Royston
On Sun, May 18, 2008 at 04:15:55PM +0200, Daniel Ponticello wrote:
> Hello,
> I'm running some tests with FreeBSD 6.3 and FreeBSD 7-STABLE, using both
> AMD64 and i386 arch, with both schedulers (ULE and 4BSD), on a VMware ESX
> 3.5 server.  Everything runs almost fine, except for disk access.
> Performance is quite OK (around 60 MB/sec), but when accessing disks, the
> System (kernel) CPU load goes up to 70%.  This doesn't look normal.  The
> same behavior is present on all test configurations.
...
> da0 at mpt0 bus 0 target 0 lun 0
> da0:  Fixed Direct Access SCSI-2 device
> da0: 3.300MB/s transfers
> da0: Command Queueing Enabled
> da0: 34816MB (71303168 512 byte sectors: 255H 63S/T 4438C)
>
> Any suggestions?

If you are accessing a software emulation of a SCSI disk, I would
offhand expect the CPU load to go up substantially when you are reading
or writing it at the maximum achievable bandwidth.  You can't expect
normal relative load results under an emulator, and while most
application or kernel code runs natively, I/O under VMWare will zoom in
and out of the emulator a lot.  I'm afraid I can't give you a
definitive answer as I have VMWare but haven't set up FreeBSD under it
yet.
 
  -- Clifton

-- 
Clifton Royston  --  [EMAIL PROTECTED] / [EMAIL PROTECTED]
   President  - I and I Computing * http://www.iandicomputing.com/
 Custom programming, network design, systems and network consulting services


Re: possible zfs bug? lost all pools

2008-05-18 Thread JoaoBR
On Sunday 18 May 2008 12:39:11 Jeremy Chadwick wrote:
> On Sun, May 18, 2008 at 12:20:33PM -0300, JoaoBR wrote:
> > On Sunday 18 May 2008 11:11:38 Greg Byshenk wrote:
> > > On Sun, May 18, 2008 at 09:56:17AM -0300, JoaoBR wrote:
> > > > after trying to mount my zfs pools in single user mode I got the
> > > > following message for each:
> > > >
> > > > May 18 09:09:36 gw kernel: ZFS: WARNING: pool 'cache1' could not be
> > > > loaded as it was last accessed by another system (host:
> > > > gw.bb1.matik.com.br hostid: 0xbefb4a0f).  See:
> > > > http://www.sun.com/msg/ZFS-8000-EY
> > > >
> > > > any zpool cmd returned nothing but non-existent zfs; it seems the
> > > > zfs info on the disks was gone
> > > >
> > > > to double-check I recreated them, rebooted in single user mode and
> > > > repeated the story, same thing; trying /etc/rc.d/zfs start
> > > > returns the above msg and the pools are gone ...
> > > >
> > > > I guess this is kind of wrong
> > >
> > > I think that the problem is related to the absence of a hostid when in
> > > single-user.  Try running '/etc/rc.d/hostid start' before mounting.
> >
> > well, obviously that came to my mind after seeing the msg ...
> >
> > anyway, the pools should not vanish, don't you agree?
> >
> > and if necessary /etc/rc.d/zfs should start hostid, or at least set a
> > different REQUIRE and warn
>
> I've been in the same boat you are, and I was told the same thing.  I've
> documented the situation on my Wiki, and the necessary workarounds.
>
> http://wiki.freebsd.org/JeremyChadwick/Commonly_reported_issues
>

nice work this page, thanks

> This sort of thing needs to get hammered out before ZFS can be
> considered "usable" from a system administration perspective.  Expecting
> people to remember to run an rc.d startup script before they can use any
> of their filesystems borders on unrealistic.

yes, but on the other hand we know it is new stuff, and sometimes the price is
what happened to me this morning; but then it also helps to make things better

anyway, a little fix to rc.d/zfs like

if [ ! "`sysctl -n kern.hostid 2>/dev/null`" ]; then echo "zfs needs hostid
first"; exit 0; fi

or something like that as a precmd, or first in zfs_start_main, should fix
this issue; see the sketch below
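
A rough, untested sketch of such a precmd (assuming an unset hostid reads
back as empty or 0; the function name is only an example):

  zfs_precmd()
  {
      # refuse to start zfs until /etc/rc.d/hostid has set kern.hostid
      case "`sysctl -n kern.hostid 2>/dev/null`" in
      ''|0)
          echo "zfs needs hostid first; run /etc/rc.d/hostid start"
          return 1
          ;;
      esac
      return 0
  }
  start_precmd="zfs_precmd"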


while we are at it, there are more things I experienced that are still not working:


swapon|swapoff from the rc.d/zfs script does not work either

not sure what it is, because the same part of the script run as root works;
adding a dash to #!/bin/sh does not help either; from rc.d/zfs the state
returns a dash

I do not see the sense in rc.d/zfs running `zfs share`, since it is already the
default when the sharenfs property is enabled

man page typo: it says swap -a ... not swapon -a

the subcommands volinit and volfini are not in the manual at all

the man page says that zfs cannot be a dump device; not sure if I understand
that as meant, but I can dump to zfs very well and fast, as long as
recordsize=128

but in the end, for the short time zfs has been here, it gives me respectable
performance results, and it is stable for me as well

-- 

João









Re: connect(): Operation not permitted

2008-05-18 Thread Kian Mohageri
On Sun, May 18, 2008 at 3:33 AM, Johan Ström <[EMAIL PROTECTED]> wrote:
> On May 18, 2008, at 9:19 AM, Matthew Seaman wrote:
>
>> Johan Ström wrote:
>>
>>> drop all traffic)? A check with pfctl -vsr reveals that the actual rule
>>> inserted is "pass on lo0 inet from 123.123.123.123 to 123.123.123.123 flags
>>> S/SA keep state". Where did that "keep state" come from?
>>
>> 'flags S/SA keep state' is the default now for tcp filter rules -- that
>> was new in 7.0 reflecting the upstream changes made between the 4.0 and
>> 4.1
>> releases of OpenBSD.  If you want a stateless rule, append 'no state'.
>>
>> http://www.openbsd.org/faq/pf/filter.html#state
>
> Thanks! I was actually looking around in the pf.conf manpage but failed to
> find it yesterday, but looking closer today I now saw it.
> Applied the no state (and quick) to the rule, and now no state is created.
> And the problem I had in the first place seems to have been resolved too
> now, even though it didn't look like a state problem.. (started to deny new
> connections much earlier than the state table was full, although maybe I
> wasn't looking for updates fast enough or something).
>

I'd be willing to bet it's because you're reusing the source port on a
new connection before the old state expires.

You'll know if you check the state-mismatch counter.
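
For example (state-mismatch is one of the counters printed by pfctl):

  pfctl -si | grep state-mismatch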

Anyway, glad you found a resolution.

-Kian

Re: possible zfs bug? lost all pools

2008-05-18 Thread Jeremy Chadwick
On Sun, May 18, 2008 at 12:20:33PM -0300, JoaoBR wrote:
> On Sunday 18 May 2008 11:11:38 Greg Byshenk wrote:
> > On Sun, May 18, 2008 at 09:56:17AM -0300, JoaoBR wrote:
> > > after trying to mount my zfs pools in single user mode I got the
> > > following message for each:
> > >
> > > May 18 09:09:36 gw kernel: ZFS: WARNING: pool 'cache1' could not be
> > > loaded as it was last accessed by another system (host:
> > > gw.bb1.matik.com.br hostid: 0xbefb4a0f).  See:
> > > http://www.sun.com/msg/ZFS-8000-EY
> > >
> > > any zpool cmd returned nothing but non-existent zfs; it seems the zfs
> > > info on the disks was gone
> > >
> > > to double-check I recreated them, rebooted in single user mode and
> > > repeated the story, same thing; trying /etc/rc.d/zfs start returns
> > > the above msg and the pools are gone ...
> > >
> > > I guess this is kind of wrong
> >
> > I think that the problem is related to the absence of a hostid when in
> > single-user.  Try running '/etc/rc.d/hostid start' before mounting.
> 
> well, obviously that came to my mind after seeing the msg ...
> 
> anyway, the pools should not vanish, don't you agree?
> 
> and if necessary /etc/rc.d/zfs should start hostid, or at least set a
> different REQUIRE and warn

I've been in the same boat you are, and I was told the same thing.  I've
documented the situation on my Wiki, and the necessary workarounds.

http://wiki.freebsd.org/JeremyChadwick/Commonly_reported_issues

This sort of thing needs to get hammered out before ZFS can be
considered "usable" from a system administration perspective.  Expecting
people to remember to run an rc.d startup script before they can use any
of their filesystems borders on unrealistic.

-- 
| Jeremy Chadwickjdc at parodius.com |
| Parodius Networking   http://www.parodius.com/ |
| UNIX Systems Administrator  Mountain View, CA, USA |
| Making life hard for others since 1977.  PGP: 4BD6C0CB |



Re: possible zfs bug? lost all pools

2008-05-18 Thread JoaoBR
On Sunday 18 May 2008 11:11:38 Greg Byshenk wrote:
> On Sun, May 18, 2008 at 09:56:17AM -0300, JoaoBR wrote:
> > after trying to mount my zfs pools in single user mode I got the
> > following message for each:
> >
> > May 18 09:09:36 gw kernel: ZFS: WARNING: pool 'cache1' could not be
> > loaded as it was last accessed by another system (host:
> > gw.bb1.matik.com.br hostid: 0xbefb4a0f).  See:
> > http://www.sun.com/msg/ZFS-8000-EY
> >
> > any zpool cmd returned nothing but non-existent zfs; it seems the zfs
> > info on the disks was gone
> >
> > to double-check I recreated them, rebooted in single user mode and
> > repeated the story, same thing; trying /etc/rc.d/zfs start returns
> > the above msg and the pools are gone ...
> >
> > I guess this is kind of wrong
>
> I think that the problem is related to the absence of a hostid when in
> single-user.  Try running '/etc/rc.d/hostid start' before mounting.
>

well, obviously that came to my mind after seeing the msg ...

anyway, the pools should not vanish, don't you agree?

and if necessary /etc/rc.d/zfs should start hostid, or at least set a
different REQUIRE and warn




thanks
-- 

João









Disk access/MPT under ESX3.5

2008-05-18 Thread Daniel Ponticello

Hello,
I'm running some tests with FreeBSD 6.3 and FreeBSD 7-STABLE, using both
AMD64 and i386 arch, with both schedulers (ULE and 4BSD), on a VMware ESX
3.5 server.  Everything runs almost fine, except for disk access.
Performance is quite OK (around 60 MB/sec), but when accessing disks, the
System (kernel) CPU load goes up to 70%.  This doesn't look normal.  The
same behavior is present on all test configurations.

monitor# dmesg | grep mpt
mpt0:  port 0x1080-0x10ff mem 
0xf481-0xf4810fff irq 17 at device 16.0 on pci0

mpt0: [ITHREAD]
mpt0: MPI Version=1.2.0.0

da0 at mpt0 bus 0 target 0 lun 0
da0:  Fixed Direct Access SCSI-2 device
da0: 3.300MB/s transfers
da0: Command Queueing Enabled
da0: 34816MB (71303168 512 byte sectors: 255H 63S/T 4438C)


Any suggestions?

Thanks,

Daniel

--

WBR,
Cordiali Saluti,

Daniel Ponticello, VP of Engineering

Network Coordination Centre of Skytek

---
- For further information about our services: 
- Please visit our website at http://www.Skytek.it

---



Re: possible zfs bug? lost all pools

2008-05-18 Thread Torfinn Ingolfsen
On Sun, 18 May 2008 09:56:17 -0300
JoaoBR <[EMAIL PROTECTED]> wrote:

> after trying to mount my zfs pools in single user mode I got the
> following message for each:
> 
> May 18 09:09:36 gw kernel: ZFS: WARNING: pool 'cache1' could not be
> loaded as it was last accessed by another system (host:
> gw.bb1.matik.com.br hostid: 0xbefb4a0f).  See:
> http://www.sun.com/msg/ZFS-8000-EY

Did you run '/etc/rc.d/hostid start' first?
IIRC, it is needed before zfs will mount in single-user mode.

HTH
-- 
Regards,
Torfinn Ingolfsen



Re: possible zfs bug? lost all pools

2008-05-18 Thread Greg Byshenk
On Sun, May 18, 2008 at 09:56:17AM -0300, JoaoBR wrote:
 
> after trying to mount my zfs pools in single user mode I got the following 
> message for each:
> 
> May 18 09:09:36 gw kernel: ZFS: WARNING: pool 'cache1' could not be loaded as 
> it was last accessed by another system (host: gw.bb1.matik.com.br hostid: 
> 0xbefb4a0f).  See: http://www.sun.com/msg/ZFS-8000-EY
> 
> any zpool cmd returned nothing but non-existent zfs; it seems the zfs info
> on the disks was gone
> 
> to double-check I recreated them, rebooted in single user mode and repeated
> the story, same thing; trying /etc/rc.d/zfs start returns the above msg
> and the pools are gone ...
> 
> I guess this is kind of wrong 


I think that the problem is related to the absence of a hostid when in
single-user.  Try running '/etc/rc.d/hostid start' before mounting.

http://lists.freebsd.org/pipermail/freebsd-current/2007-July/075001.html
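
In single-user mode that amounts to something like:

  /etc/rc.d/hostid start
  /etc/rc.d/zfs start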


-- 
greg byshenk  -  [EMAIL PROTECTED]  -  Leiden, NL


possible zfs bug? lost all pools

2008-05-18 Thread JoaoBR

after trying to mount my zfs pools in single user mode I got the following 
message for each:

May 18 09:09:36 gw kernel: ZFS: WARNING: pool 'cache1' could not be loaded as 
it was last accessed by another system (host: gw.bb1.matik.com.br hostid: 
0xbefb4a0f).  See: http://www.sun.com/msg/ZFS-8000-EY

any zpool cmd returned nothing but non-existent zfs; it seems the zfs info on
the disks was gone

to double-check I recreated them, rebooted in single user mode and repeated
the story, same thing; trying /etc/rc.d/zfs start returns the above msg
and the pools are gone ...

I guess this is kind of wrong 


-- 

João









Re: connect(): Operation not permitted

2008-05-18 Thread Johan Ström

On May 18, 2008, at 9:19 AM, Matthew Seaman wrote:

> Johan Ström wrote:
>
>> drop all traffic)? A check with pfctl -vsr reveals that the actual rule
>> inserted is "pass on lo0 inet from 123.123.123.123 to 123.123.123.123
>> flags S/SA keep state". Where did that "keep state" come from?
>
> 'flags S/SA keep state' is the default now for tcp filter rules -- that
> was new in 7.0, reflecting the upstream changes made between the 4.0 and
> 4.1 releases of OpenBSD.  If you want a stateless rule, append 'no state'.
>
> http://www.openbsd.org/faq/pf/filter.html#state

Thanks! I was actually looking around in the pf.conf manpage but failed to
find it yesterday; looking closer today I saw it.
Applied the no state (and quick) to the rule, and now no state is created.
And the problem I had in the first place seems to have been resolved too
now, even though it didn't look like a state problem.. (it started to deny
new connections much earlier than the state table was full, although maybe
I wasn't looking for updates fast enough or something).


Anyway, thanks to everyone helping me out, and of course thanks to
everybody involved in FreeBSD/pf for great products!  It cannot be
said enough times ;)



Re: how much memory does increasing max rules for IPFW take up?

2008-05-18 Thread Ian Smith
On Fri, 16 May 2008, Vivek Khera wrote:

 > How are the buckets used?  Are they hashed per rule number or some  
 > other mechanism?  Nearly all of my states are from the same rule (eg,  
 > on a mail server for the SMTP port rule).

/sys/netinet/ip_fw.h
/sys/netinet/ip_fw2.c

Hashed per flow, (srcip^destip^srcport^dstport) mod curr_dyn_buckets, so
packets for both directions of a given flow hash to the same bucket.  In
the case you mention, you could likely expect reasonable distribution by
src_ip/src_port.

The rule number doesn't contribute to the hash, but is contained in the
dynamic rule entry, ie a matched flow resolves to its rule at the first
check_state or keep_state rule encountered.  Try searching for '_STATE'.

Each bucket just contains a pointer, so on i386 I'd expect 1KB per 256
buckets, see realloc_dynamic_table.  The 'pointees', ipfw_dyn_rule, are
around 70? bytes each with 32-bit pointers, so 4K current dynamic rules
should use around 280KB?  Somebody yell if I'm badly miscalculating ..

 > How should I scale the buckets with the max rules?  The default seems  
 > to be 4096 rules and 256 buckets.  Should I maintain that ratio?

Sounds reasonable.  Extra buckets look cheap, if I'm reading it right,
and memory otherwise appears to be only allocated on use, per new flow,
but I'm ignorant of any other memory allocation overheads.
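
In practice that means bumping both sysctls together, e.g. to quadruple the
defaults while keeping the default 16:1 ratio (values are only an example;
dyn_buckets should be a power of two):

  sysctl net.inet.ip.fw.dyn_max=16384
  sysctl net.inet.ip.fw.dyn_buckets=1024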

caveats: 5.5 sources; C is read-only here; not subscribed to -ipfw

cheers, Ian



Re: connect(): Operation not permitted

2008-05-18 Thread Matthew Seaman

Johan Ström wrote:

> drop all traffic)? A check with pfctl -vsr reveals that the actual rule
> inserted is "pass on lo0 inet from 123.123.123.123 to 123.123.123.123
> flags S/SA keep state". Where did that "keep state" come from?

'flags S/SA keep state' is the default now for tcp filter rules -- that
was new in 7.0, reflecting the upstream changes made between the 4.0 and 4.1
releases of OpenBSD.  If you want a stateless rule, append 'no state'.

http://www.openbsd.org/faq/pf/filter.html#state
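
For instance, the quoted rule rewritten as a stateless one would look
something like:

  pass quick on lo0 inet from 123.123.123.123 to 123.123.123.123 no state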

Cheers,

Matthew

--
Dr Matthew J Seaman MA, D.Phil.   7 Priory Courtyard
 Flat 3
PGP: http://www.infracaninophile.co.uk/pgpkey Ramsgate
 Kent, CT11 9PW


