6.1 quota bugs cause Adaptec 2820SA kernel to crash?

2006-06-22 Thread Ensel Sharon

FreeBSD 6.1-RELEASE, P4 Xeon system, Adaptec 2820SA SATA RAID controller
with all 8 disks in use.

Two arrays are present: a mirror to boot from, and a 6-disk RAID6 array
of size 1.8TB.

I am aware that there are problems with quotas in 6.1, but I am
successfully using them with 6.0-RELEASE and an Adaptec 1610SA.  I figured
it couldn't be any worse...

-

Once it was under load, the system crashed frequently (several times a
day) with the error:

Warning! Controller is no longer running!  code=0xbcef0100

(after a page or so of aac0 timeout messages)

So I disabled quotas on the system, and it has been completely stable ever
since.
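
For reference, "disabling quotas" amounted to backing out the usual quota
setup; a sketch, where the device and mount point are examples:

# /etc/rc.conf: stop enabling and checking quotas at boot
enable_quotas="NO"
check_quotas="NO"

# /etc/fstab: drop the quota options from the affected filesystem
# before: /dev/aacd1s1d  /data  ufs  rw,userquota,groupquota  2  2
# after:  /dev/aacd1s1d  /data  ufs  rw                       2  2

# turn quotas off on the running system
quotaoff -a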

-

Does anyone understand the mechanism whereby the 6.1 quota problems could
cause a RAID controller (or its kernel driver) to crash?  Is that even
possible?  (It certainly seems to be.)

Are there workarounds for the 6.1 quota problems, as in: they don't work
by default, but if you set this or that sysctl, they will?

How close are we to working quota code on 6.1?  Will I be able to just
patch some files in /usr/src and recompile my kernel, or will it require a
full rebuild of the OS?
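
(By "patch and recompile my kernel" I mean the standard kernel-only
rebuild; a sketch, assuming the fix touches only kernel sources and a
stock GENERIC configuration:)

cd /usr/src
make buildkernel KERNCONF=GENERIC
make installkernel KERNCONF=GENERIC
shutdown -r now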

Thanks.



Re: GBDE mounts on top of other mounts (sshfs?) fail ... please help

2006-03-17 Thread Ensel Sharon


On Fri, 17 Mar 2006, Anish Mistry wrote:

> On Thursday 16 March 2006 19:45, Ensel Sharon wrote:
>> I have successfully configured and used a GBDE-encrypted volume.  I
>> followed these instructions:
>>
>> http://0x06.sigabrt.de/howtos/freebsd_encrypted_image_howto.html
>>
>> Easy.  No problem.
>>
>> However, when I place the backing-store file on a mounted sshfs
>> (FUSE) volume, it no longer works.  Specifically, when I issue the
>> command:
>>
>> gbde init /dev/md0 -i -L /etc/gbde/md0
>>
>> and save the resulting file that opens in my editor (without making
>> any changes, as usual), after typing in my passphrase twice, I get
>> this error:
>>
>> Enter new passphrase:
>> Reenter new passphrase:
>> gbde: write: Input/output error
>> #
>>
>> Is this expected?  Is this a specific problem with FUSE, or would
>> this fail if I tried to put the backing store on any other kind of
>> mounted "abnormal" filesystem?  (Say an NFS mount, or another
>> md-backed mount point.)
>>
>> Any comments?  I would really like to get this to work and would
>> be happy to run more tests if someone could suggest some.
>
> I don't have an answer for you, but you may want to contact Csaba Henk
> [EMAIL PROTECTED] about the fuse stuff.


I don't think this is a FUSE problem (someone correct me if I am
wrong).  I don't know a lot about unionfs, but does this involve unionfs,
since I am putting a mount point on top of a mount point (an md mount on
top of an sshfs mount)?

I was reading about the problems with unionfs on FreeBSD 6.0, and I wonder
whether these are those problems, or whether I am experiencing something
completely different.
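
One isolation test I could run is to repeat the exact same steps with the
backing store on an NFS mount instead of the sshfs mount (a sketch; the
export, paths, and size are made up):

mount server:/export /mnt/nfs                # hypothetical NFS export
dd if=/dev/zero of=/mnt/nfs/crypt.img bs=1m count=64
mdconfig -a -t vnode -f /mnt/nfs/crypt.img   # attaches as e.g. md0
gbde init /dev/md0 -i -L /etc/gbde/md0

If that also failed with EIO, it would point at mount-on-mount in general
rather than at FUSE specifically.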

Thank you.



GBDE mounts on top of other mounts (sshfs?) fail ... please help

2006-03-16 Thread Ensel Sharon

I have successfully configured and used a GBDE-encrypted volume.  I
followed these instructions:

http://0x06.sigabrt.de/howtos/freebsd_encrypted_image_howto.html

Easy.  No problem.

However, when I place the backing-store file on a mounted sshfs
(FUSE) volume, it no longer works.  Specifically, when I issue the
command:

gbde init /dev/md0 -i -L /etc/gbde/md0

and save the resulting file that opens in my editor (without making any
changes, as usual), after typing in my passphrase twice, I get this error:

Enter new passphrase:
Reenter new passphrase: 
gbde: write: Input/output error
#
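
For reference, the full sequence up to the failure looks roughly like this
(the sshfs target, image path, and size are examples, following the howto):

sshfs user@remote:/backing /mnt/sshfs         # FUSE sshfs mount
dd if=/dev/zero of=/mnt/sshfs/crypt.img bs=1m count=64
mdconfig -a -t vnode -f /mnt/sshfs/crypt.img  # attaches as md0
gbde init /dev/md0 -i -L /etc/gbde/md0        # fails with the EIO above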


Is this expected?  Is this a specific problem with FUSE, or would this
fail if I tried to put the backing store on any other kind of mounted
"abnormal" filesystem?  (Say an NFS mount, or another md-backed mount
point.)

Any comments?  I would really like to get this to work and would be
happy to run more tests if someone could suggest some.

thanks.



mount_nullfs removes (temporarily) schg flag from dir ... why ?

2006-02-01 Thread Ensel Sharon

If you set schg on a directory, and then make that directory the mount
point of a null mount, the schg flag goes away.

When you unmount it, the flag returns.  Why is this?

Would I see this behavior when mounting any kind of filesystem on that
directory, or only when mounting a null mount on it?
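
To reproduce what I am seeing (directory names are examples):

mkdir -p /tmp/lower /tmp/target
chflags schg /tmp/target
ls -lod /tmp/target          # flags column shows schg
mount_nullfs /tmp/lower /tmp/target
ls -lod /tmp/target          # schg is gone
umount /tmp/target
ls -lod /tmp/target          # schg is back
chflags noschg /tmp/target   # cleanup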

Thanks.




Re: tuning to run large (1000+) numbers of null_mounts

2006-01-19 Thread Ensel Sharon


On Wed, 18 Jan 2006, Kris Kennaway wrote:

>> Hmmm... the cut and paste of that loud warning was from a 6.0-RELEASE
>> man page.  If I need to be on CURRENT to get the updated man page, do
>> I also need to be on CURRENT to get the safe null_mount code itself?
>>
>> Or is 6.0-RELEASE safe? (re: null_mount)
>>
>> Thanks a lot.
>
> 6.0-RELEASE is also safe.  I only just removed the warning the other
> day, but I'll also be merging it to 6.0-STABLE.


Ok, that is good to know.

However, I continue to see instability on this system with the 2000+
null mounts.  Are there any system tunables, sysctls, or kernel
configuration options relevant to this that I should be studying or
experimenting with?

Perhaps looking more broadly, are there any tunables related to large
numbers of mounted filesystems _period_, not just null mounts?
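
For example, the only obviously relevant knobs I have found so far are the
vnode limits; is this (the value below is a guess) even the right
direction?

sysctl vfs.numvnodes           # vnodes currently in use
sysctl kern.maxvnodes          # current limit
sysctl kern.maxvnodes=200000   # example: raise the limit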

For what it is worth, the system also has several mdconfig'd and mounted
snapshots (more than 5, fewer than 10).  Further, all of the null mounts
mount space from within a mounted snapshot into the normal filesystem
namespace.  With all the snapshots and all the null mounts mounted, I find
that commencing an rsync from the filesystem they all live on locks up the
machine.  I can still ping it, but it refuses all connections, and it
requires a power cycle.
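
For reference, each snapshot is attached and mounted read-only along these
lines (paths and unit numbers are examples):

mksnap_ffs /data /data/.snap/snap1                      # take the snapshot
mdconfig -a -t vnode -o readonly -f /data/.snap/snap1   # prints e.g. md5
mount -r /dev/md5 /mnt/snap1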

Comments?



tuning to run large (1000+) numbers of null_mounts

2006-01-18 Thread Ensel Sharon

I am running over 2000 null mounts on a FreeBSD 6.0-RELEASE system.  I am
well aware of the status of the null_mount code, as advertised in the
mount_nullfs man page:

     THIS FILE SYSTEM TYPE IS NOT YET FULLY SUPPORTED (READ: IT DOESN'T
     WORK) AND USING IT MAY, IN FACT, DESTROY DATA ON YOUR SYSTEM.  USE
     AT YOUR OWN RISK.  BEWARE OF DOG.  SLIPPERY WHEN WET.

So, based on this, all of my null_mounts are actually mounted read-only.
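
Each one is mounted along these lines (paths are examples):

mount_nullfs -o ro /data/src/jail1 /jails/jail1

# or the equivalent /etc/fstab entry:
# /data/src/jail1  /jails/jail1  nullfs  ro  0  0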

I am noticing both system instability and, following crashes, data
corruption on disk.  Both seem to be related to the null mounts.  I have
two very quick questions:

1. Is my theory that using only read-only null mounts is a good way to
avoid data corruption a sound one?  Or can even a read-only null mount do
bad things to the underlying data?

2. What system tunables or other alterations would be appropriate for a
system running several thousand (although always fewer than 5000) null
mounted filesystems?

Please relay _any and all_ comments, suggestions, war stories, and
rumors.  They are very much appreciated.



Re: tuning to run large (1000+) numbers of null_mounts

2006-01-18 Thread Ensel Sharon


On Wed, 18 Jan 2006, Kris Kennaway wrote:

>>  RISK.  BEWARE OF DOG.  SLIPPERY WHEN WET.
>>
>> So, based on this, all of my null_mounts are actually mounted read-only.
>
> As others have said, this is no longer applicable to FreeBSD 6.0, and
> it's been removed from HEAD.


Hmmm... the cut and paste of that loud warning was from a 6.0-RELEASE man
page.  If I need to be on CURRENT to get the updated man page, do I also
need to be on CURRENT to get the safe null_mount code itself?

Or is 6.0-RELEASE safe? (re: null_mount)

Thanks a lot.
