Re: [zfs-discuss] server-reboot

2007-10-10 Thread eric kustarz
This looks like a bug in the sd driver (SCSI). Does this look familiar to anyone from the sd group?

eric

On Oct 10, 2007, at 10:30 AM, Claus Guttesen wrote:
> Hi.
>
> Just migrated to zfs on opensolaris. I copied data to the server using
> rsync and got this message:
>
> Oct 10 17:24:04 zetta ^

Re: [zfs-discuss] Is this a bug or a feature ?

2007-10-10 Thread eric kustarz
On Oct 10, 2007, at 11:23 AM, Bernhard Duebi wrote:
> Hi everybody,
>
> I tested the following scenario:
>
> I have two machines attached to the same SAN LUN.
> Both machines run Solaris 10 Update 4.
> Machine A is active with zpool01 imported.
> Machine B is inactive.
> Machine A crashes.
> Machi

Re: [zfs-discuss] Fileserver performance tests

2007-10-10 Thread eric kustarz
> That all said - we don't have a simple dd benchmark for random
> seeking.

Feel free to try out randomread.f and randomwrite.f - or combine them into your own new workload to create a random read and write workload.

eric
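
For anyone who wants to try that, here is a minimal sketch of such a combined workload, modeled on the stock randomread.f and randomwrite.f personalities; the file name randomrw.f, the flowop names, and the sizes are illustrative assumptions, not part of the filebench distribution:

    # randomrw.f - hypothetical combined random read/write personality
    set $dir=/tmp
    set $nthreads=2
    set $iosize=8k
    set $filesize=1g

    define file name=largefile1,path=$dir,size=$filesize,prealloc,reuse

    define process name=rand-rw,instances=1
    {
      thread name=rand-rw-thread,memsize=5m,instances=$nthreads
      {
        flowop read name=rand-read1,filename=largefile1,iosize=$iosize,random
        flowop write name=rand-write1,filename=largefile1,iosize=$iosize,random
      }
    }

    run 60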

[zfs-discuss] Is this a bug or a feature ?

2007-10-10 Thread Bernhard Duebi
Hi everybody,

I tested the following scenario:

I have two machines attached to the same SAN LUN.
Both machines run Solaris 10 Update 4.
Machine A is active with zpool01 imported.
Machine B is inactive.
Machine A crashes.
Machine B imports zpool01.
Machine A comes back.

Now the problem is, that when
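
For reference, a minimal sketch of the failover sequence described above, using the zpool(1M) syntax of that release (zpool01 as named in the scenario):

    # on machine B, after machine A crashes; -f forces the import of a
    # pool that is still marked as in use by another host
    zpool import -f zpool01

    # when machine A comes back it must NOT import zpool01 as well:
    # a pool imported on two hosts at once will be corrupted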

Re: [zfs-discuss] Fileserver performance tests

2007-10-10 Thread Spencer Shepler
On Oct 10, 2007, at 2:56 AM, Thomas Liesner wrote:
> Hi Eric,
>
>> Are you talking about the documentation at:
>> http://sourceforge.net/projects/filebench
>> or:
>> http://www.opensolaris.org/os/community/performance/filebench/
>> and:
>> http://www.solarisinternals.com/wiki/index.php/FileBench

Re: [zfs-discuss] Fileserver performance tests

2007-10-10 Thread Spencer Shepler
On Oct 10, 2007, at 8:41 AM, Luke Lonergan wrote:
> Hi Eric,
>
> On 10/10/07 12:50 AM, "eric kustarz" <[EMAIL PROTECTED]> wrote:
>
>> Since you were already using filebench, you could use the
>> 'singlestreamwrite.f' and 'singlestreamread.f' workloads (with
>> nthreads set to 20, iosize set to 12

[zfs-discuss] server-reboot

2007-10-10 Thread Claus Guttesen
Hi.

Just migrated to zfs on opensolaris. I copied data to the server using rsync and got this message:

Oct 10 17:24:04 zetta ^Mpanic[cpu1]/thread=ff0007f1bc80:
Oct 10 17:24:04 zetta genunix: [ID 683410 kern.notice] BAD TRAP: type=e (#pf Page fault) rp=ff0007f1b640 addr=fffecd873000
Oc

Re: [zfs-discuss] Fileserver performance tests

2007-10-10 Thread Luke Lonergan
Hi Eric,

On 10/10/07 12:50 AM, "eric kustarz" <[EMAIL PROTECTED]> wrote:

> Since you were already using filebench, you could use the
> 'singlestreamwrite.f' and 'singlestreamread.f' workloads (with
> nthreads set to 20, iosize set to 128k) to achieve the same things.

Yes but once again we see th

Re: [zfs-discuss] Possible ZFS Bug - Causes OpenSolaris Crash

2007-10-10 Thread J Duff
I've tried to report this bug through the http://bugs.opensolaris.org/ site twice. The first time on September 17, 2007 with the title "ZFS Kernel Crash During Disk Writes (SATA and SCSI)". The second time on September 19, 2007 with the title "ZFS or Storage Subsystem Crashes when Writing to Dis

Re: [zfs-discuss] ZFS 60 second pause times to read 1K

2007-10-10 Thread Gary Gendel
I'm not sure. But when I would re-run a scrub, I got the errors at the same block numbers, which indicated that the disk was really bad.

It wouldn't hurt to make the entry in the /etc/system file, reboot, and then try the scrub again. If the problem disappears, then it is a driver bug.

Gary
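
A short sketch of the re-test being suggested; 'tank' is a placeholder pool name, and the specific /etc/system entry is not named in the message, so it is omitted here:

    # after adding the entry to /etc/system and rebooting:
    zpool scrub tank

    # repeated checksum errors at the same block numbers across scrubs
    # point at the disk itself rather than the driver
    zpool status -v tank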

Re: [zfs-discuss] ZFS 60 second pause times to read 1K

2007-10-10 Thread Michael
Thanks. Looks like I have this bug. Is it a hardware problem combined with a software problem?

Oct 9 09:35:43 zeta1 sata: [ID 801593 kern.notice] NOTICE: /[EMAIL PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci11ab,[EMAIL PROTECTED]:
Oct 9 09:35:43 zeta1 port 3: device reset
Oct 9 09:35:43 zeta1 s

Re: [zfs-discuss] Moving default snapshot location

2007-10-10 Thread Darren J Moffat
Walter Faleiro wrote:
> Hi,
> We have implemented a zfs file system for home directories and have
> enabled it with quotas+snapshots. However the snapshots are causing an
> issue with the user quotas. The default snapshot files go under
> ~username/.zfs/snapshot, which is a part of the user fil
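
A small illustration of the interaction being described, with a hypothetical dataset tank/home/user: space held by snapshots under ~username/.zfs/snapshot is charged to the dataset, so it counts against the user's quota.

    zfs set quota=1g tank/home/user
    zfs snapshot tank/home/user@monday
    # files deleted after the snapshot are still referenced by it,
    # so the dataset's 'used' space - and the quota it is checked
    # against - does not shrink
    zfs get used,quota tank/home/user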

Re: [zfs-discuss] Direct I/O ability with zfs?

2007-10-10 Thread Vidya Sakar N
> Tell me, is there a way to skip the ZFS file system cache, or is
> there a way to do direct IO on a ZFS file system?

No, currently there is no way to disable the file system cache (aka the ARC) in ZFS. There is a pending RFE though:

6429855 Need way to tell ZFS that caching is a lost cause

Cheer
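
Although the cache cannot be bypassed, it can at least be observed. A minimal sketch, assuming the standard ZFS arcstats kstat is present on this build:

    # report ARC size and hit/miss counters
    kstat -m zfs -n arcstats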

Re: [zfs-discuss] Direct I/O ability with zfs?

2007-10-10 Thread dudekula mastan
Hi All,

Any update on this?

-Masthan D

dudekula mastan <[EMAIL PROTECTED]> wrote:

Hi Everybody,

Over the last week, many mails have been exchanged on this topic. I have a similar issue, and I would appreciate it if anyone could help me with it.

I have an IO

Re: [zfs-discuss] Areca 1100 SATA Raid Controller in JBOD mode Hangs on zfs root creation

2007-10-10 Thread Kugutsumen
Outside of Xen (it was running in a dom0), the zfs root creation executed flawlessly, and it seems that these DMA errors are caused by a Xen-related issue :(. There is an old thread talking about DMA issues on machines with lots of RAM (8 GB here). Reposting to the Xen mailing list.

This mess

Re: [zfs-discuss] Fileserver performance tests

2007-10-10 Thread Thomas Liesner
Hi Eric,

> Are you talking about the documentation at:
> http://sourceforge.net/projects/filebench
> or:
> http://www.opensolaris.org/os/community/performance/filebench/
> and:
> http://www.solarisinternals.com/wiki/index.php/FileBench
> ?

I was talking about the solarisinternals wiki. I can't find any

Re: [zfs-discuss] Fileserver performance tests

2007-10-10 Thread eric kustarz
Since you were already using filebench, you could use the 'singlestreamwrite.f' and 'singlestreamread.f' workloads (with nthreads set to 20, iosize set to 128k) to achieve the same things. With the latest version of filebench, you can then use the '-c' option to compare your results in a nic
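
As a hedged illustration of driving these workloads interactively (the prompt syntax follows the filebench documentation of this period; the install path and target directory are assumptions):

    # /usr/benchmarks/filebench/bin/filebench
    filebench> load singlestreamread
    filebench> set $dir=/testpool/fs
    filebench> set $nthreads=20
    filebench> set $iosize=128k
    filebench> run 60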