Re: [zfs-discuss] understanding zfs/thunoer "bottlenecks"?

2007-02-27 Thread Jens Elkner
On Tue, Feb 27, 2007 at 11:35:37AM +0100, Roch - PAE wrote: > > That might be a per pool limitation due to > > http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6460622 Not sure - did not use the compression feature... > This performance feature was fixed in Nevada last week. > Wo

Re: [zfs-discuss] Re: Are media files compressable with ZFS?

2007-02-27 Thread Ricardo Correia
Hi Tor, Tor wrote: > Dang, I think I'm dead as far as Solaris goes. I checked the HCL and the Java > compatibility check, and neither of the two controllers I would need to use, one > PCI IDE and one S-ATA on the KT-4 motherboard, will work with OpenSolaris. > Annoying as heck, but it looks like I

Re: [zfs-discuss] understanding zfs/thunoer "bottlenecks"?

2007-02-27 Thread Jens Elkner
On Mon, Feb 26, 2007 at 06:36:47PM -0800, Richard Elling wrote: > Jens Elkner wrote: > >Currently I'm trying to figure out the best zfs layout for a thumper wrt. > >to read AND write performance. > > First things first. What is the expected workload? Random, sequential, > lots of > little fil

Re: [zfs-discuss] Re: Re: How much do we really want zpool remove?

2007-02-27 Thread Rob Logan
> With modern journalling filesystems, I've never had to fsck anything or > run a filesystem repair. Ever. On any of my SAN stuff. you will.. even if the SAN is perfect, you will hit bugs in the filesystem code.. from lots of rsync hard links or like this one from raidtools last week: Feb 9 05

Re: Re[4]: [zfs-discuss] Re: Re: How much do we really want zpool remove?

2007-02-27 Thread Erik Trimble
Honestly, no, I don't consider UFS a modern file system. :-) It's just not in the same class as JFS for AIX, XFS for IRIX, or even VxFS. -Erik On Wed, 2007-02-28 at 00:40 +0100, Robert Milkowski wrote: > Hello Erik, > > Tuesday, February 27, 2007, 5:47:42 PM, you wrote: > > ET> > > > ET>

Re: [zfs-discuss] Re: Re: How much do we really want zpool remove?

2007-02-27 Thread Jason J. W. Williams
Hi Przemol, I think migration is a really important feature...think I said that... ;-) SAN/RAID is not awful...frankly there's not been a better solution (outside of NetApp's WAFL) till ZFS. SAN/RAID just has its own reliability issues you accept unless you don't have to... ZFS :-) -J On 2/27/07

Re[4]: [zfs-discuss] Re: Re: How much do we really want zpool remove?

2007-02-27 Thread Robert Milkowski
Hello Erik, Tuesday, February 27, 2007, 5:47:42 PM, you wrote: ET> ET> The answer is: insufficient data. ET> With modern journalling filesystems, I've never had to fsck anything or ET> run a filesystem repair. Ever. On any of my SAN stuff. I'm not sure if you consider UFS in S10 as a mo

Re: [zfs-discuss] Why number of NFS threads jumps to the max value?

2007-02-27 Thread Jim Mauro
There are 517 threads in the zfs_write code path, so there seems to be a fair amount of write activity happening. The server iostat and zpool iostat data would be interesting, along with the backend storage configuration information - how many spindles, and what kind of zpool config... /jim L
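For anyone wanting to capture the data Jim is asking for, a minimal collection sketch follows; the pool name "tank" and the 5-second sampling interval are assumptions, not details from Leon's setup:

  # capture per-device and per-vdev I/O while the benchmark runs (kill when done)
  iostat -xnz 5 > /var/tmp/iostat.out &                 # device latency/throughput
  zpool iostat -v tank 5 > /var/tmp/zpool-iostat.out &  # pool name "tank" assumed
  zpool status -v tank > /var/tmp/zpool-config.out      # records the pool layout
  nfsstat -s > /var/tmp/nfsstat-before.out              # NFS server op counts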

Re: [zfs-discuss] Why number of NFS threads jumps to the max value?

2007-02-27 Thread Dennis Clarke
> > You don't honestly, really, reasonably, expect someone, anyone, to look > at the stack well of course he does :-) and I looked at it .. all of it and I can tell exactly what the problem is but I'm not gonna say because it's a trick question. So there. Dennis

Re: [zfs-discuss] Why number of NFS threads jumps to the max value?

2007-02-27 Thread Jim Mauro
You don't honestly, really, reasonably, expect someone, anyone, to look at the stack trace of a few hundred threads, and post something along the lines of "This is what is wrong with your NFS server". Do you? Without any other information at all? We're here to help, but please reset your

[zfs-discuss] Why number of NFS threads jumps to the max value?

2007-02-27 Thread Leon Koll
Hello, gurus. I need your help. During the benchmark test of NFS-shared ZFS file systems, at some moment the number of NFS threads jumps to the maximum value, 1027 (NFSD_SERVERS was set to 1024). The latency also grows and the number of IOPS is going down. I've collected the output of echo "::pgre
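(For reference, a commonly used mdb pipeline for dumping the stacks of all nfsd threads is sketched below; Leon's truncated "::pgre..." command was presumably something along these lines, though the exact invocation is not shown here.)

  # dump kernel stacks of every nfsd thread (run as root; output can be large)
  echo "::pgrep nfsd | ::walk thread | ::findstack -v" | mdb -k > /var/tmp/nfsd-stacks.txt
  # count how many of those threads are sitting in the ZFS write path
  grep -c zfs_write /var/tmp/nfsd-stacks.txt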

Fwd: [zfs-discuss] understanding zfs/thunoer "bottlenecks"?

2007-02-27 Thread Selim Daoud
my mistake, the system is not a Thumper but rather a 6140 disk array, using 4xHBA ports on a T2000. I tried several configs, from raid (zfs), raidz and mirror (zfs), using 8 disks. What I observe is a non-continuous stream of data using [zpool] iostat, so at some stage the IO is interrupted, dropp

Re: [zfs-discuss] understanding zfs/thunoer "bottlenecks"?

2007-02-27 Thread Richard Elling
Selim Daoud wrote: indeed, a customer is doing 2TB of daily backups on a zfs filesystem the throughput doesn't go above 400MB/s, knowing that at raw speed, the throughput goes up to 800MB/s, the gap is quite wide OK, I'll bite. What is the workload and what is the hardware (zpool) config? A 400

Re: [zfs-discuss] understanding zfs/thunoer "bottlenecks"?

2007-02-27 Thread Selim Daoud
indeed, a customer is doing 2TB of daily backups on a zfs filesystem. The throughput doesn't go above 400MB/s; knowing that at raw speed the throughput goes up to 800MB/s, the gap is quite wide. Also, sequential IO is very common in real life... unfortunately zfs is not performing well still sd

Re: [zfs-discuss] understanding zfs/thunoer "bottlenecks"?

2007-02-27 Thread johansen-osdev
> it seems there isn't an algorithm in ZFS that detects sequential writes; > in traditional fs such as ufs, one would trigger directio. There is no directio for ZFS. Are you encountering a situation in which you believe directio support would improve performance? If so, please explain. -j
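For contrast, a sketch of how directio is requested on UFS, where it does exist; the device and mount point are made up for illustration, and ZFS simply has no equivalent mount option:

  # UFS: force directio for a whole filesystem (device/mount point illustrative)
  mount -F ufs -o forcedirectio /dev/dsk/c0t1d0s6 /bench
  # ...or switch it on for an already-mounted UFS filesystem
  mount -F ufs -o remount,forcedirectio /bench
  # ZFS: there is no directio (or forcedirectio) option to set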

Re: [zfs-discuss] understanding zfs/thunoer "bottlenecks"?

2007-02-27 Thread Frank Cusack
all writes in zfs are sequential On February 27, 2007 7:56:58 PM +0100 Selim Daoud <[EMAIL PROTECTED]> wrote: it seems there isn't an algorithm in ZFS that detects sequential write in traditional fs such as ufs, one would trigger directio. qfs can be set to automatically go to directio if seque

Re: [zfs-discuss] understanding zfs/thunoer "bottlenecks"?

2007-02-27 Thread Tuong Lien
How do I remove myself from this [EMAIL PROTECTED]? Selim Daoud wrote on 02/27/07 10:56 AM: it seems there isn't an algorithm in ZFS that detects sequential writes; in traditional fs such as ufs, one would trigger directio. qfs can be set to automatically go to directio if sequential IO is detec

Re: [zfs-discuss] understanding zfs/thunoer "bottlenecks"?

2007-02-27 Thread Selim Daoud
it seems there isn't an algorithm in ZFS that detects sequential writes; in traditional fs such as ufs, one would trigger directio. qfs can be set to automatically go to directio if sequential IO is detected. the txg trigger of 5 sec is inappropriate in this case (as stated by bug 6415647) even a 1.
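(The 5-second figure is the txg sync interval, which is a kernel tunable on builds of this era; the variable name changed across releases, so the following is a sketch to verify against a specific build rather than a recommendation.)

  # check which txg tunable this kernel exposes (names differ between builds)
  echo "txg_time/D" | mdb -k           # older bits: interval in seconds
  echo "zfs_txg_timeout/D" | mdb -k    # later bits renamed the variable
  # if txg_time exists, an /etc/system line such as "set zfs:txg_time = 1"
  # would shorten the sync interval after a reboot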

Re: [zfs-discuss] Re: Re: Efficiency when reading the same file blocks

2007-02-27 Thread Frank Cusack
On February 27, 2007 10:40:57 AM -0800 Jeff Davis <[EMAIL PROTECTED]> wrote: On February 26, 2007 9:05:21 AM -0800 Jeff Davis But you have to be aware that logically sequential reads do not necessarily translate into physically sequential reads with zfs. zfs I understand that the COW design ca

[zfs-discuss] Re: Re: Efficiency when reading the same file blocks

2007-02-27 Thread Jeff Davis
> On February 26, 2007 9:05:21 AM -0800 Jeff Davis > But you have to be aware that logically sequential reads do not necessarily translate into physically sequential reads with zfs. zfs I understand that the COW design can fragment files. I'm still trying to understand how that would affec

Re: [zfs-discuss] Re: Efficiency when reading the same file blocks

2007-02-27 Thread Frank Hofmann
On Tue, 27 Feb 2007, Jeff Davis wrote: Given your question are you about to come back with a case where you are not seeing this? As a follow-up, I tested this on UFS and ZFS. UFS does very poorly: the I/O rate drops off quickly when you add processes while reading the same blocks from the

Re: [zfs-discuss] Re: Re: How much do we really want zpool remove?

2007-02-27 Thread Richard Elling
[EMAIL PROTECTED] wrote: Is the "true situation" really so bad ? The failure mode is silent error. By definition, it is hard to count silent errors. What ZFS does is improve the detection of silent errors by a rather considerable margin. So, what we are seeing is that suddenly people are see
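A concrete way to exercise that detection is a scrub, which re-reads every allocated block and verifies its checksum; the pool name below is assumed:

  # verify every block's checksum and surface any silent corruption
  zpool scrub tank
  # the CKSUM column reports detected errors; -v lists affected files, if any
  zpool status -v tank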

[zfs-discuss] Re: Efficiency when reading the same file blocks

2007-02-27 Thread Jeff Davis
> Given your question are you about to come back with a case where you are not seeing this? As a follow-up, I tested this on UFS and ZFS. UFS does very poorly: the I/O rate drops off quickly when you add processes while reading the same blocks from the same file at the same time. I do
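A test along these lines can be reproduced with a few concurrent readers of the same file; the file path, reader count, and block size below are illustrative, not Jeff's actual parameters:

  # start 8 simultaneous readers of one file and wait for them all to finish
  for i in 1 2 3 4 5 6 7 8; do
      dd if=/tank/fs/bigfile of=/dev/null bs=128k &
  done
  wait
  # run once against a UFS copy and once against a ZFS copy of the file,
  # watching iostat -xnz 5 in another window to compare aggregate throughput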

[zfs-discuss] Re: Re: .zfs snapshot directory in all directories

2007-02-27 Thread Eric Haycraft
I am no scripting pro, but I would imagine it would be fairly simple to create a script and batch it to make symlinks in all subdirectories.
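A sketch of the kind of script Eric has in mind, assuming the goal is a symlink to the dataset's .zfs/snapshot directory in every subdirectory; the mount point /tank/fs and the link name "snapshots" are made up:

  #!/bin/sh
  # drop a "snapshots" symlink pointing at the dataset's .zfs/snapshot
  # directory into every subdirectory (mount point is illustrative only)
  FS=/tank/fs
  find "$FS" -name .zfs -prune -o -type d -print | while read dir
  do
      ln -s "$FS/.zfs/snapshot" "$dir/snapshots" 2>/dev/null
  done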

Re: Re[2]: [zfs-discuss] Re: Re: How much do we really want zpool remove?

2007-02-27 Thread Erik Trimble
The answer is: insufficient data. With modern journalling filesystems, I've never had to fsck anything or run a filesystem repair. Ever. On any of my SAN stuff. The sole place I've run into filesystem corruption in the traditional sense is with faulty hardware controllers; and, I'm not ev

Re: [zfs-discuss] understanding zfs/thunoer "bottlenecks"?

2007-02-27 Thread eric kustarz
On Feb 27, 2007, at 2:35 AM, Roch - PAE wrote: Jens Elkner writes: Currently I'm trying to figure out the best zfs layout for a thumper wrt. to read AND write performance. I did some simple mkfile 512G tests and found out that on average ~ 500 MB/s seems to be the maximum one can reach (

Re: [zfs-discuss] Acme WX22B-TR?

2007-02-27 Thread Bev Crair
The best place to check whether someone has registered that Solaris runs on a given system is http://www.sun.com/bigadmin/hcl/ Bev. Nicholas Lee wrote: Has anyone run Solaris on one of these: http://acmemicro.com/estore/merchant.ihtml?pid=4014&step=4

Re: [zfs-discuss] File System Filter Driver??

2007-02-27 Thread Rayson Ho
On 2/26/07, Jim Dunham <[EMAIL PROTECTED]> wrote: The Availability Suite product set (http://www.opensolaris.org/os/project/avs/) offers both snapshot and data replication data services, both of which are built on top of a Solaris filter driver framework. Is the Solaris filter driver framework

Re[2]: [zfs-discuss] Re: Re: How much do we really want zpool remove?

2007-02-27 Thread Robert Milkowski
Hello przemolicc, Tuesday, February 27, 2007, 11:28:59 AM, you wrote: ppf> On Tue, Feb 27, 2007 at 08:29:04PM +1100, Shawn Walker wrote: >> On 27/02/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote: >> >On Thu, Feb 22, 2007 at 12:21:50PM -0700, Jason J. W. Williams wrote: >> >> Hi Przemol, >> >> >

[zfs-discuss] Re: 2-way mirror or RAIDZ?

2007-02-27 Thread Chris Gerhard
As has been pointed out you want to mirror (or get more disks). I would suggest you think carefully about the layout of the disks so that you can take advantage of ZFS boot when it arrives. See http://blogs.sun.com/chrisg/entry/new_server_arrived for a suggestion. --chris

Re: [zfs-discuss] understanding zfs/thunoer "bottlenecks"?

2007-02-27 Thread Roch - PAE
Jens Elkner writes: > Currently I'm trying to figure out the best zfs layout for a thumper wrt. to read AND write performance. > > I did some simple mkfile 512G tests and found out that on average ~ 500 MB/s seems to be the maximum one can reach (tried initial default setup, all 4
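For anyone wanting to repeat the measurement, a sketch of the test as described: a large sequential write timed while zpool iostat watches from another window (pool and filesystem names are illustrative):

  # window 1: watch per-vdev throughput (pool name "tank" assumed)
  zpool iostat -v tank 5
  # window 2: time a large sequential write into a filesystem on that pool
  ptime mkfile 512g /tank/fs/testfile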

Re: [zfs-discuss] Re: Re: How much do we really want zpool remove?

2007-02-27 Thread przemolicc
On Tue, Feb 27, 2007 at 08:29:04PM +1100, Shawn Walker wrote: > On 27/02/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote: > >On Thu, Feb 22, 2007 at 12:21:50PM -0700, Jason J. W. Williams wrote: > >> Hi Przemol, > >> > >> I think Casper had a good point bringing up the data integrity > >> features

[zfs-discuss] Re: Re: ARGHH. An other panic!!

2007-02-27 Thread Gino Ruopolo
Hi Jason, we did the tests using S10U2, two FC cards, MPXIO. 5 LUNs in a raidZ group. Each LUN was visible to both FC cards. Gino > Hi Gino, > > Was there more than one LUN in the RAID-Z using the port you disabled?
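For context, a pool of the shape Gino describes would have been created roughly as follows; the MPxIO device names are invented, and only the layout (one raidz vdev over five multipathed LUNs) matches his description:

  # five MPxIO LUNs in a single raidz vdev (device names illustrative only)
  zpool create tank raidz \
      c4t600A0B8000115EA2d0 c4t600A0B8000115EA3d0 c4t600A0B8000115EA4d0 \
      c4t600A0B8000115EA5d0 c4t600A0B8000115EA6d0
  zpool status tank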

Re: [zfs-discuss] 2-way mirror or RAIDZ?

2007-02-27 Thread Trevor Watson
Thanks Constantin, that was just the information I needed! Trev Constantin Gonzalez wrote: Hi, I have a shiny new Ultra 40 running S10U3 with 2 x 250Gb disks. congratulations, this is a great machine! I want to make best use of the available disk space and have some level of redundancy wi

Re: [zfs-discuss] 2-way mirror or RAIDZ?

2007-02-27 Thread Constantin Gonzalez
Hi, > I have a shiny new Ultra 40 running S10U3 with 2 x 250Gb disks. congratulations, this is a great machine! > I want to make best use of the available disk space and have some level > of redundancy without impacting performance too much. > > What I am trying to figure out is: would it be be

Re: [zfs-discuss] Re: Re: How much do we really want zpool remove?

2007-02-27 Thread Shawn Walker
On 27/02/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote: On Thu, Feb 22, 2007 at 12:21:50PM -0700, Jason J. W. Williams wrote: > Hi Przemol, > > I think Casper had a good point bringing up the data integrity > features when using ZFS for RAID. Big companies do a lot of things > "just because tha

Re: [zfs-discuss] Re: Re: How much do we really want zpool remove?

2007-02-27 Thread przemolicc
On Thu, Feb 22, 2007 at 12:21:50PM -0700, Jason J. W. Williams wrote: > Hi Przemol, > > I think Casper had a good point bringing up the data integrity > features when using ZFS for RAID. Big companies do a lot of things > "just because that's the certified way" that end up biting them in the > rea

[zfs-discuss] 2-way mirror or RAIDZ?

2007-02-27 Thread Trevor Watson
I have a shiny new Ultra 40 running S10U3 with 2 x 250Gb disks. I want to make best use of the available disk space and have some level of redundancy without impacting performance too much. What I am trying to figure out is: would it be better to have a simple mirror of an identical 200Gb sli
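For the mirrored-slice option, the pool creation itself is a one-liner; the slice names below are illustrative, since the real ones depend on how the two disks are sliced (and on leaving room for the OS and, eventually, ZFS boot):

  # mirror two identically sized slices, one from each disk
  # (slice names are illustrative only)
  zpool create tank mirror c1d0s4 c2d0s4
  zpool status tank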