Re: [zfs-discuss] Mirrored zpool across network

2007-09-11 Thread Mark
Hey all again. Looking into a few other options: how about InfiniBand? It would give us more bandwidth, but will it increase complexity/price? Any thoughts? Cheers, Mark

Re: [zfs-discuss] ZFS RAIDZ vs. RAID5.

2007-09-11 Thread Robert Milkowski
Hello Pawel, Monday, September 10, 2007, 6:18:37 PM, you wrote: PJD On Mon, Sep 10, 2007 at 04:31:32PM +0100, Robert Milkowski wrote: Hello Pawel, Excellent job! Now I guess it would be a good idea to get writes done properly, even if it means making them slow (like with SVM).

Re: [zfs-discuss] ZFS RAIDZ vs. RAID5.

2007-09-11 Thread Pawel Jakub Dawidek
On Tue, Sep 11, 2007 at 08:16:02AM +0100, Robert Milkowski wrote: Are you overwriting old data? I hope you're not... I am; I overwrite parity, and that is the whole point. That's why the ZFS designers used RAIDZ instead of RAID5, I think. I don't think you should suffer from the above problem in ZFS due

Re: [zfs-discuss] ZFS RAIDZ vs. RAID5.

2007-09-11 Thread Jeff Bonwick
As you can see, two independent ZFS blocks share one parity block. COW won't help you here; you would need to be sure that each ZFS transaction goes to a different (and free) RAID5 row. This is, I believe, the main reason why poor RAID5 wasn't used in the first place. Exactly right. RAID-Z
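To make that contrast concrete, here is a minimal sketch of creating a RAID-Z pool; the device names are placeholders, not from the thread. Because RAID-Z uses dynamic stripe width, every write is a full-stripe write, so an overwrite never has to read back and update a parity block shared with unrelated data, which is the partial-stripe problem described above for RAID5.

  # Hypothetical device names; a small RAID-Z pool for illustration.
  # RAID-Z writes full, variable-width stripes, so there is no
  # read-modify-write of an existing, shared parity block.
  zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0
  zpool status tank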

Re: [zfs-discuss] ext3 on zvols journal performance pathologies?

2007-09-11 Thread Darren J Moffat
Joshua Goodall wrote: I've been seeing read and write performance pathologies with Linux ext3 over iSCSI to zvols, especially with small writes. Does running a journalled filesystem on a zvol turn the block storage into Swiss cheese? I am considering serving ext3 journals (and possibly swap

[zfs-discuss] ZFS but how?

2007-09-11 Thread Oliver Schinagl
Hi, I'm still debating whether I should use ZFS or not, and how. Here is my scenario: I want to run a server with a lot of storage that gets disks added/upgraded from time to time to expand space. I'd want to store large files on it, 15 MB - 5 GB per file, and they'd only need to be accessible via
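For the grow-over-time setup described above, one common approach is to start with a mirrored pool and add further mirror pairs as new disks arrive; a minimal sketch with placeholder device names:

  # Start with one mirrored pair.
  zpool create tank mirror c1t0d0 c1t1d0
  # Later, when two more disks are added, grow the pool with a second
  # mirror; ZFS spreads new writes across all top-level vdevs.
  zpool add tank mirror c2t0d0 c2t1d0
  zpool list tank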

Re: [zfs-discuss] ZFS but how?

2007-09-11 Thread Darren J Moffat
Oliver Schinagl wrote: However, I found on the live DVD/CD that Nexenta and BeleniX both don't come in x86_64 flavors? Or does Solaris autodetect and automatically run in 64-bit mode at boot time? Yes, Solaris auto-detects, but note that some live systems and install images only do 32-bit but will install

Re: [zfs-discuss] ZFS but how?

2007-09-11 Thread Casper . Dik
However, I found on the live DVD/CD that Nexenta and BeleniX both don't come in x86_64 flavors? Or does Solaris autodetect and automatically run in 64-bit mode at boot time? Solaris autodetects the CPU type and boots in 64-bit mode on 64-bit CPUs and in 32-bit mode on 32-bit CPUs. Large parts of the

Re: [zfs-discuss] ZFS but how?

2007-09-11 Thread Oliver Schinagl
[EMAIL PROTECTED] wrote: However, I found on the live DVD/CD that Nexenta and BeleniX both don't come in x86_64 flavors? Or does Solaris autodetect and automatically run in 64-bit mode at boot time? Solaris autodetects the CPU type and boots in 64-bit mode on 64-bit CPUs and in 32-bit mode on

Re: [zfs-discuss] ZFS but how?

2007-09-11 Thread Ian Collins
Oliver Schinagl wrote: [EMAIL PROTECTED] wrote: However, I found on the live DVD/CD that Nexenta and BeleniX both don't come in x86_64 flavors? Or does Solaris autodetect and automatically run in 64-bit mode at boot time? Solaris autodetects the CPU type and boots in 64-bit mode on 64

Re: [zfs-discuss] ZFS but how?

2007-09-11 Thread Ian Collins
Oliver Schinagl wrote: Ian Collins wrote: Oliver Schinagl wrote: once I boot it in 64-bit mode, I'd have to run emulation libraries to run 32-bit bins, right? No. Solaris has both 32- and 64-bit libraries. So you are saying that I can run both 32-bit and 64-bit code

Re: [zfs-discuss] ZFS but how?

2007-09-11 Thread Oliver Schinagl
Ian Collins wrote: Oliver Schinagl wrote: Ian Collins wrote: Oliver Schinagl wrote: once I boot it in 64-bit mode, I'd have to run emulation libraries to run 32-bit bins, right? No. Solaris has both 32- and 64-bit libraries. So you are

Re: [zfs-discuss] ZFS but how?

2007-09-11 Thread Casper . Dik
once I boot it in 64-bit mode, I'd have to run emulation libraries to run 32-bit bins, right? (As I'm a Solaris newbie and only know a little about 64-bit stuff from the Linux world, this is all I know.) No; you run the exact same binaries and libraries under 32 and 64 bit. It's not emulation; it's

Re: [zfs-discuss] ZFS but how?

2007-09-11 Thread Casper . Dik
So you are saying that I can run both 32-bit and 64-bit code simultaneously, natively, with the Solaris kernel? That's pretty damn cool. Correct. There are several reasons for an OS which generally comes in binary distributions to do so: - maintain complete binary compatibility with old
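For anyone checking what mode their system actually booted in, isainfo is the usual tool on a stock Solaris install:

  # Print the number of bits of the native instruction set (64 or 32).
  isainfo -b
  # Show the kernel's instruction set architecture, verbosely.
  isainfo -kv
  # List all native instruction sets applications can use.
  isainfo -v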

Re: [zfs-discuss] ZFS but how?

2007-09-11 Thread Oliver Schinagl
Ian Collins wrote: Oliver Schinagl wrote: [EMAIL PROTECTED] wrote: However, I found on the live DVD/CD that Nexenta and BeleniX both don't come in x86_64 flavors? Or does Solaris autodetect and automatically run in 64-bit mode at boot time? Solaris autodetects

Re: [zfs-discuss] ZFS but how?

2007-09-11 Thread Stephen Usher
Oliver Schinagl wrote: Not to start a flamewar or the like, but Linux can run 32-bit bins, just not natively AFAIK; you need some sort of emulation library. But since I use Gentoo, and pretty much everything is compiled from source anyway, I only have stupid closed-source bins that would need to work

Re: [zfs-discuss] ZFS RAIDZ vs. RAID5.

2007-09-11 Thread MC
My question is: Is there any interest in finishing RAID5/RAID6 for ZFS? If there is no chance it will be integrated into ZFS at some point, I won't bother finishing it. Your work is as pure an example as any of what OpenSolaris should be about. I think there should be no problem having a new

Re: [zfs-discuss] ZFS but how?

2007-09-11 Thread Joerg Schilling
Stephen Usher [EMAIL PROTECTED] wrote: Oliver Schinagl wrote: Not to start a flamewar or the like, but Linux can run 32-bit bins, just not natively AFAIK; you need some sort of emulation library. But since I use Gentoo, and pretty much everything is compiled from source anyway, I only have

[zfs-discuss] compression=on and zpool attach

2007-09-11 Thread Dick Davies
I've got 12 GB or so of db+web in a zone on a ZFS filesystem on a mirrored zpool. Noticed during some performance testing today that it's I/O bound but using hardly any CPU, so I thought turning on compression would be a quick win. I know I'll have to copy files for existing data to be compressed,
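For reference, enabling compression and checking its effect is a one-liner each (the dataset name below is a placeholder); as noted above, only data written after the property is set gets compressed:

  # Enable compression on the dataset holding the zone's data.
  zfs set compression=on tank/zones/web
  # After rewriting the files, see how well they compressed.
  zfs get compression,compressratio tank/zones/web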

Re: [zfs-discuss] ZFS but how?

2007-09-11 Thread Stephen Usher
Joerg Schilling wrote: I am not sure about the current state, but two years ago Linux was only able to run a few simple programs in 32-bit mode because the drivers did not support 32-bit ioctl interfaces. This made, e.g., a 32-bit cdrecord on 64-bit Linux impossible. I've never had a 32-bit

Re: [zfs-discuss] ZFS but how?

2007-09-11 Thread Joerg Schilling
Stephen Usher [EMAIL PROTECTED] wrote: Joerg Schilling wrote: I am not sure about the current state, but two years ago Linux was only able to run a few simple programs in 32-bit mode because the drivers did not support 32-bit ioctl interfaces. This made, e.g., a 32-bit cdrecord on 64-bit

Re: [zfs-discuss] ZFS/WAFL lawsuit

2007-09-11 Thread Wade . Stuart
Invalidating COW filesystem patents would of course be the best. Unfortunately those lawsuits are usually not handled in the open and in order to understand everything you would need to know about the background interests of both parties. IANAL, but I was under the impression that it

Re: [zfs-discuss] compression=on and zpool attach

2007-09-11 Thread Jonathan Adams
On 9/11/07, Dick Davies [EMAIL PROTECTED] wrote: I've got 12 GB or so of db+web in a zone on a ZFS filesystem on a mirrored zpool. Noticed during some performance testing today that it's I/O bound but using hardly any CPU, so I thought turning on compression would be a quick win. I know I'll

Re: [zfs-discuss] ext3 on zvols journal performance pathologies?

2007-09-11 Thread Richard Elling
Joshua Goodall wrote: I've been seeing read and write performance pathologies with Linux ext3 over iSCSI to zvols, especially with small writes. Does running a journalled filesystem on a zvol turn the block storage into Swiss cheese? I am considering serving ext3 journals (and possibly swap

Re: [zfs-discuss] compression=on and zpool attach

2007-09-11 Thread Mike DeMarco
I've got 12 GB or so of db+web in a zone on a ZFS filesystem on a mirrored zpool. Noticed during some performance testing today that it's I/O bound but using hardly any CPU, so I thought turning on compression would be a quick win. If it is I/O bound, won't compression make it worse? I

Re: [zfs-discuss] ext3 on zvols journal performance pathologies?

2007-09-11 Thread Bill Moore
I would also suggest setting the recordsize property on the zvol to 4k when you create it, which is, I think, the native ext3 block size. If you don't do this and allow ZFS to use its 128k default block size, then a 4k write from ext3 will turn into a 128k read/modify/write on the ZFS side. This
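A sketch of what that looks like at creation time (pool name, volume name, and size are placeholders); note that a zvol's block size is set with -b when the volume is created and cannot be changed afterwards:

  # Create a 10 GB zvol with a 4k block size to match ext3's block size.
  zfs create -b 4k -V 10g tank/ext3vol
  # Confirm the block size that the volume was created with.
  zfs get volblocksize tank/ext3vol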

Re: [zfs-discuss] I/O freeze after a disk failure

2007-09-11 Thread Gino
To put this in perspective, no system on the planet today handles all faults. I would even argue that building such a system is theoretically impossible. No doubt about that ;) So the subset of faults which ZFS covers is different from the subset that UFS covers and different from

Re: [zfs-discuss] compression=on and zpool attach

2007-09-11 Thread Dick Davies
On 11/09/2007, Mike DeMarco [EMAIL PROTECTED] wrote: I've got 12 GB or so of db+web in a zone on a ZFS filesystem on a mirrored zpool. Noticed during some performance testing today that it's I/O bound but using hardly any CPU, so I thought turning on compression would be a quick win.

Re: [zfs-discuss] ext3 on zvols journal performance pathologies?

2007-09-11 Thread Eric Schrock
On Tue, Sep 11, 2007 at 01:31:17PM -0700, Bill Moore wrote: I would also suggest setting the recordsize property on the zvol to 4k when you create it, which is, I think, the native ext3 block size. If you don't do this and allow ZFS to use its 128k default block size, then a 4k write from ext3

Re: [zfs-discuss] I/O freeze after a disk failure

2007-09-11 Thread Bill Sommerfeld
On Tue, 2007-09-11 at 13:43 -0700, Gino wrote: -ZFS+FC JBOD: a failed hard disk needs a reboot :( (frankly unbelievable in 2007!) So, I've been using ZFS with some creaky old FC JBODs (A5200s) and old disks which have been failing regularly, and I haven't seen that; the worst I've seen
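For comparison, the usual no-reboot path when a JBOD disk dies looks roughly like this (pool and device names are placeholders):

  # Show only pools with problems, including any faulted device.
  zpool status -x
  # After physically swapping the disk, resilver onto the replacement.
  zpool replace tank c3t5d0
  # Watch resilver progress.
  zpool status tank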

Re: [zfs-discuss] I/O freeze after a disk failure

2007-09-11 Thread Richard Elling
Bill Sommerfeld wrote: On Tue, 2007-09-11 at 13:43 -0700, Gino wrote: -ZFS+FC JBOD: a failed hard disk needs a reboot :( (frankly unbelievable in 2007!) So, I've been using ZFS with some creaky old FC JBODs (A5200s) and old disks which have been failing regularly and haven't seen

Re: [zfs-discuss] I/O freeze after a disk failure

2007-09-11 Thread Paul Kraus
On 9/11/07, Gino [EMAIL PROTECTED] wrote: -ZFS performs badly with a lot of small files (about 20 times slower than UFS with our millions-of-files rsync procedures). We have seen just the opposite... we have a server with about 40 million files and only 4 TB of data. We have been

Re: [zfs-discuss] An Academic Sysadmin's Lament for ZFS ?

2007-09-11 Thread David Magda
On Sep 10, 2007, at 13:40, [EMAIL PROTECTED] wrote: I am not against refactoring solutions, but ZFS quotas and the lack of user quotas in general either leave people trying to use ZFS quotas in lieu of user quotas, suggesting weak end runs around the problem (a cron job to calculate
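The workaround being described, one filesystem per user with a quota standing in for traditional user quotas, amounts to a couple of commands (names and sizes are placeholders):

  # One ZFS filesystem per user, each with its own quota, in lieu of
  # per-user quotas on a single shared filesystem.
  zfs create tank/home/alice
  zfs set quota=10g tank/home/alice
  zfs get quota tank/home/alice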