Re: [zfs-discuss] compressed fs taking up more space than uncompressed equivalent

2009-10-23 Thread Gaëtan Lehmann
On 23 Oct 09 at 08:46, Stathis Kamperis wrote: 2009/10/23 michael schuster : Stathis Kamperis wrote: Salute. I have a filesystem where I store various source repositories (cvs + git). I have compression enabled, and zfs get compressratio reports 1.46x. When I copy all the stuff to a
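
To see the kind of comparison being discussed, the dataset properties can be read directly; a minimal sketch, assuming a hypothetical dataset tank/repos (pool and dataset names are illustrative):

# zfs get compression,compressratio tank/repos
# zfs list -o name,used,refer tank/repos
# du -sh /tank/repos          (on ZFS, du reports post-compression allocated blocks)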

Re: [zfs-discuss] SNV_125 MPT warning in logfile

2009-10-23 Thread Bruno Sousa
Hi Cindy, I have a couple of questions about this issue : 1. I have exactly the same LSI controller in another server running opensolaris snv_101b, and so far no errors like these were seen in the system 2. up to snv_118 I haven't seen any problems, only now within snv_125 3

Re: [zfs-discuss] SNV_125 MPT warning in logfile

2009-10-23 Thread Bruno Sousa
Hi Adam, How many disks and zpools/zfs's do you have behind that LSI? I have a system with 22 disks and 4 zpools with around 30 zfs's and so far it works like a charm, even during heavy load. The opensolaris release is snv_101b . Bruno Adam Cheal wrote: Cindy: How can I view the bug report you

Re: [zfs-discuss] compressed fs taking up more space than uncompressed equivalent

2009-10-23 Thread Stathis Kamperis
2009/10/23 Gaëtan Lehmann : > > On 23 Oct 09 at 08:46, Stathis Kamperis wrote: > >> 2009/10/23 michael schuster : >>> >>> Stathis Kamperis wrote: Salute. I have a filesystem where I store various source repositories (cvs + git). I have compression enabled, and zfs get

[zfs-discuss] ZFS port to Linux

2009-10-23 Thread Anand Mitra
Hi All, At KQ Infotech, we have always looked at challenging ourselves by trying to scope out new technologies. Currently we are porting ZFS to Linux and would like to share our progress and the challenges faced; we would also like to know your thoughts/inputs regarding our efforts. Though we are

Re: [zfs-discuss] ZFS port to Linux

2009-10-23 Thread Darren J Moffat
Anand Mitra wrote: Hi All, At KQ Infotech, we have always looked at challenging ourselves by trying to scope out new technologies. Currently we are porting ZFS to Linux and would like to share our progress and the challenges faced; we would also like to know your thoughts/inputs regarding our ef

Re: [zfs-discuss] ZFS port to Linux

2009-10-23 Thread Joerg Schilling
Darren J Moffat wrote: > > One of the biggest questions around this effort would be "licensing". > > As far as our understanding goes; CDDL doesn't restrict us from > > modifying ZFS code and releasing it. However GPL and CDDL code cannot > > be mixed, which implies that ZFS cannot be compiled in

[zfs-discuss] zfs recv complains about destroyed filesystem

2009-10-23 Thread Robert Milkowski
Hi, snv_123, x64. zfs recv -F complains it can't open a snapshot it has just destroyed itself, because it was destroyed on the sending side. Other than complaining about it, it finishes successfully. Below is an example where I created a filesystem fs1 with three snapshots of it called snap1, snap2, snap3.
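
The scenario is easy to sketch with hypothetical sender/receiver pools (tank and backup; all names illustrative), following the fs1/snap1..snap3 example described above:

# zfs create tank/fs1
# zfs snapshot tank/fs1@snap1
# zfs snapshot tank/fs1@snap2
# zfs snapshot tank/fs1@snap3
# zfs send -R tank/fs1@snap3 | zfs recv backup/fs1
# zfs destroy tank/fs1@snap2              (drop a snapshot on the sending side)
# zfs snapshot tank/fs1@snap4
# zfs send -R -I @snap3 tank/fs1@snap4 | zfs recv -F backup/fs1
                                          (recv -F removes backup/fs1@snap2, then
                                           complains it cannot open it, yet exits 0)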

[zfs-discuss] problems with netatalk and zfs after upgrade to snv_125

2009-10-23 Thread dirk schelfhout
(sorry for the cross post, I posted this in opensolaris discuss, but I think it belongs here) I can no longer mount 1 of my 2 volumes. They are both on zfs. I can still mount my home, which is on rpool, but can not mount my data which is on a raidz pool. Settings are the same. This is from AppleVolum

Re: [zfs-discuss] SNV_125 MPT warning in logfile

2009-10-23 Thread Adam Cheal
Our config is: OpenSolaris snv_118 x64 1 x LSISAS3801E controller 2 x 23-disk JBOD (fully populated, 1TB 7.2k SATA drives) Each of the two external ports on the LSI connects to a 23-disk JBOD. ZFS-wise we use 1 zpool with 2 x 22-disk raidz2 vdevs (1 vdev per JBOD). Each zpool has one ZFS filesyst

[zfs-discuss] bewailing of the n00b

2009-10-23 Thread Robert
I am in the beginning stage of converting multiple two-drive NAS devices to a more proper single-device storage solution for my home network. Because I have a pretty good understanding of hardware-based storage solutions, originally I was going to go with a traditional server-class motherboar

[zfs-discuss] Resilver speed

2009-10-23 Thread Arne Jansen
Hi, I have a pool of 22 1T SATA disks in a RAIDZ3 configuration. It is filled with files of an average size of 2MB. I filled it randomly to resemble the expected workload in production use. Problems arise when I try to scrub/resilver this pool. This operation takes the better part of a week (!)
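
For reference, scrub/resilver progress on a pool like this can be watched from zpool status; a minimal sketch (pool name illustrative):

# zpool scrub tank
# zpool status tank           (look for the "scrub: ... X.XX% done" line)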

Re: [zfs-discuss] ZFS port to Linux

2009-10-23 Thread Bob Friesenhahn
On Fri, 23 Oct 2009, Anand Mitra wrote: One of the biggest questions around this effort would be “licensing”. As far as our understanding goes; CDDL doesn’t restrict us from modifying ZFS code and releasing it. However GPL and CDDL code cannot be mixed, which implies that ZFS cannot be compiled

Re: [zfs-discuss] Resilver speed

2009-10-23 Thread Bob Friesenhahn
On Fri, 23 Oct 2009, Arne Jansen wrote: 3) Do you have any configuration hints for setting up a pool layout which might help resilver performance? (aside from using hardware RAID instead of RAIDZ) Using fewer drives per vdev should certainly speed up resilver performance. It sounds like your pool

Re: [zfs-discuss] bewailing of the n00b

2009-10-23 Thread Matthias Appel
I also consider myself a "noob" when it comes to ZFS, but I already built myself a ZFS filer and maybe I can enlighten you by sharing my "advanced noob who read a lot about ZFS" thoughts about ZFS > A few examples of "duh" ?s > - How can I effect OCE with ZFS? The traditional 'back up a

Re: [zfs-discuss] moving files from one fs to another, splittin/merging

2009-10-23 Thread Kyle McDonald
Mike Bo wrote: Once data resides within a pool, there should be an efficient method of moving it from one ZFS file system to another. Think Link/Unlink vs. Copy/Remove. Here's my scenario... When I originally created a 3TB pool, I didn't know the best way to carve up the space, so I used a single

Re: [zfs-discuss] ZFS port to Linux

2009-10-23 Thread Kyle McDonald
Bob Friesenhahn wrote: On Fri, 23 Oct 2009, Anand Mitra wrote: One of the biggest questions around this effort would be “licensing”. As far as our understanding goes; CDDL doesn’t restrict us from modifying ZFS code and releasing it. However GPL and CDDL code cannot be mixed, which implies that

Re: [zfs-discuss] bewailing of the n00b

2009-10-23 Thread David Dyer-Bennet
On Fri, October 23, 2009 09:57, Robert wrote: > A few months ago I happened upon ZFS and have been excitedly trying to > learn all I can about it. There is much to admire about ZFS, so I would > like to integrate it into my solution. The simple statement of > requirements is: support for total of

Re: [zfs-discuss] ZFS port to Linux

2009-10-23 Thread Bob Friesenhahn
On Fri, 23 Oct 2009, Kyle McDonald wrote: Along these lines, it's always struck me that most of the restrictions of the GPL fall on the entity who distributes the 'work' in question. A careful reading of GPLv2 shows that restrictions only apply when distributing binaries. I would think that

Re: [zfs-discuss] ZFS port to Linux

2009-10-23 Thread David Dyer-Bennet
On Fri, October 23, 2009 11:57, Kyle McDonald wrote: > > Along these lines, it's always struck me that most of the restrictions > of the GPL fall on the entity who distributes the 'work' in question. > > I would think that distributing the source to a separate original work > for a module, leaves t

Re: [zfs-discuss] ZFS port to Linux

2009-10-23 Thread Kyle McDonald
Bob Friesenhahn wrote: On Fri, 23 Oct 2009, Kyle McDonald wrote: Along these lines, it's always struck me that most of the restrictions of the GPL fall on the entity who distributes the 'work' in question. A careful reading of GPLv2 shows that restrictions only apply when distributing binar

[zfs-discuss] cryptic vdev name from fmdump

2009-10-23 Thread sean walmsley
This morning we got a fault management message from one of our production servers stating that a fault in one of our pools had been detected and fixed. Looking into the error using fmdump gives: fmdump -v -u 90ea244e-1ea9-4bd6-d2be-e4e7a021f006 TIME UUID

Re: [zfs-discuss] SNV_125 MPT warning in logfile

2009-10-23 Thread Jeremy f
What bug# is this under? I'm having what I believe is the same problem. Is it possible to just take the mpt driver from a prior build for the time being? The output below is from the load the zpool scrub creates. This is on a Dell T7400 workstation with a 1068E OEMed LSI. I updated the firmware to the newe

Re: [zfs-discuss] SNV_125 MPT warning in logfile

2009-10-23 Thread Jeremy f
Sorry, running snv_123, indiana On Fri, Oct 23, 2009 at 11:16 AM, Jeremy f wrote: > What bug# is this under? I'm having what I believe is the same problem. Is > it possible to just take the mpt driver from a prior build for the time > being? > The output below is from the load the zpool scrub creates. T

Re: [zfs-discuss] ZFS port to Linux

2009-10-23 Thread Joerg Schilling
Kyle McDonald wrote: > Arguably that line might even be shifted from the act of compiling it, > to the act of actually loading (linking) it into the Kernel, so that > distributing a compiled module might even work the same way. I'm not so > sure about this though. Presumably compiling it befor

Re: [zfs-discuss] ZFS port to Linux

2009-10-23 Thread Joerg Schilling
Bob Friesenhahn wrote: > On Fri, 23 Oct 2009, Kyle McDonald wrote: > > > > Along these lines, it's always struck me that most of the restrictions of > > the > > GPL fall on the entity who distributes the 'work' in question. > > A careful reading of GPLv2 shows that restrictions only apply when

Re: [zfs-discuss] ZFS port to Linux

2009-10-23 Thread Joerg Schilling
"David Dyer-Bennet" wrote: > The problem with this, I think, is that to be used by any significant > number of users, the module has to be included in a distribution, not just > distributed by itself. (And the different distributions have their own > policies on what they will and won't consider

Re: [zfs-discuss] cryptic vdev name from fmdump

2009-10-23 Thread Cindy Swearingen
Hi Sean, A better way probably exists but I use fmdump -eV to identify the pool and the device information (vdev_path) that is listed like this:

# fmdump -eV | more
. . .
pool = test
pool_guid = 0x6de45047d7bde91d
pool_context = 0
pool_failmode = wait

Re: [zfs-discuss] SNV_125 MPT warning in logfile

2009-10-23 Thread Adam Cheal
Just submitted the bug yesterday, under advice of James, so I don't have a number you can refer to...the "change request" number is 6894775 if that helps or is directly related to the future bugid. From what I've seen/read this problem has been around for a while but only rears its ugly head

Re: [zfs-discuss] Zpool issue question

2009-10-23 Thread Cindy Swearingen
Hi Karim, All ZFS storage pools are going to use some amount of space for metadata and in this example it looks like 3 GB. This is what the difference between zpool list and zfs list is telling you. No other way exists to calculate the space that is consumed by metadata. pool space (199 GB) minu
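
A minimal sketch of the comparison, with an illustrative pool name (the figures are whatever your pool reports):

# zpool list tank             (total pool space, e.g. 199G)
# zfs list tank               (space accounted to datasets)
                              (the difference is the metadata overhead described above)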

[zfs-discuss] new google group for ZFS on OSX

2009-10-23 Thread Richard Elling
FYI, The ZFS project on MacOS forge (zfs.macosforge.org) has provided the following announcement: ZFS Project Shutdown 2009-10-23 The ZFS project has been discontinued. The mailing list and repository will also be removed shortly. The community is migrating t

Re: [zfs-discuss] new google group for ZFS on OSX

2009-10-23 Thread Tim Cook
On Fri, Oct 23, 2009 at 2:38 PM, Richard Elling wrote: > FYI, > The ZFS project on MacOS forge (zfs.macosforge.org) has provided the > following announcement: > > ZFS Project Shutdown 2009-10-23 > The ZFS project has been discontinued. The mailing list and > repository wil

Re: [zfs-discuss] new google group for ZFS on OSX

2009-10-23 Thread Richard Elling
On Oct 23, 2009, at 12:42 PM, Tim Cook wrote: On Fri, Oct 23, 2009 at 2:38 PM, Richard Elling > wrote: FYI, The ZFS project on MacOS forge (zfs.macosforge.org) has provided the following announcement: ZFS Project Shutdown 2009-10-23 The ZFS project has been discontinued.

Re: [zfs-discuss] bewailing of the n00b

2009-10-23 Thread Travis Tabbal
> - How can I effect OCE with ZFS? The traditional > 'back up all the data somewhere, add a drive, > re-establish the file system/pools/whatever, then > copy the data back' is not going to work because > there will be nowhere to temporarily 'put' the > data. Add devices to the pool. Preferably in
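
In practice the expansion is a single command; a hedged sketch with illustrative pool and device names (note that at this time a vdev, once added, cannot be removed again):

# zpool add tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0
# zpool list tank             (capacity grows immediately; existing data is not rewritten)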

Re: [zfs-discuss] zpool with very different sized vdevs?

2009-10-23 Thread Travis Tabbal
Hmm.. I expected people to jump on me yelling that it's a bad idea. :) How about this, can I remove a vdev from a pool if the pool still has enough space to hold the data? So could I add it in and mess with it for a while without losing anything? I would expect the system to resilver the data o

Re: [zfs-discuss] zpool with very different sized vdevs?

2009-10-23 Thread Tim Cook
On Fri, Oct 23, 2009 at 3:05 PM, Travis Tabbal wrote: > Hmm.. I expected people to jump on me yelling that it's a bad idea. :) > > How about this, can I remove a vdev from a pool if the pool still has > enough space to hold the data? So could I add it in and mess with it for a > while without los

Re: [zfs-discuss] SNV_125 MPT warning in logfile

2009-10-23 Thread James C. McPherson
Adam Cheal wrote: Just submitted the bug yesterday, under advice of James, so I don't have a number you can refer to...the "change request" number is 6894775 if that helps or is directly related to the future bugid. From what I've seen/read this problem has been around for a while but only re

Re: [zfs-discuss] SNV_125 MPT warning in logfile

2009-10-23 Thread Bruno Sousa
Could the reason for Sun's x4540 Thumper having 6 LSIs be some sort of "hidden" problem found by Sun where the HBA resets, and due to time-to-market pressure the "quick and dirty" solution was to spread the load over multiple HBAs instead of a software fix? Just my 2 cents.. Bruno Adam Cheal wrote: J

Re: [zfs-discuss] cryptic vdev name from fmdump

2009-10-23 Thread sean walmsley
Thanks for this information. We have a weekly scrub schedule, but I ran another just to be sure :-) It completed with 0 errors. Running fmdump -eV gives: TIME CLASS fmdump: /var/fm/fmd/errlog is empty Dumping the faultlog (no -e) does give some output, but again there

Re: [zfs-discuss] SNV_125 MPT warning in logfile

2009-10-23 Thread Bruno Sousa
Hi Cindy, Thank you for the update, but it seems like I can't see any information specific to that bug. I can only see bugs number 6702538 and 6615564, but according to their history, they have been fixed quite some time ago. Can you by any chance present the information about bug 6694909 ? T

Re: [zfs-discuss] SNV_125 MPT warning in logfile

2009-10-23 Thread Richard Elling
On Oct 23, 2009, at 1:48 PM, Bruno Sousa wrote: Could the reason for Sun's x4540 Thumper having 6 LSIs be some sort of "hidden" problem found by Sun where the HBA resets, and due to time-to-market pressure the "quick and dirty" solution was to spread the load over multiple HBAs instead of a software fix

Re: [zfs-discuss] cannot import 'rpool': one or more devices is currently unavailable

2009-10-23 Thread Victor Latushkin
Tommy McNeely wrote: I have a system whose rpool has gone defunct. The rpool is made of a single "disk" which is a raid5EE made of all 8 146G disks on the box. The raid card is the Adaptec brand card. It was running nv_107, but it's currently net booted to nv_121. I have already checked in the

Re: [zfs-discuss] heads up on SXCE build 125 (LU + mirrored root pools)

2009-10-23 Thread Karl Rossing
Is there a CR yet for this? Thanks Karl Cindy Swearingen wrote: Hi everyone, Currently, the device naming changes in build 125 mean that you cannot use Solaris Live Upgrade to upgrade or patch a ZFS root dataset in a mirrored root pool. If you are considering this release for the ZFS log dev

Re: [zfs-discuss] cryptic vdev name from fmdump

2009-10-23 Thread Cindy Swearingen
I'm stumped too. Someone with more FM* experience needs to comment. Cindy On 10/23/09 14:52, sean walmsley wrote: Thanks for this information. We have a weekly scrub schedule, but I ran another just to be sure :-) It completed with 0 errors. Running fmdump -eV gives: TIME

Re: [zfs-discuss] cryptic vdev name from fmdump

2009-10-23 Thread Eric Schrock
On 10/23/09 15:05, Cindy Swearingen wrote: I'm stumped too. Someone with more FM* experience needs to comment. Looks like your errlog may have been rotated out of existence - see if there is a .X or .gz version in /var/fm/fmd/errlog*. The list.suspect fault should be including a location fie
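
Checking for a rotated log is straightforward, since fmdump accepts an explicit log file argument; a quick sketch:

# ls /var/fm/fmd/errlog*
# fmdump -eV /var/fm/fmd/errlog.0     (read a rotated error log directly)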

Re: [zfs-discuss] heads up on SXCE build 125 (LU + mirrored root pools)

2009-10-23 Thread Chris Du
Sorry, do you mean luupgrade from previous versions or from 125 to future versions? I luupgrade from 124 to 125 with mirrored root pool and everything is working fine.

[zfs-discuss] PSARC 2009/571: ZFS deduplication properties

2009-10-23 Thread Craig S. Bell
I haven't seen any mention of it in this forum yet, so FWIW you might be interested in the details of ZFS deduplication mentioned in this recently-filed case. Case log: http://arc.opensolaris.org/caselog/PSARC/2009/571/ Discussion: http://www.opensolaris.org/jive/thread.jspa?threadID=115507 V
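
The case proposes deduplication as a per-dataset property; a hypothetical usage sketch (the exact syntax is whatever ultimately integrates):

# zfs set dedup=on tank
# zfs get dedup tank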

[zfs-discuss] Apple shuts down open source ZFS project

2009-10-23 Thread Craig S. Bell
Sad to hear that Apple is apparently going in another direction. http://www.macrumors.com/2009/10/23/apple-shuts-down-open-source-zfs-project/ -cheers, CSB

Re: [zfs-discuss] cryptic vdev name from fmdump

2009-10-23 Thread Richard Elling
On Oct 23, 2009, at 3:19 PM, Eric Schrock wrote: On 10/23/09 15:05, Cindy Swearingen wrote: I'm stumped too. Someone with more FM* experience needs to comment. Looks like your errlog may have been rotated out of existence - see if there is a .X or .gz version in /var/fm/fmd/errlog*. The

Re: [zfs-discuss] heads up on SXCE build 125 (LU + mirrored root pools)

2009-10-23 Thread Cindy Swearingen
Probably if you try to use any LU operation after you have upgraded to build 125. cs On 10/23/09 16:18, Chris Du wrote: Sorry, do you mean luupgrade from previous versions or from 125 to future versions? I luupgrade from 124 to 125 with mirrored root pool and everything is working fine.

Re: [zfs-discuss] SNV_125 MPT warning in logfile

2009-10-23 Thread Tim Cook
On Fri, Oct 23, 2009 at 3:48 PM, Bruno Sousa wrote: > Could the reason for Sun's x4540 Thumper having 6 LSIs be some sort of "hidden" > problem found by Sun where the HBA resets, and due to time-to-market pressure > the "quick and dirty" solution was to spread the load over multiple HBAs > instead of softw

Re: [zfs-discuss] PSARC 2009/571: ZFS deduplication properties

2009-10-23 Thread BJ Quinn
Anyone know if this means that this will actually show up in SNV soon, or whether it will make 2010.02? (on disk dedup specifically)

Re: [zfs-discuss] heads up on SXCE build 125 (LU + mirrored root pools)

2009-10-23 Thread Kurt Schreiner
Hi, On Mon, Oct 19, 2009 at 05:03:18PM -0600, Cindy Swearingen wrote: > Currently, the device naming changes in build 125 mean that you cannot > use Solaris Live Upgrade to upgrade or patch a ZFS root dataset in a > mirrored root pool. > [...] Just ran into this yesterday... The change to get thin

Re: [zfs-discuss] SNV_125 MPT warning in logfile

2009-10-23 Thread Adam Cheal
I don't think there was any intention on Sun's part to ignore the problem...obviously their target market wants a performance-oriented box and the x4540 delivers that. Each 1068E controller chip supports 8 SAS PHY channels = 1 channel per drive = no contention for channels. The x4540 is a monste

Re: [zfs-discuss] SNV_125 MPT warning in logfile

2009-10-23 Thread Tim Cook
On Fri, Oct 23, 2009 at 6:32 PM, Adam Cheal wrote: > I don't think there was any intention on Sun's part to ignore the > problem...obviously their target market wants a performance-oriented box and > the x4540 delivers that. Each 1068E controller chip supports 8 SAS PHY > channels = 1 channel per

[zfs-discuss] Checksums

2009-10-23 Thread Tim Cook
So, from what I gather, even though the documentation appears to state otherwise, default checksums have been changed to SHA256. Making that assumption, I have two questions. First, is the default updated from fletcher2 to SHA256 automatically for a pool that was created with an older version of
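
For what it's worth, the checksum property can be inspected and changed at any time; a minimal sketch with illustrative names (the setting only affects newly written blocks; existing blocks keep the checksum they were written with):

# zfs get checksum tank
# zfs set checksum=sha256 tank/fs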

Re: [zfs-discuss] cryptic vdev name from fmdump

2009-10-23 Thread sean walmsley
Eric and Richard - thanks for your responses. I tried both: echo ::spa -c | mdb -k zdb -C (not much of a man page for this one!) and was able to match the POOL id from the log (hex 4fcdc2c9d60a5810) with both outputs. As Richard pointed out, I needed to convert the hex value to decimal to get
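
The hex-to-decimal step can be done in the shell; a small sketch using the GUID from the log above:

# printf '%d\n' 0x4fcdc2c9d60a5810    (prints the pool GUID in decimal)
# zdb -C | grep -i guid               (compare against the decimal guid values)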

Re: [zfs-discuss] cryptic vdev name from fmdump

2009-10-23 Thread Eric Schrock
On 10/23/09 16:56, sean walmsley wrote: Eric and Richard - thanks for your responses. I tried both: echo ::spa -c | mdb -k zdb -C (not much of a man page for this one!) and was able to match the POOL id from the log (hex 4fcdc2c9d60a5810) with both outputs. As Richard pointed out, I needed t

[zfs-discuss] Change physical path to a zpool.

2009-10-23 Thread Jon Aimone
Hi, I have a functional OpenSolaris x64 system on which I need to physically move the boot disk, meaning its physical device path will change and probably its cXdX name. When I do this the system fails to boot. The error messages indicate that it's still trying to read from the original path. I

Re: [zfs-discuss] Change physical path to a zpool.

2009-10-23 Thread Jon Aimone
Hi, Check that... I'm on the alias now... Jon Aimone spake thusly, on or about 10/23/09 17:15: Hi, I have a functional OpenSolaris x64 system on which I need to physically move the boot disk, meaning its physical device path will change and probably its cXdX name. When I do this the system fa

Re: [zfs-discuss] SNV_125 MPT warning in logfile

2009-10-23 Thread Adam Cheal
LSI's sales literature on that card specs "128 devices" which I take with a few hearty grains of salt. I agree that with all 46 drives pumping out streamed data, the controller would be overworked BUT the drives will only deliver data as fast as the OS tells them to. Just because the speedometer

Re: [zfs-discuss] SNV_125 MPT warning in logfile

2009-10-23 Thread Richard Elling
On Oct 23, 2009, at 4:46 PM, Tim Cook wrote: On Fri, Oct 23, 2009 at 6:32 PM, Adam Cheal wrote: I don't think there was any intention on Sun's part to ignore the problem...obviously their target market wants a performance-oriented box and the x4540 delivers that. Each 1068E controller chip

Re: [zfs-discuss] heads up on SXCE build 125 (LU + mirrored root pools)

2009-10-23 Thread Chris Du
What luupgrade do you use? I uninstall the lu package in the current build first, then install the new lu package from the version to upgrade to.
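
For reference, the documented LU procedure is roughly to replace the LU packages with those from the target build before running luupgrade; a hedged sketch (media path and BE name are illustrative):

# pkgrm SUNWlucfg SUNWlur SUNWluu
# pkgadd -d /mnt/Solaris_11/Product SUNWlucfg SUNWlur SUNWluu
# luupgrade -u -n newBE -s /mnt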

Re: [zfs-discuss] SNV_125 MPT warning in logfile

2009-10-23 Thread Tim Cook
On Fri, Oct 23, 2009 at 7:17 PM, Adam Cheal wrote: > LSI's sales literature on that card specs "128 devices" which I take with a > few hearty grains of salt. I agree that with all 46 drives pumping out > streamed data, the controller would be overworked BUT the drives will only > deliver data as

Re: [zfs-discuss] SNV_125 MPT warning in logfile

2009-10-23 Thread Tim Cook
On Fri, Oct 23, 2009 at 7:17 PM, Richard Elling wrote: > > Tim has a valid point. By default, ZFS will queue 35 commands per disk. > For 46 disks that is 1,610 concurrent I/Os. Historically, it has proven to > be > relatively easy to crater performance or cause problems with very, very, > very ex
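
One knob often suggested in this situation is the per-device queue depth; a hedged sketch for the tunable of that era (the value 10 is illustrative, and behavior varies by build):

* /etc/system entry, takes effect after reboot
set zfs:zfs_vdev_max_pending = 10

or, live on a running system:

# echo zfs_vdev_max_pending/W0t10 | mdb -kw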

Re: [zfs-discuss] Checksums

2009-10-23 Thread Adam Leventhal
On Fri, Oct 23, 2009 at 06:55:41PM -0500, Tim Cook wrote: > So, from what I gather, even though the documentation appears to state > otherwise, default checksums have been changed to SHA256. Making that > assumption, I have two questions. That's false. The default checksum has changed from fletch

Re: [zfs-discuss] Checksums

2009-10-23 Thread Tim Cook
On Fri, Oct 23, 2009 at 7:19 PM, Adam Leventhal wrote: > On Fri, Oct 23, 2009 at 06:55:41PM -0500, Tim Cook wrote: > > So, from what I gather, even though the documentation appears to state > > otherwise, default checksums have been changed to SHA256. Making that > > assumption, I have two quest

Re: [zfs-discuss] Sun Flash Accelerator F20

2009-10-23 Thread Eric D. Mudama
On Tue, Oct 20 at 21:54, Bob Friesenhahn wrote: On Tue, 20 Oct 2009, Richard Elling wrote: Intel: X-25E read latency 75 microseconds ... but they don't say where it was measured or how big it was... Probably measured using a logic analyzer and measuring the time from the last bit of the r

Re: [zfs-discuss] SNV_125 MPT warning in logfile

2009-10-23 Thread Adam Cheal
And therein lies the issue. The excessive load that causes the IO issues is almost always generated locally from a scrub or a local recursive "ls" used to warm up the SSD-based zpool cache with metadata. The regular network IO to the box is minimal and is very read-centric; once we load the box
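
The metadata warm-up referred to here can be as simple as a discarded recursive listing; an illustrative sketch (mountpoint hypothetical):

# ls -lR /pool002 > /dev/null         (walks every directory, pulling metadata into the ARC/L2ARC)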

Re: [zfs-discuss] SNV_125 MPT warning in logfile

2009-10-23 Thread Richard Elling
On Oct 23, 2009, at 5:32 PM, Tim Cook wrote: On Fri, Oct 23, 2009 at 7:17 PM, Richard Elling > wrote: Tim has a valid point. By default, ZFS will queue 35 commands per disk. For 46 disks that is 1,610 concurrent I/Os. Historically, it has proven to be relatively easy to crater performance o

Re: [zfs-discuss] ZFS port to Linux

2009-10-23 Thread Anurag Agarwal
Hi Joerg, Thanks for this clarification. We understand that we can distribute a ZFS binary under a non-GPL license, as long as it does not use GPL symbols. Our plan regarding ZFS is to first port it to the Linux kernel and then make its binary distributions available for various distributions

Re: [zfs-discuss] SNV_125 MPT warning in logfile

2009-10-23 Thread Adam Cheal
Here is an example of the pool config we use:

# zpool status
  pool: pool002
 state: ONLINE
 scrub: scrub stopped after 0h1m with 0 errors on Fri Oct 23 23:07:52 2009
config:

        NAME        STATE     READ WRITE CKSUM
        pool002     ONLINE       0     0     0
          raidz2    ONLINE

Re: [zfs-discuss] Setting up an SSD ZIL - Need A Reality Check

2009-10-23 Thread Eric D. Mudama
On Tue, Oct 20 at 22:24, Frédéric VANNIERE wrote: You can't use the Intel X25-E because it has a 32 or 64 MB volatile cache that can't be disabled nor flushed by ZFS. I don't believe the above statement is correct. According to anandtech who asked Intel: http://www.anandtech.com/cpuchips

Re: [zfs-discuss] SNV_125 MPT warning in logfile

2009-10-23 Thread Richard Elling
ok, see below... On Oct 23, 2009, at 8:14 PM, Adam Cheal wrote: Here is an example of the pool config we use:

# zpool status
  pool: pool002
 state: ONLINE
 scrub: scrub stopped after 0h1m with 0 errors on Fri Oct 23 23:07:52 2009
config:

        NAME        STATE     READ WRITE CKSUM
        poo