Re: [zfs-discuss] CR 6880994 and pkg fix
On Tue, Mar 23, 2010 at 07:22:59PM -0400, Frank Middleton wrote:
> On 03/22/10 11:50 PM, Richard Elling wrote:
> >> Look again, the checksums are different.
> > Whoops, you are correct, as usual. Just 6 bits out of 256 different...
> > Look which bits are different - digits 24, 53-56 in both cases.

This is very likely an error introduced during the calculation of the hash, rather than an error in the input data. I don't know how that helps narrow down the source of the problem, though. It suggests an experiment: try switching to another checksum algorithm. It may move the problem around, or even make it worse, of course. I'm also reminded of a thread about the implementation of fletcher2 being flawed; perhaps you're better off switching regardless.

> >> o Why is the file flagged by ZFS as fatally corrupted still accessible?
> > This is the part I was hoping to get answers for since AFAIK this
> > should be impossible. Since none of this is having any operational
> > impact, all of these issues are of interest only, but this is a bit scary!

It's only the blocks with bad checksums that should return errors. Maybe you're not reading those, or the transient error doesn't happen the next time, when you actually try to read it / from the other side of the mirror. Repeated errors in the same file could also be a symptom of an error calculating the hash when the file was written. If there's a bit-flipping issue at the root of it, with some given probability, that would invert the probabilities of "correct" and "error" results.

-- Dan.

___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
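Dan's observation - that the differing digits sit at the same offsets in both incidents - is easy to check mechanically by XORing the two checksum values. A minimal sketch, using the first expected/actual pair quoted later in the thread (spaces removed from the hex strings):

```python
# Compare the "expected" and "actual" checksums from one fmdump -eV report.
expected = int("4a027c11b3ba4cecbf274565d5615b7b3ef5fe61b2ed672eec8692f7fd33094a", 16)
actual   = int("4a027c11b3ba4cecbf274567d5615b7b3ef5fe61b2ed672eec86a5b3fd33094a", 16)

# XOR isolates exactly the bits that flipped between the two values.
diff = expected ^ actual
print(f"bits differing: {bin(diff).count('1')}")

# Report which hex digits differ (1-indexed from the left, as in the discussion).
exp_hex, act_hex = f"{expected:064x}", f"{actual:064x}"
changed = [i + 1 for i, (e, a) in enumerate(zip(exp_hex, act_hex)) if e != a]
print(f"differing hex digits: {changed}")
```

Running this confirms the damaged digits are 24 and 53-56, clustered in the second and fourth 64-bit checksum words.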
Re: [zfs-discuss] snapshots as versioning tool
On Tue, Mar 23, 2010 at 11:09 PM, Harry Putnam wrote:
> Matt Cowger writes:
> >> zfs list | grep '@'
> >>
> >> zpool/f...@1154758          324G  -  461G  -
> >> zpool/f...@1208482         6.94G  -  338G  -
> >> zpool/f...@daily.netbackup 1.07G  -  344G  -
> >> zpool/f...@1154758         1.77G  -  242G  -
> >> zpool/f...@1208482         2.26G  -  261G  -
> >> zpool/f...@daily.netbackup  323M  -  266G  -
> >>
> >> First column there shows the size of the snapshot (e.g. how much has
> >> changed).
>
> I'm clearly missing something here. Is that a typo? (your command line)
>
> I can't get results like that without `zfs list -t snapshot'

It depends on the listsnapshots _pool_ property being on. (It is off by default.)

hellride:~$ zpool get listsnapshots rpool
NAME   PROPERTY       VALUE  SOURCE
rpool  listsnapshots  on     local

-- Regards, Cyril
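To see how much space the snapshots in Matt's listing hold in total, the USED column can be tallied mechanically. A small sketch (the f...@ dataset names are left as mangled by the list archive; the K/M/G suffix handling assumes zfs's usual powers-of-1024 human-readable sizes):

```python
# Sum the USED column of the `zfs list` snapshot output quoted above.
SAMPLE = """\
zpool/f...@1154758          324G  -  461G  -
zpool/f...@1208482         6.94G  -  338G  -
zpool/f...@daily.netbackup 1.07G  -  344G  -
zpool/f...@1154758         1.77G  -  242G  -
zpool/f...@1208482         2.26G  -  261G  -
zpool/f...@daily.netbackup  323M  -  266G  -
"""

UNITS = {"K": 1 << 10, "M": 1 << 20, "G": 1 << 30, "T": 1 << 40}

def to_bytes(size: str) -> int:
    """Convert a human-readable size like '6.94G' to bytes."""
    if size[-1] in UNITS:
        return int(float(size[:-1]) * UNITS[size[-1]])
    return int(size)

total = sum(to_bytes(line.split()[1]) for line in SAMPLE.splitlines() if "@" in line)
print(f"total space held by snapshots: {total / (1 << 30):.2f} GiB")
```

This is roughly 336 GiB held by snapshots across the two filesystems - a quick way to spot which snapshots are worth destroying.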
Re: [zfs-discuss] CR 6880994 and pkg fix
On 03/22/10 11:50 PM, Richard Elling wrote:
> Look again, the checksums are different.

Whoops, you are correct, as usual. Just 6 bits out of 256 different...

Last year:
expected 4a027c11b3ba4cec bf274565d5615b7b 3ef5fe61b2ed672e ec8692f7fd33094a
actual   4a027c11b3ba4cec bf274567d5615b7b 3ef5fe61b2ed672e ec86a5b3fd33094a

Last month (obviously a different file):
expected 4b454eec8aebddb5 3b74c5235e1963ee c4489bdb2b475e76 fda3474dd1b6b63f
actual   4b454eec8aebddb5 3b74c5255e1963ee c4489bdb2b475e76 fda354c1d1b6b63f

Look which bits are different - digits 24, 53-56 in both cases. But comparing the bits, there's no discernible pattern. Is this an artifact of the algorithm, made by one erring bit always being at the same offset?

> don't forget the -V flag :-)

I didn't. As mentioned, there are subsequent set-bit errors (14 minutes later), but none for this particular incident. I'll send you the results separately since they are so puzzling. These 16 checksum failures on libdlpi.so.1 were the only fmdump -eV entries for the entire boot sequence, except that it started out with one ereport.fs.zfs.data (whatever that is), for a total of exactly 17 records: 9 in 1 us, then 8 more 40 ms later, also in 1 us. Then nothing for 4 minutes, one more checksum failure ("bad_range_sets ="), then 10 minutes later, two with the set-bits error, one for each disk. That's it.

> o Why is the file flagged by ZFS as fatally corrupted still accessible?

This is the part I was hoping to get answers for, since AFAIK this should be impossible. Since none of this is having any operational impact, all of these issues are of interest only, but this is a bit scary!

> Broken CPU, HBA, bus, memory, or power supply.

No argument there. Doesn't leave much, does it :-). Since the file itself appears to be uncorrupted, and the metadata is consistent for all 16 entries, it would seem that the checksum calculation itself is failing, because everything else appears to be OK in this case.

Is there a way to apply the fletcher2 algorithm interactively, as in sum(1) or cksum(1) (i.e., outside the scope of ZFS), to see if it is in some way pattern-sensitive with this CPU? Since only a small subset of files is affected, this should be easy to verify. Start a scrub to heat things up and then in parallel do checksums in a tight loop...

> Transient failures are some of the most difficult to track down.
> Not all transient failures are random.

Indeed, although this doesn't seem to be random. The hits to libdlpi.so.1 seem to be quite reproducible, as you've seen from the fmdump log, although I doubt this particular scenario will happen again. Can you think of any tools to investigate this? I suppose I could extract the checksum code from ZFS itself to build one, but that would take quite a lot of time.

Is there any documentation that explains the output of fmdump -eV? What are set-bits, for example? I guess not... From man fmdump(1M):

  The error log file contains /Private/ telemetry information used by
  Sun's automated diagnosis software. ... Each problem recorded in the
  fault log is identified by:
    o The time of its diagnosis

So did ZFS really read 8 copies of libdlpi.so.1 within 1 us, wait 40 ms, and then read another 8 copies in 1 us again? I doubt it :-). I bet it took > 1 us just to (mis)calculate the checksum (1.6 GHz, 16-bit CPU).

Thanks -- Frank
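The interactive-checksum experiment Frank asks about doesn't need the ZFS source: fletcher2 is small enough to reimplement for testing. A hedged sketch follows, written from the commonly described fletcher_2_native layout (two interleaved Fletcher streams over little-endian 64-bit words, all arithmetic mod 2^64); the zero-padding of short inputs is this sketch's own assumption, so it is a tool for pattern-sensitivity experiments, not a drop-in replacement for ZFS's code:

```python
import struct

MASK64 = (1 << 64) - 1

def fletcher2(data: bytes):
    """Sketch of fletcher2 as used by ZFS: two interleaved Fletcher
    streams over little-endian 64-bit words, arithmetic mod 2^64.
    Returns the four checksum words (a0, a1, b0, b1)."""
    # Pad to a multiple of 16 bytes (two 64-bit words per iteration) --
    # an assumption of this sketch; ZFS blocks are already multiples.
    if len(data) % 16:
        data += b"\0" * (16 - len(data) % 16)
    a0 = a1 = b0 = b1 = 0
    for off in range(0, len(data), 16):
        w0, w1 = struct.unpack_from("<QQ", data, off)
        a0 = (a0 + w0) & MASK64
        a1 = (a1 + w1) & MASK64
        b0 = (b0 + a0) & MASK64
        b1 = (b1 + a1) & MASK64
    return a0, a1, b0, b1

# A single flipped input bit perturbs the sums in a simple linear way --
# one reason fletcher2 is considered a weak checksum.
base = bytes(64)
flipped = bytearray(base)
flipped[0] ^= 0x01
print(fletcher2(base))
print(fletcher2(bytes(flipped)))
```

Run it in a tight loop over the affected file while a scrub heats up the machine, and compare successive results to see whether the computation itself wobbles on this CPU.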
Re: [zfs-discuss] snapshots as versioning tool
I'm running s10u8, not opensolaris, so I could be a bit behind.

--M

-----Original Message-----
From: zfs-discuss-boun...@opensolaris.org On Behalf Of Bryan Allen
Sent: Tuesday, March 23, 2010 2:14 PM
To: Harry Putnam
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] snapshots as versioning tool

| On 2010-03-23 16:09:05, Harry Putnam wrote:
|
| Matt Cowger writes:
|
| > zfs list | grep '@'
| >
| > zpool/f...@1154758          324G  -  461G  -
| > zpool/f...@1208482         6.94G  -  338G  -
| > zpool/f...@daily.netbackup 1.07G  -  344G  -
| > zpool/f...@1154758         1.77G  -  242G  -
| > zpool/f...@1208482         2.26G  -  261G  -
| > zpool/f...@daily.netbackup  323M  -  266G  -
| >
| > First column there shows the size of the snapshot (e.g. how much has changed).
|
| I'm clearly missing something here. Is that a typo? (your command line)
|
| I can't get results like that without `zfs list -t snapshot'

The syntax for `list` changed at some point, to not list everything by default. Use `-t all` or `-t snapshot`. Presumably Matt is using an older version, or an alias?
--
bda
cyberpunk is dead. long live cyberpunk.
Re: [zfs-discuss] snapshots as versioning tool
| On 2010-03-23 16:09:05, Harry Putnam wrote:
|
| Matt Cowger writes:
|
| > zfs list | grep '@'
| >
| > zpool/f...@1154758          324G  -  461G  -
| > zpool/f...@1208482         6.94G  -  338G  -
| > zpool/f...@daily.netbackup 1.07G  -  344G  -
| > zpool/f...@1154758         1.77G  -  242G  -
| > zpool/f...@1208482         2.26G  -  261G  -
| > zpool/f...@daily.netbackup  323M  -  266G  -
| >
| > First column there shows the size of the snapshot (e.g. how much has changed).
|
| I'm clearly missing something here. Is that a typo? (your command line)
|
| I can't get results like that without `zfs list -t snapshot'

The syntax for `list` changed at some point, to not list everything by default. Use `-t all` or `-t snapshot`. Presumably Matt is using an older version, or an alias?
--
bda
cyberpunk is dead. long live cyberpunk.
Re: [zfs-discuss] snapshots as versioning tool
Matt Cowger writes:

> zfs list | grep '@'
>
> zpool/f...@1154758          324G  -  461G  -
> zpool/f...@1208482         6.94G  -  338G  -
> zpool/f...@daily.netbackup 1.07G  -  344G  -
> zpool/f...@1154758         1.77G  -  242G  -
> zpool/f...@1208482         2.26G  -  261G  -
> zpool/f...@daily.netbackup  323M  -  266G  -
>
> First column there shows the size of the snapshot (e.g. how much has changed).

I'm clearly missing something here. Is that a typo? (your command line)

I can't get results like that without `zfs list -t snapshot'
Re: [zfs-discuss] ZFS Dedup Performance
http://www.bitshop.com/Blogs/tabid/95/EntryId/78/Bug-in-OpenSolaris-SMB-Server-causes-slow-disk-i-o-always.aspx

This explains just how major a bug this issue is, IMHO. The SMB slowdown from Windows 2003 is doing something odd in the kernel, I think, judging from the symptoms - see the tests for rsync performance. Our file move used to bring the server to almost unusable (in fact, some SAN clients would say the iSCSI host disappeared and shut down). Now, during the copy/load on the disks, the iSCSI clients are insanely fast - the only difference is that server/smb is disabled. I think ZFS de-dup just made it appear worse.

-- This message posted from opensolaris.org
Re: [zfs-discuss] ZFS send and receive corruption across a WAN link?
On Mar 23, 2010, at 9:05 AM, Richard Jahnel wrote:
> Not quite brave enough to put dedup into production here.
>
> Concerned about the issues some folks have had when releasing large
> numbers of blocks in one go.

The send/receive dedup is independent of the pool dedup. You do not have to dedup the pool to benefit from send/receive dedup.

-- richard

ZFS storage and performance consulting at http://www.RichardElling.com
ZFS training on deduplication, NexentaStor, and NAS performance
Las Vegas, April 29-30, 2010 http://nexenta-vegas.eventbrite.com
Re: [zfs-discuss] pool use from network poor performance
Hi,

Here are more specs.

MB: K8N4-E SE - AMD Socket 754 CPU - NVIDIA nForce 4 4X - PCI Express architecture - Gigabit LAN - 4 SATA RAID ports - 10 USB 2.0 ports
http://www.asus.com/product.aspx?P_ID=TBx7PakpparxrK89&templete=2

The situation is now this, with ftp:
- I can upload to datapool at ~45 MB/s
- I can download from datapool at only ~750 KB/s

So it is now read performance that is the problem. Could it really be that the nvidia network and sata drivers now share the same IRQ, and that's why performance is slow?

Mar 23 19:35:01 unix: [ID 954099 kern.info] NOTICE: IRQ20 is being shared by drivers with different interrupt levels.

This is just odd, as this issue came up when I only changed the pool's physical disks, changed raidz to raidz2, and updated to build 134.
Re: [zfs-discuss] Moving drives around...
On Tue, March 23, 2010 12:00, Ray Van Dolson wrote:
> Kind of a newbie question here -- or I haven't been able to find great
> search terms for this...
>
> Does ZFS recognize zpool members based on drive serial number or some
> other unique, drive-associated ID? Or is it based off the drive's
> location (c0t0d0, etc)?
>
> I'm wondering because I have a zpool set up across a bunch of drives
> and I am planning to move those drives to another port on the
> controller potentially changing their location -- as well as the
> location of my "boot" zpool (two disks).
>
> Will ZFS detect this and be smart about it or do I need to do something
> like a zfs export ahead of time? What about for the root pool?

ZFS recognizes disks based on various ZFS special blocks written to them. It also keeps a cache file on where things have been lately.

If you export a ZFS pool, swap the physical drives around, and import it, everything should be fine. If you don't export first, you may have to give it a bit of help. And there are pathological cases where, for example, you don't have a link in the /dev/dsk directory, which can cause a default import to not find all the pieces of a pool.

-- David Dyer-Bennet, d...@dd-b.net; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info
Re: [zfs-discuss] Moving drives around...
On Tue, Mar 23, 2010 at 2:00 PM, Ray Van Dolson wrote:
> Kind of a newbie question here -- or I haven't been able to find great
> search terms for this...
>
> Does ZFS recognize zpool members based on drive serial number or some
> other unique, drive-associated ID? Or is it based off the drive's
> location (c0t0d0, etc)?

ZFS makes use of labels and will detect your drives even if you move them around. You can check that with 'zdb -l /dev/rdsk/cXtXdXs0'.

> I'm wondering because I have a zpool set up across a bunch of drives
> and I am planning to move those drives to another port on the
> controller potentially changing their location -- as well as the
> location of my "boot" zpool (two disks).
>
> Will ZFS detect this and be smart about it or do I need to do something
> like a zfs export ahead of time? What about for the root pool?

No need. Same goes for the rpool; you only need to make sure your system will boot from the correct disk.

-- Giovanni
[zfs-discuss] Question: zfs set userquota not working on existing datasets
Greetings all,

I recently applied all patches and upgraded my zpool to version 15 and zfs to version 4 so I could start using the zfs userquota feature. What I've found, though, is that I can only apply quotas on new datasets, not on the existing datasets. Here is an example:

# zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
data  50.7G   145G  50.7G  /data
# zpool upgrade
This system is currently running ZFS pool version 15.
All pools are formatted using this version.
# zfs upgrade
This system is currently running ZFS filesystem version 4.
All filesystems are formatted with the current version.
# zfs set userqu...@user1=50g data
# zfs get userqu...@user1 data
NAME  PROPERTY         VALUE  SOURCE
data  userqu...@user1  -      -
# zfs create data/test
# zfs set userqu...@user1=50g data/test
# zfs get userqu...@user1 data/test
NAME       PROPERTY         VALUE  SOURCE
data/test  userqu...@user1  50G    local

Anyone have an idea of a fix, please? Or is this a known limitation?

Many thanks, Tim
[zfs-discuss] Moving drives around...
Kind of a newbie question here -- or I haven't been able to find great search terms for this...

Does ZFS recognize zpool members based on drive serial number or some other unique, drive-associated ID? Or is it based off the drive's location (c0t0d0, etc)?

I'm wondering because I have a zpool set up across a bunch of drives and I am planning to move those drives to another port on the controller, potentially changing their location -- as well as the location of my "boot" zpool (two disks).

Will ZFS detect this and be smart about it or do I need to do something like a zfs export ahead of time? What about for the root pool?

Thanks, Ray
[zfs-discuss] ZFS on Advanced format (4kb sector) drives
Is there any action required to make ZFS properly align itself when using Advanced Format drives such as the newer WD Green drives? I prefer to dedicate the whole disk to ZFS rather than use slices, although I assume that if I used a slice I could align it manually. Thanks much in advance, guys!

- f
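The alignment question comes down to whether a slice's starting offset is a multiple of the 4 KiB physical sector. A small sketch of that check, assuming 512-byte logical sectors emulated over 4096-byte physical ones (on ZFS, block alignment is additionally governed by the pool's ashift, e.g. ashift=12 for 4 KiB blocks):

```python
LOGICAL = 512    # logical sector size the drive reports
PHYSICAL = 4096  # physical sector size of an Advanced Format drive

def is_aligned(start_lba: int) -> bool:
    """True if a slice starting at this 512-byte LBA lands on a
    4 KiB physical-sector boundary."""
    return (start_lba * LOGICAL) % PHYSICAL == 0

# The classic misaligned start inherited from DOS-era partitioning:
print(is_aligned(63))    # sector 63 -> byte offset 32256, straddles 4 KiB sectors
print(is_aligned(2048))  # sector 2048 -> byte offset 1 MiB, cleanly aligned
```

A misaligned slice turns every 4 KiB write into a read-modify-write of two physical sectors, which is where the performance penalty on these drives comes from.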
Re: [zfs-discuss] ZFS send and receive corruption across a WAN link?
Not quite brave enough to put dedup into production here.

Concerned about the issues some folks have had when releasing large numbers of blocks in one go.
Re: [zfs-discuss] pool use from network poor performance
What does prstat show? We had a lot of trouble here using iSCSI and zvols due to the CPU capping out at speeds of less than 20 MB/s. After simply switching to QLogic fibre HBAs and a file-backed LU, we went to 160 MB/s on that same test platform.
Re: [zfs-discuss] [indiana-discuss] future of OpenSolaris
Wow, they actually did the right thing in the end. This is fantastic. I'm all too happy to eat as much crow as you have to offer. I wonder when (if?) they'll bring back the ability to purchase OpenSolaris subscriptions online. I'm actually so happy right now that I even appreciate Tim's clueless would-be cynicisms :)

On Tue, Mar 23, 2010 at 9:48 AM, Tim Cook wrote:
> On Tue, Mar 23, 2010 at 7:11 AM, Jacob Ritorto wrote:
>> Sorry to beat the dead horse, but I've just found perhaps the only
>> written proof that OpenSolaris is supportable. For those of you who
>> deny that this is an issue, its existence as a supported OS has been
>> recently erased from every other place I've seen on the Oracle sites.
>> Everyone please grab a copy of this before they silently delete it and
>> claim that it never existed. I'm buying a contract right now. I may
>> just take back every mean thing I ever said about Oracle.
>>
>> http://www.sun.com/servicelist/ss/lgscaledcsupprt-us-eng-20091001.pdf
>
> Erased from every site? Assuming when I pointed out several links the
> first go round wasn't enough, how about directly on the opensolaris
> page itself?
> http://www.opensolaris.com/learn/features/availability/
> "Highly available open source based solutions ready to deploy on
> OpenSolaris with full production support from Sun."
> "OpenSolaris enables developers to develop, debug, and globally deploy
> applications faster, with built-in innovative features and with full
> production support from Sun."
> "Full production level support: Both Standard and Premium support
> offerings are available for deployment of Open HA Cluster 2009.06 with
> OpenSolaris 2009.06 with following configurations:"
> etc. etc. etc.
>
> So do you get paid directly by IBM then, or is it more of a
> "consultant" type role?
> --Tim
Re: [zfs-discuss] [indiana-discuss] future of OpenSolaris
On Tue, Mar 23, 2010 at 7:11 AM, Jacob Ritorto wrote:
> Sorry to beat the dead horse, but I've just found perhaps the only
> written proof that OpenSolaris is supportable. For those of you who
> deny that this is an issue, its existence as a supported OS has been
> recently erased from every other place I've seen on the Oracle sites.
> Everyone please grab a copy of this before they silently delete it and
> claim that it never existed. I'm buying a contract right now. I may
> just take back every mean thing I ever said about Oracle.
>
> http://www.sun.com/servicelist/ss/lgscaledcsupprt-us-eng-20091001.pdf

Erased from every site? Assuming when I pointed out several links the first go round wasn't enough, how about directly on the opensolaris page itself?

http://www.opensolaris.com/learn/features/availability/

"Highly available open source based solutions ready to deploy on OpenSolaris with full production support from Sun."

"OpenSolaris enables developers to develop, debug, and globally deploy applications faster, with built-in innovative features and with full production support from Sun."

"Full production level support: Both Standard and Premium support offerings are available for deployment of Open HA Cluster 2009.06 with OpenSolaris 2009.06 with following configurations:"

etc. etc. etc.

So do you get paid directly by IBM then, or is it more of a "consultant" type role?

--Tim
Re: [zfs-discuss] [indiana-discuss] future of OpenSolaris
Sorry to beat the dead horse, but I've just found perhaps the only written proof that OpenSolaris is supportable. For those of you who deny that this is an issue, its existence as a supported OS has been recently erased from every other place I've seen on the Oracle sites. Everyone please grab a copy of this before they silently delete it and claim that it never existed. I'm buying a contract right now. I may just take back every mean thing I ever said about Oracle.

http://www.sun.com/servicelist/ss/lgscaledcsupprt-us-eng-20091001.pdf

On Mon, Mar 1, 2010 at 10:23 PM, Erik Trimble wrote:
> On Mon, 2010-03-01 at 20:52 -0500, Thomas Burgess wrote:
>> "There may be some things we choose not to open source going forward,
>> similar to how MySQL manages certain value-add[s] at the top of the
>> stack," Roberts said. "It's important to understand the plan now is to
>> deliver value again out of our IP investment, while at the same time
>> measuring that with continuing to deliver OpenSolaris in the open."
>>
>> "This will be a balancing act, one that we'll get right
>> sometimes, but may not always."
>>
>> From the feedback data I've seen, customers dislike this type
>> of licensing model most. Dan may or may not be reading this,
>> but I'd strongly discourage this approach. Without knowing
>> more I don't know what alternative I could recommend though..
>> (Too bad I missed that irc meeting..)
>>
>> ./C
>>
>> I may be wrong, but isn't this already what they do? I mean, there is
>> a bunch of proprietary stuff in solaris that didn't make it into
>> opensolaris. I thought this was how they did things anyways, or am i
>> misunderstanding something.
>
> Not quite. The stuff that didn't make it from Solaris Nevada into
> OpenSolaris was pretty much everything that /couldn't/ be open-sourced,
> or was being EOL'd in any case. We didn't really hold anything back
> there.
>
> The better analogy is what Tim Cook pointed out, which is the version
> of OpenSolaris that runs on the 7000-series storage devices. There's
> some stuff on there that isn't going to be putback into the OpenSolaris
> repos.
>
> I don't know, and I certainly can't speak for the project, but I
> suspect the type of enhancements which won't make it out into the
> OpenSolaris repos are indeed ones like we ship with the 7000-series
> hardware. That is, I doubt that you will be able to get an "OpenSolaris
> with Oracle Improvements" software distro/package - the proprietary
> stuff will only be used as part of a package bundle, since Oracle is
> big on one-stop-integrated-solution things.
>
> --
> Erik Trimble
> Java System Support
> Mailstop: usca22-123
> Phone: x17195
> Santa Clara, CA
> Timezone: US/Pacific (GMT-0800)
Re: [zfs-discuss] pool use from network poor performance
Hi,

Here is what has changed in the system:
- replaced 4 sata disks with new, bigger disks
- at the same time, recreated the raidz as raidz2
- updated the OS from b132 to b134

It used to work with the old setup. Have there been some driver changes?
Re: [zfs-discuss] Intel SASUC8I - worth every penny
Heh. The original definition of "I" was inexpensive. It was never meant to be "independent" - I guess that was changed by vendors. The idea all along was to take inexpensive hardware and use software to turn it into a reliable system.

http://portal.acm.org/citation.cfm?id=50214
http://www.cs.cmu.edu/~garth/RAIDpaper/Patterson88.pdf

>> Regarding the 2.5" laptop drives, do the inherent error detection
>> properties of ZFS subdue any concerns over a laptop drive's higher bit
>> error rate or rated MTBF? I've been reading about OpenSolaris and ZFS
>> for several months now and am incredibly intrigued, but have yet to
>> implement the solution in my lab.
>
> Well ... the price difference means you can have mirrors of the laptop
> drives and still save money compared to the "enterprise" ones. With a
> modern patrol-reading (scrub or hardware raid) array setup, and with
> some redundancy, you can re-implement "I" to mean "inexpensive" not
> "independent" in RAID. ;)
>
> //Svein

--
"You can choose your friends, you can choose the deals." - Equity Private
"If Linux is faster, it's a Solaris bug." - Phil Harman
Blog - http://whatderass.blogspot.com/
Twitter - @khyron4eva
Re: [zfs-discuss] pool use from network poor performance
On Mon, Mar 22, 2010 at 10:58:05PM -0700, homerun wrote:
> if i access the datapool from the network (smb, nfs, ftp, sftp, etc.)
> i get only max 200 KB/s speeds
> compared to rpool, which gives XX MB/s speeds to and from the network,
> it is slow.
>
> Any ideas what the reasons might be and how to find them?

Maybe a shared interrupt between the sata controller and the network card, with devices or drivers that don't play well with others.

-- Dan.