Re: [zfs-discuss] replace same sized disk fails with too small error
You mentioned one, so what do you recommend as a workaround? I've tried re-initializing the disks on another system's HW RAID controller, but I still get the same error. -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Re: [zfs-discuss] replace same sized disk fails with too small error
The user DEFINITELY isn't expecting 5 bytes, or, as you meant to say, 5000 bytes; they're expecting 500GB. You know, 536,870,912,000 bytes. But even if the drive manufacturers calculated it that way, the user wouldn't be getting that anyway due to filesystem overhead. Then you have a very stupid user who has been living in a cave. The only reason we incorrectly label memory is that the systems are binary. (Incorrect, because there's one standard and it says that K, M, G and T are powers of 10.) The computer cannot efficiently address non-binary-sized memory. IIRC, some stupid user did indeed sue WD and won, but that was in America (I'm sure the km is 1024 meters in the US). Since that lawsuit the vendors all make sure the specification says how many addressable sectors are on a disk. You make the right-sized disk a big issue. And perhaps it is; however, ZFS has been out for a number of years and no one complained about it before. It's just not a big priority; it's not even on the list. File a bug/rfe if you want this fixed. Casper
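As an aside, the decimal-versus-binary gap Casper describes is easy to check; a quick sketch (plain Python, nothing ZFS-specific):

```python
# A "500 GB" drive label means decimal gigabytes; a binary reading
# of the same number gives a noticeably larger figure.
decimal_bytes = 500 * 10**9   # manufacturer's meaning: 500,000,000,000
binary_bytes = 500 * 2**30    # binary reading: 536,870,912,000 (as in the post)
shortfall = binary_bytes - decimal_bytes

# The drive is about 6.9% "smaller" than the binary expectation.
pct = round(100 * shortfall / binary_bytes, 1)
print(decimal_bytes, binary_bytes, shortfall, pct)
```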
[zfs-discuss] ZFS GUI
Is there a GUI for 2008.11 for ZFS like in S10U6?
Re: [zfs-discuss] replace same sized disk fails with too small error
So you're suggesting I buy 750s to replace the 500s, and then if a 750 fails, buy another bigger drive again? Have you filed a bug/rfe to fix this in ZFS in future? Anyway, you only need to change the 750GB drives if: - all 500GB drives are replaced by 750GB disks - and they're all bigger than the newest 750GB The drives are RMA replacements for the other disks that faulted in the array before. They are the same brand, model and model number; apparently not so under the label, though, but there was no way I could tell that beforehand. That is really weird. Or is this, perhaps, because you use an EFI label on the disks and we now label the disks differently? (I think we make sure that the ZFS label starts at a 128K offset now; before it did not.) Casper
Re: [zfs-discuss] ZFS encryption?? - [Fwd: [osol-announce] SXCE Build 105 available]
Jesus Cea wrote: Nicolas Williams wrote: I'd recommend waiting for ZFS crypto rather than using lofi with ZFS. Wait... for how long? Any schedule? It is now and always has been on the project page at: http://opensolaris.org/os/project/zfs-crypto/ Yes, we have slipped twice now, but this is a very complex feature and we have to get it correct the first time and make sure there are no bad interactions with existing ZFS functionality that has already been integrated or is also in development. I am very interested in ZFS Crypto, although I have lost hope of seeing it in Solaris 10. That hope should never have been there, since I've always said we have no intention to backport that feature to Solaris 10. -- Darren J Moffat
Re: [zfs-discuss] ZFS vs ZFS + HW raid? Which is best?
On Tue, 20 Jan 2009, Orvar Korvar wrote: What does this mean? Does that mean that ZFS + HW raid with raid-5 is not able to heal corrupted blocks? Then this is evidence against ZFS + HW raid, and you should only use ZFS? Yes and no. ZFS will detect corruption that other filesystems won't notice. If your HW raid passes bad data, then ZFS will detect that, but it won't be able to correct defective user data. If ZFS manages the redundancy, then ZFS can detect and correct the bad user data. With recent OpenSolaris there is also the option of setting copies=2 so that corrupted user data can be corrected as long as the ZFS pool itself continues functioning. Bob == Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
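Bob's detect-versus-correct distinction can be illustrated with a toy checksum sketch (plain Python, not ZFS code; the data and names here are made up for illustration):

```python
import hashlib

def checksum(data: bytes) -> str:
    # Stand-in for a block checksum as any checksumming filesystem keeps one.
    return hashlib.sha256(data).hexdigest()

good = b"user data"
stored_sum = checksum(good)

# A corrupted read: checksumming alone is enough to *detect* this...
corrupted = b"user dbta"
assert checksum(corrupted) != stored_sum   # detection

# ...but *correction* needs a redundant copy to fall back on (what ZFS
# gets from its own mirror/raidz or copies=2, and a HW RAID below it
# cannot provide, since the RAID doesn't know the checksums).
replica = b"user data"
recovered = replica if checksum(replica) == stored_sum else None
assert recovered == good                   # correction via redundancy
print("detected and corrected")
```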
Re: [zfs-discuss] replace same sized disk fails with too small error
I believe this is an fdisk issue, but I don't think any of the fdisk engineers hang out on this forum. You might try partitioning the disk on another OS. -- richard Antonius wrote: I'll attach 2 files of output from 2 disks: c4d0 is a current member of the zpool, a sibling (as in a member of the same batch, a couple of serial-number increments different) of the faulted disk to replace, and currently running without issue; c3d0 is a new disk I got back as a replacement for a failed disk, and it's obviously different. It appears the EFI label needs fixing. I just can't get it to stick with any combination of commands I've tried, e.g. removing and resetting all partitions with fdisk -e and trying to recreate with geometry as per the existing pool members, even after trying to dd the first section of all partitions: bash-3.2# fdisk -A 238:0:0:1:0:254:63:1023:1:976773167 c3d0 fdisk: EFI partitions must encompass the entire disk (input numsect: 976773167 - avail: 976760063)
Re: [zfs-discuss] ZFS GUI
Marius van Vuuren wrote: Is there a GUI for 2008.11 for ZFS like in S10U6? It doesn't appear to be in the dev repository, but you should be able to install the packages from S10u6. If you try this, please let us know if it works. Look for packages contributing to webconsole: SUNWasu Sun Java System Application Server (usr) SUNWemcon Spanish Sun Java(TM) Web Console 3.1 (Core) SUNWemctg Spanish Sun Java(TM) Web Console 3.1 (Tags Components) SUNWezfsg Spanish localization for Sun Web Console ZFS administration SUNWfmcon French Sun Java(TM) Web Console 3.1 (Core) SUNWfmctg French Sun Java(TM) Web Console 3.1 (Tags Components) SUNWfzfsg French localization for Sun Web Console ZFS administration SUNWmcon Sun Java(TM) Web Console 3.1 (Core) SUNWmconr Sun Java(TM) Web Console 3.1 (Root) SUNWmcos Implementation of Sun Java(TM) Web Console (3.1) services SUNWmcosx Implementation of Sun Java(TM) Web Console (3.1) services SUNWmctag Sun Java(TM) Web Console 3.1 (Tags Components) SUNWzfsgr ZFS Administration for Sun Java(TM) Web Console (Root) SUNWzfsgu ZFS Administration for Sun Java(TM) Web Console (Usr) -- richard
Re: [zfs-discuss] replace same sized disk fails with too small error
Grab the AOE driver and pull aoelabinit out of the package. They wrote it just for forcing EFI or Sun labels onto disks when the normal Solaris tools get in the way. Coraid's website looks like it's broken at the moment, so you may need to find it elsewhere on the web.
Re: [zfs-discuss] zfs null pointer deref,
Anton B. Rang wrote: Sigh. Richard points out in private email that automatic savecore functionality is disabled in OpenSolaris; you need to manually set up a dump device and save core files if you want them. However, the stack may be sufficient to ID the bug. The dump device is there, you just need to copy the data from the dump device to a file system, using savecore. -- richard
[zfs-discuss] JZ spammer
JZ, cease and desist all this junk e-mail. I am adding you to my spam filters.
Re: [zfs-discuss] ZFS GUI
Thanks, I will give it a shot. I'm installing OS2008.11 on a 4150 with 2 JBODs connected to it ... would like to give our client a nice interface, even if it's just for show. Will let you know if it works. Richard Elling wrote: It doesn't appear to be in the dev repository, but you should be able to install the packages from S10u6. If you try this, please let us know if it works. [package list snipped]
Re: [zfs-discuss] ZFS GUI
I do not seem to have those packages in my repository. Is there something I need to add before I will see them? [earlier quoted messages snipped]
Re: [zfs-discuss] zfs null pointer deref,
Sigh. Richard points out in private email that automatic savecore functionality is disabled in OpenSolaris; you need to manually set up a dump device and save core files if you want them. However, the stack may be sufficient to ID the bug. The dump device is present, so there is no need to set one up. If you enable savecore using dumpadm(1M), you must create the configured savecore directory manually, though. -- julien. http://blog.thilelli.net/
Re: [zfs-discuss] zpool status -x strangeness
Bug ID is 6793967. This problem just happened again.

% zpool status pool1
  pool: pool1
 state: DEGRADED
 scrub: resilver completed after 0h48m with 0 errors on Mon Jan 5 12:30:52 2009
config:

        NAME           STATE     READ WRITE CKSUM
        pool1          DEGRADED     0     0     0
          raidz2       DEGRADED     0     0     0
            c4t8d0s0   ONLINE       0     0     0
            c4t9d0s0   ONLINE       0     0     0
            c4t10d0s0  ONLINE       0     0     0
            c4t11d0s0  ONLINE       0     0     0
            c4t12d0s0  REMOVED      0     0     0
            c4t13d0s0  ONLINE       0     0     0

errors: No known data errors
% zpool status -x
all pools are healthy
# zpool online pool1 c4t12d0s0
% zpool status -x
  pool: pool1
 state: ONLINE
status: One or more devices is currently being resilvered. The pool will continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress for 0h0m, 0.12% done, 2h38m to go
config:

        NAME           STATE     READ WRITE CKSUM
        pool1          ONLINE       0     0     0
          raidz2       ONLINE       0     0     0
            c4t8d0s0   ONLINE       0     0     0
            c4t9d0s0   ONLINE       0     0     0
            c4t10d0s0  ONLINE       0     0     0
            c4t11d0s0  ONLINE       0     0     0
            c4t12d0s0  ONLINE       0     0     0
            c4t13d0s0  ONLINE       0     0     0

errors: No known data errors

Ben

I just put in a (low priority) bug report on this. Ben

This post from close to a year ago never received a response. We just had this same thing happen to another server that is running Solaris 10 U6. One of the disks was marked as removed and the pool degraded, but 'zpool status -x' says all pools are healthy. After doing a 'zpool online' on the disk, it resilvered fine. Any ideas why 'zpool status -x' reports all healthy while 'zpool status' shows a pool in degraded mode? thanks, Ben

We run a cron job that does a 'zpool status -x' to check for any degraded pools. We just happened to find a pool degraded this morning by running 'zpool status' by hand and were surprised that it was degraded, as we didn't get a notice from the cron job.

# uname -srvp
SunOS 5.11 snv_78 i386
# zpool status -x
all pools are healthy
# zpool status pool1
  pool: pool1
 state: DEGRADED
 scrub: none requested
config:

        NAME         STATE     READ WRITE CKSUM
        pool1        DEGRADED     0     0     0
          raidz1     DEGRADED     0     0     0
            c1t8d0   REMOVED      0     0     0
            c1t9d0   ONLINE       0     0     0
            c1t10d0  ONLINE       0     0     0
            c1t11d0  ONLINE       0     0     0

errors: No known data errors

I'm going to look into why the disk is listed as removed. Does this look like a bug with 'zpool status -x'? Ben
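Until the bug is fixed, a cron job could grep the full 'zpool status' output instead of trusting '-x'. A rough sketch (the sample text below is pasted from the output above; on a real host you would replace the here-document with status=$(zpool status)):

```shell
# Workaround check: parse full `zpool status` output rather than relying on
# `zpool status -x`, which can miss a REMOVED device.
status=$(cat <<'EOF'
  pool: pool1
 state: DEGRADED
            c4t12d0s0  REMOVED      0     0     0
EOF
)
# Any of these states in any pool warrants a notification.
if echo "$status" | grep -E 'DEGRADED|FAULTED|REMOVED|UNAVAIL|OFFLINE' >/dev/null
then
    echo "pool problem detected"
else
    echo "pools look healthy"
fi
```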
Re: [zfs-discuss] ZFS GUI
Marius van Vuuren wrote: I do not seem to have those packages in my repository. Is there something I need to add before I will see them? They are not in the repository; look for them on an SXCE or Solaris 10 distribution. -- richard [earlier quoted messages snipped]
[zfs-discuss] Anyone using zfs over coraid aoe?
Hello, Is anyone using zfs over Coraid AoE? I was thinking about creating a bunch of single-disk lblades and then mirroring or raidz-ing them using zfs. Does anyone have any experiences they would like to share?
[zfs-discuss] cifs performance
Hello! I've set up a ZFS/CIFS home storage server, and now have low performance playing movies stored on this ZFS from a Windows client. The server hardware is not new, but on Windows its performance was normal. The CPU is an AMD Athlon (Barton core) 2500+ running at 1.7GHz, with 1024MB RAM, and this storage:

usb      c4t0d0  ST332062-0A-3.AA-298.09GB            /p...@0,0/pci1458,5...@2,2/cd...@1/d...@0,0         tagra err
sil3114  c5d0    ST332062-9QF6EPB-0001-298.09GB       /p...@0,0/pci10de,6...@8/pci-...@6/i...@0/c...@0,0  tagra
sil3114  c5d1    ST332062-9QF6ENL-0001-298.09GB       /p...@0,0/pci10de,6...@8/pci-...@6/i...@0/c...@1,0  tagra
sil3114  c6d0    ST332062-9QF6CYK-0001-298.09GB       /p...@0,0/pci10de,6...@8/pci-...@6/i...@1/c...@0,0  tagra err
sil3114  c6d1    Hitachi-STF202MC00VYR-0001-298.09GB  /p...@0,0/pci10de,6...@8/pci-...@6/i...@1/c...@1,0  tagra
ide      c7d0    ST332062-5QF2WBD-0001-298.09GB       /p...@0,0/pci-...@9/i...@0/c...@0,0                 tagra
ide      c7d1    ST332062-5QF33B2-0001-298.09GB       /p...@0,0/pci-...@9/i...@0/c...@1,0                 tagra
ide      c8d0    ST332062-5QF03RX-0001-298.09GB       /p...@0,0/pci-...@9/i...@1/c...@0,0                 tagra err
ide      c8d1    DEFAULT cyl 1824 alt 2 hd 255 sec 63 /p...@0,0/pci-...@9/i...@1/c...@1,0                 rpool
sil3112  c0d0    drive type unknown                   /p...@0,0/pci10de,6...@8/pci-...@d/i...@0/c...@0,0  rpool
sil3112  c1d0    Hitachi-STF202MC010YK-0001-298.09GB  /p...@0,0/pci10de,6...@8/pci-...@d/i...@1/c...@0,0  tagra

Idle zpool iostat is:

                 capacity     operations    bandwidth
pool            used  avail   read  write   read  write
----------    -----  -----  -----  -----  -----  -----
rpool         71.8G  2.25G     81     19  4.90M  63.9K
  c6d0s0      71.8G  2.25G     81     19  4.90M  63.9K
----------    -----  -----  -----  -----  -----  -----
tagra         2.58T  41.9G      0      0      0      0
  raidz2      2.58T  41.9G      0      0      0      0
    c5d1          -      -      0      0      0      0
    c7d0          -      -      0      0      0      0
    c3t0d0        -      -      0      0      0      0
    c4d0          -      -      0      0      0      0
    c4d1          -      -      0      0      0      0
    c5d0          -      -      0      0      0      0
    c8d0          -      -      0      0      0      0
    c8d1          -      -      0      0      0      0
    c9d0          -      -      0      0      0      0
----------    -----  -----  -----  -----  -----  -----

zpool iostat while playing a movie from the rpool pool is:

                 capacity     operations    bandwidth
pool            used  avail   read  write   read  write
----------    -----  -----  -----  -----  -----  -----
rpool         71.7G  2.25G     58      3  6.94M  31.7K
  c6d0s0      71.7G  2.25G     58      3  6.94M  31.7K

and the movie freezes while playing. zpool iostat while playing a movie from the tagra pool is:

                 capacity     operations    bandwidth
pool            used  avail   read  write   read  write
----------    -----  -----  -----  -----  -----  -----
tagra         2.58T  41.9G     53      0   675K      0
  raidz2      2.58T  41.9G     53      0   675K      0
    c5d1          -      -     23      0  1.28M      0
    c7d0          -      -     20      0  1.15M      0
    c3t0d0        -      -      0      0      0      0
    c4d0          -      -     15      0   870K      0
    c4d1          -      -     17      0   956K      0
    c5d0          -      -     19      0  1.05M      0
    c8d0          -      -     20      0  1.15M      0
    c8d1          -      -     20      0  1.16M      0
    c9d0          -      -     21      0  1.18M      0
----------    -----  -----  -----  -----  -----  -----

and playback is fine, but Explorer browsing often freezes for seconds. What can I do in this situation? Can I tune something on this hardware, or do I need to upgrade some components?
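For what it's worth, the per-disk read bandwidth in the last tagra sample can be totalled and compared against what the pool actually delivers (675K); a quick awk pass over the pasted numbers (just text processing on the post's figures, not a ZFS tool):

```shell
# Sum the per-disk read-bandwidth column (third field) from the tagra
# iostat sample above, normalising M to K. Fields: disk, read-ops, read-bw.
total=$(cat <<'EOF' | awk '
    { v = $3
      if (v ~ /M$/) { sub(/M$/, "", v); v *= 1024 }
      else          { sub(/K$/, "", v) }
      sum += v }
    END { printf "%.0f", sum }'
c5d1 23 1.28M
c7d0 20 1.15M
c4d0 15 870K
c4d1 17 956K
c5d0 19 1.05M
c8d0 20 1.15M
c8d1 20 1.16M
c9d0 21 1.18M
EOF
)
echo "${total}K read from disks vs 675K delivered by the pool"
```

The disks together read roughly 9M/s to deliver 675K/s of data, which hints at heavy read amplification on this wide raidz2 rather than a raw disk-throughput limit.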
Re: [zfs-discuss] zpool status -x strangeness
What's the output of 'zfs upgrade' and 'zpool upgrade'? (I'm just curious - I had a similar situation which seems to be resolved now that I've gone to Solaris 10u6 or OpenSolaris 2008.11.) On Wed, Jan 21, 2009 at 2:11 PM, Ben Miller mil...@eecis.udel.edu wrote: [quoted zpool status output snipped]
[zfs-discuss] zfs pool is full
One of my zfs pools is now full:

e...@tagra:~/store# zpool list
NAME    SIZE   USED   AVAIL  CAP  HEALTH    ALTROOT
rpool    74G   71.9G  2.07G  97%  ONLINE    -
tagra   2.62T  2.58T  41.9G  98%  DEGRADED  -
e...@tagra:~/store# df -h
Filesystem        size  used  avail  capacity  Mounted on
tagra             2.0T   44K     0K      100%  /volumes/tagra
tagra/home        2.0T   32M     0K      100%  /volumes/tagra/home
tagra/home/epiq   2.0T  1.9T     0K      100%  /volumes/tagra/home/epiq
tagra/home/gala   2.0T   78G     0K      100%  /volumes/tagra/home/gala
tagra/home/max    2.0T   41K     0K      100%  /volumes/tagra/home/max

I need to erase some files, but I see:

rm: Unable to remove directory lost-found/d: No space left on device

How can I clean up my pool?
Re: [zfs-discuss] zfs pool is full [SEC=UNCLASSIFIED]
Hi epiq, I would copy stuff off (or zfs send a filesystem somewhere), then destroy that filesystem. Then clean up some space and copy the stuff back with scp -p or zfs send. Ta, --- Cooper Ry Lees HPC / UNIX Systems Administrator - Information Management Services (IMS) Australian Nuclear Science and Technology Organisation E cooper.l...@ansto.gov.au www.ansto.gov.au On 22/01/2009, at 10:12 AM, epiq wrote: [quoted message snipped]
Re: [zfs-discuss] zfs pool is full
epiq wrote: [quoted zpool list / df output snipped] Sounds like CR 5109744, but that was fixed long, long ago. http://bugs.opensolaris.org/view_bug.do?bug_id=5109744 The workaround described therein should work: truncate a file which is not contained in a snapshot. The example given is: cat /dev/null > /test/g3.2 You should also be able to destroy a snapshot to regain space. -- richard
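The truncate-in-place workaround can be demonstrated on any filesystem; a sketch with a scratch file (on ZFS the space only comes back if no snapshot still references the old blocks):

```shell
# Create a scratch file, then truncate it in place. Unlike `rm`, truncation
# allocates no new directory metadata, which is why it works on a 100%-full pool.
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1024 count=64 2>/dev/null   # 64 KB of data
before=$(wc -c < "$f")
cat /dev/null > "$f"                                   # the workaround from CR 5109744
after=$(wc -c < "$f")
echo "before=${before} after=${after}"
rm -f "$f"
```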
Re: [zfs-discuss] cifs performance
On Wed, 21 Jan 2009, epiq wrote: Hello! I've set up a ZFS/CIFS home storage server, and now have low performance playing movies stored on this ZFS from a Windows client. The server hardware is not new, but on Windows its performance was normal. Several people have reported this same problem. They changed their ethernet adaptor to an Intel ethernet interface and the performance problem went away. It was not ZFS's fault. There is always the possibility that you have a slow disk drive or a bad cable. Check /var/adm/messages for any suspicious messages from drivers. Bob == Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
Re: [zfs-discuss] cifs performance
On Wed, Jan 21, 2009 at 5:40 PM, Bob Friesenhahn bfrie...@simple.dallas.tx.us wrote: Several people reported this same problem. They changed their ethernet adaptor to an Intel ethernet interface and the performance problem went away. It was not ZFS's fault. It may not be a ZFS problem, but it is an OpenSolaris problem. The drivers for Realtek and certain other NICs are ... not so great. -B -- Brandon High : bh...@freaks.com
Re: [zfs-discuss] replace same sized disk fails with too small error
Can you recommend a walk-through for this process, or a bit more of a description? I'm not quite sure how I'd use that utility to repair the EFI label.
Re: [zfs-discuss] ZFS GUI
Hi Richard, Currently there is a bug in Sol10u6 causing the ZFS admin GUI to crash. Bug ID: 6764133 Synopsis: ZFS admin gui causing jvm SIGSEGV on s10u6 Not sure if there is a fix already. Regards, Andre W. Richard Elling wrote: [quoted package list snipped]
Re: [zfs-discuss] zfs pool is full [SEC=UNCLASSIFIED]
Oh, that may be the better option, but I don't have a free 2TB of storage to save this whole filesystem to. =\