Re: [zfs-discuss] Migrating 10TB of data from NTFS is there a simple way?

2009-07-08 Thread Xen Dar
Just for clarity, 90% of my stuff is media, mostly AVIs or MKVs, so I don't think that compresses very well.

Re: [zfs-discuss] Migrating 10TB of data from NTFS is there a simple way?

2009-07-08 Thread Xen Dar
OK, so this is my solution; please be advised I am a total Linux newbie, so I am learning as I go along. I installed OpenSolaris and set up rpool as my base install on a single 1TB drive. I attached one of my NTFS drives to the system, then used a utility called prtparts to get the name of the NTFS drive

Re: [zfs-discuss] Single disk parity

2009-07-08 Thread Haudy Kazemi
Adding additional data protection options is commendable. On the other hand, I feel there are important gaps in the existing feature set that are worthy of a higher priority, not the least of which is the automatic recovery of uberblock / transaction group problems (see Victor Latushkin's

Re: [zfs-discuss] Very slow ZFS write speed to raw zvol

2009-07-08 Thread Jim Klimov
Do you have any older benchmarks on these cards and arrays (in their pre-ZFS life)? Perhaps this is not a ZFS regression but a hardware config issue? Perhaps there's some caching (like per-disk write-through) not enabled on the arrays? As you may know, the ability (and reliability) of such cache

Re: [zfs-discuss] Migrating 10TB of data from NTFS is there a simple way?

2009-07-08 Thread Jim Klimov
True, correction accepted, covering my head with ashes in shame ;) We do use a custom-built package of rsync-3.0.5 with a number of the standard contributed patches applied. To be specific, these: checksum-reading.diff checksum-updating.diff detect-renamed.diff downdate.diff fileflags.diff fs
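For anyone rebuilding this: the contributed patches ship in the separate rsync-patches tarball, which unpacks a patches/ directory into the rsync source tree, and are applied with patch -p1 before configuring. A minimal sketch, using two of the patch names above:

# cd rsync-3.0.5
# patch -p1 < patches/checksum-reading.diff
# patch -p1 < patches/detect-renamed.diff
# ./configure && make && make install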

Re: [zfs-discuss] Booting from detached mirror disk

2009-07-08 Thread William Bauer
By the way, if you try my idea and both disks remain physically attached, both should be found and the mirror will be "intact", regardless of which disk you boot from. If one is physically disconnected, then you will have complaints about the missing disk, but it should still work if everything

Re: [zfs-discuss] zpool import hangs

2009-07-08 Thread William Bauer
Just trying to help since no one has responded. Have you tried importing with an alternate root? We don't know your setup, such as other pools, types of controllers and/or disks, or how your pool was constructed. Try importing something like this: zpool import -R /tank2 -f pool_numeric_ide
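For reference, running zpool import with no arguments scans attached devices and lists importable pools along with their numeric identifiers; you can then force an import by ID under an alternate root (the ID below is a placeholder, and /tank2 is just an example mount point):

# zpool import
# zpool import -R /tank2 -f 6543210987654321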

Re: [zfs-discuss] Booting from detached mirror disk

2009-07-08 Thread William Bauer
Did you run installgrub on both disks? /usr/sbin/installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/cxtydzs0 Or the equivalent. If you can't boot from either, how did either become your boot disk? If you want to use a single mirror member disk to boot from (i.e. for testing), I wouldn
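On a two-way mirror that means running it once per member, each time with that disk's own device path (the device names below are placeholders for your actual mirror halves):

# /usr/sbin/installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t0d0s0
# /usr/sbin/installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0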

[zfs-discuss] Booting from detached mirror disk

2009-07-08 Thread Sunil Sohani
Hi, I have a mirrored boot disk and I am able to boot from either disk. If I detach the mirror, would I be able to boot from the detached disk? Thanks. Sunil

Re: [zfs-discuss] Single disk parity

2009-07-08 Thread Richard Elling
Haudy Kazemi wrote: Daniel Carosone wrote: Sorry, don't have a thread reference to hand just now. http://www.opensolaris.org/jive/thread.jspa?threadID=100296 Note that there's little empirical evidence that this is directly applicable to the kinds of errors (single bit, or otherwise) th

Re: [zfs-discuss] zfs snapshoot of rpool/* to usb removable drives?

2009-07-08 Thread Daniel Carosone
> Thank you! Am I right in thinking that rpool snapshots will include things like swap? If so, is there some way to exclude them? Hi Carl :) You can't exclude them from the send -R with something like --exclude, but you can make sure there are no such snapshots (which aren't useful anyway)

Re: [zfs-discuss] Single disk parity

2009-07-08 Thread Haudy Kazemi
Daniel Carosone wrote: Sorry, don't have a thread reference to hand just now. http://www.opensolaris.org/jive/thread.jspa?threadID=100296 Note that there's little empirical evidence that this is directly applicable to the kinds of errors (single bit, or otherwise) that a single failing d

Re: [zfs-discuss] zfs snapshoot of rpool/* to usb removable drives?

2009-07-08 Thread Lori Alt
On 07/08/09 15:57, Carl Brewer wrote: Thank you! Am I right in thinking that rpool snapshots will include things like swap? If so, is there some way to exclude them? Much like rsync has --exclude? By default, "zfs send -R" will send all the snapshots, including swap and dump. But you
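The usual workaround is to take the recursive snapshot and then destroy the swap and dump snapshots before sending, since their contents are not worth keeping anyway. A sketch, assuming a destination pool named backuppool and a build where send -R skips datasets that lack the named snapshot:

# zfs snapshot -r rpool@backup
# zfs destroy rpool/swap@backup
# zfs destroy rpool/dump@backup
# zfs send -R rpool@backup | zfs receive -Fd backuppool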

Re: [zfs-discuss] zfs snapshoot of rpool/* to usb removable drives?

2009-07-08 Thread Richard Elling
Carl Brewer wrote: Thank you! Am I right in thinking that rpool snapshots will include things like swap? If so, is there some way to exclude them? Much like rsync has --exclude? No. Snapshots are a feature of the dataset, not the pool. So you would have separate snapshot policies for eac

Re: [zfs-discuss] zfs snapshoot of rpool/* to usb removable drives?

2009-07-08 Thread Carl Brewer
Thank you! Am I right in thinking that rpool snapshots will include things like swap? If so, is there some way to exclude them? Much like rsync has --exclude?

Re: [zfs-discuss] zfs root, jumpstart and flash archives

2009-07-08 Thread Enda O'Connor
Hi, for SPARC: 119534-15 and 124630-26; for x86: 119535-15 and 124631-27. Higher revs of these will also suffice. Note these need to be applied to the miniroot of the JumpStart image so that it can then install a ZFS flash archive. Please read the README notes in these for more specific instructions, inc

Re: [zfs-discuss] zfs root, jumpstart and flash archives

2009-07-08 Thread Jean-Noël Mattern
Bob, patches that allow the creation and installation of a flash archive on a zpool are available. For SPARC: 119534-15 (fixes to the /usr/sbin/flarcreate and /usr/sbin/flar commands) and 124630-26 (updates to the install software). For x86: 119535-15 (fixes to the /usr/sbin/flarcreate and /usr/sbi

Re: [zfs-discuss] zfs root, jumpstart and flash archives

2009-07-08 Thread Bob Friesenhahn
On Wed, 8 Jul 2009, Fredrich Maney wrote: Any idea what the Patch ID was? x86: 119535-15, SPARC: 119534. Description of change: "6690473 request to have flash support for ZFS root install". Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ G

Re: [zfs-discuss] zfs root, jumpstart and flash archives

2009-07-08 Thread Lori Alt
On 07/08/09 13:43, Bob Friesenhahn wrote: On Wed, 8 Jul 2009, Jerry K wrote: It has been a while since this has been discussed, and I am hoping that you can provide an update, or time estimate. As we are several months into Update 7, is there any chance of an Update 7 patch, or are we still

Re: [zfs-discuss] zfs root, jumpstart and flash archives

2009-07-08 Thread Fredrich Maney
Any idea what the Patch ID was? fpsm On Wed, Jul 8, 2009 at 3:43 PM, Bob Friesenhahn wrote: > On Wed, 8 Jul 2009, Jerry K wrote: >> It has been a while since this has been discussed, and I am hoping that you can provide an update, or time estimate. As we are several months into Update 7,

Re: [zfs-discuss] zfs root, jumpstart and flash archives

2009-07-08 Thread Bob Friesenhahn
On Wed, 8 Jul 2009, Jerry K wrote: It has been a while since this has been discussed, and I am hoping that you can provide an update, or time estimate. As we are several months into Update 7, is there any chance of an Update 7 patch, or are we still waiting for Update 8? I saw that a Solari

Re: [zfs-discuss] zfs root, jumpstart and flash archives

2009-07-08 Thread Jerry K
Hello Lori, It has been a while since this has been discussed, and I am hoping that you can provide an update, or time estimate. As we are several months into Update 7, is there any chance of an Update 7 patch, or are we still waiting for Update 8? Also, can you share the CR # that you ment

Re: [zfs-discuss] "Poor Man's Cluster" using zpool export and zpool import

2009-07-08 Thread Shawn Joy
Thanks Cindy and Darren

Re: [zfs-discuss] surprisingly poor performance

2009-07-08 Thread Miles Nordin
> "pe" == Peter Eriksson writes: pe> With c1t15d0s0 added as log it takes 1:04.2, but with the same pe> c1t15d0s0 added, but wrapped inside a SVM metadevice the same pe> operation takes 10.4 seconds... so now SVM discards cache flushes, too? great. pgpFnpp1mdyTO.pgp Descriptio

Re: [zfs-discuss] ZFS write I/O stalls

2009-07-08 Thread John Wythe
> I was all ready to write about my frustrations with this problem, but I upgraded to snv_117 last night to fix some iSCSI bugs and now it seems that the write throttling is working as described in that blog. I may have been a little premature. While everything is much improved for Samba an

Re: [zfs-discuss] Single disk parity

2009-07-08 Thread Mark J Musante
On Wed, 8 Jul 2009, Moore, Joe wrote: The copies code is nice because it tries to put each copy "far away" from the others. This does have a significant performance impact on a single spindle, however, because each logical write is written "here" and then a disk seek is needed to write it to
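For context, this is the per-dataset copies property. A quick sketch of enabling it (the dataset name is just an example); note it only affects blocks written after the property is set:

# zfs set copies=2 tank/important
# zfs get copies tank/important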

Re: [zfs-discuss] ZFS write I/O stalls

2009-07-08 Thread John Wythe
> This causes me to believe that the algorithm is not implemented as described in Solaris 10. I was all ready to write about my frustrations with this problem, but I upgraded to snv_117 last night to fix some iSCSI bugs and now it seems that the write throttling is working as described in tha

Re: [zfs-discuss] Single disk parity

2009-07-08 Thread Moore, Joe
Christian Auby wrote: > It's not quite like copies as it's not actually a copy of the data I'm talking about. 10% parity or even 5% could easily fix most disk errors that won't result in a total disk loss. (snip) > I don't see a performance issue if it's not enabled by default though. The co

Re: [zfs-discuss] "Poor Man's Cluster" using zpool export and zpool import

2009-07-08 Thread Cindy . Swearingen
Hi Shawn, I have no experience with this configuration, but you might review the information in this blog: http://blogs.sun.com/erickustarz/entry/poor_man_s_cluster_end ZFS is not a cluster file system and yes, possible data corruption issues exist. Eric mentions this in his blog. You might al

Re: [zfs-discuss] recover from zfs

2009-07-08 Thread Kees Nuyt
(in the spirit of open source, directed back to the list) On Wed, 8 Jul 2009 14:51:55 +0000 (GMT), Stephen C. Bond wrote: > Kees, can you provide an example of how to read from dd cylinder by cylinder, or even better by exact coordinates? That's hard to do; many disks don't tell you the re

Re: [zfs-discuss] "Poor Man's Cluster" using zpool export and zpool import

2009-07-08 Thread Darren J Moffat
Shawn Joy wrote: Is it supported to use zpool export and zpool import to manage disk access between two nodes that have access to the same storage device? What issues exist if the host currently owning the zpool goes down? In this case will using zpool import -f work? Are there possible data corr

[zfs-discuss] "Poor Man's Cluster" using zpool export and zpool import

2009-07-08 Thread Shawn Joy
Is it supported to use zpool export and zpool import to manage disk access between two nodes that have access to the same storage device? What issues exist if the host currently owning the zpool goes down? In this case will using zpool import -f work? Are there possible data corruption issues?
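For reference, the manual failover pattern in question looks like this (the pool name is a placeholder; the danger is that nothing prevents both nodes from importing the pool at once):

node-a# zpool export tank
node-b# zpool import tank
# if node-a died without exporting, the import must be forced:
node-b# zpool import -f tank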

Re: [zfs-discuss] recover data after zpool create

2009-07-08 Thread Carson Gaspar
stephen bond wrote: can you provide an example of how to read from dd cylinder by cylinder? What's a cylinder? That's a meaningless term these days. You dd byte ranges. Pick whatever byte range you want. If you want mythical cylinders, fetch the cylinder size from "format" and use that as yo
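A sketch of reading an arbitrary byte range with dd (the device path and offsets are placeholders; Solaris dd uses iseek= to skip input blocks):

# read 5MB (ten 512KB blocks) starting 100MB into the disk
# dd if=/dev/rdsk/c0t0d0s2 of=/tmp/chunk.bin bs=512k iseek=200 count=10

bs sets the block size, iseek skips that many input blocks before reading, and count limits how many blocks are copied.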

Re: [zfs-discuss] Migrating 10TB of data from NTFS is there a simple way?

2009-07-08 Thread David Magda
On Wed, July 8, 2009 11:55, Jim Klimov wrote: > My typical runs between Unix hosts look like: > > solaris# cd /pool/dumpstore/databases > solaris# while ! rsync -vaP --stats --exclude='*.bak' --exclude='temp' > --partial --append source:/DUMP/snapshots/mysql . ; do sleep 5; echo > "= `date`:
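The pattern being quoted is a retry loop that reruns rsync until it exits cleanly, with --partial and --append letting each retry resume partially-transferred files instead of restarting them. A generalized sketch with placeholder paths:

while ! rsync -vaP --stats --partial --append source:/data/ /pool/dest/ ; do
    sleep 5
    echo "= `date`: rsync interrupted, retrying"
done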

Re: [zfs-discuss] Migrating 10TB of data from NTFS is there a simple way?

2009-07-08 Thread Jim Klimov
I meant to add that due to the sheer amount of data (and time needed) to copy, you really don't want to use copying tools which abort on error, such as MS Explorer. Normally I'd suggest something like FAR in Windows or Midnight Commander in Unix to copy over networked connections (CIFS shares), o

[zfs-discuss] Very slow ZFS write speed to raw zvol

2009-07-08 Thread Leon Verrall
Guys, I have an OpenSolaris x86 box running: SunOS thsudfile01 5.11 snv_111b i86pc i386 i86pc Solaris This has 2 old qla2200 1Gbit FC cards attached. Each bus is connected to an old transtec F/C RAID array. This has a couple of large LUNs that form a single large zpool: r...@thsudfile01:~# zpoo

Re: [zfs-discuss] recover data after zpool create

2009-07-08 Thread stephen bond
Kees, can you provide an example of how to read from dd cylinder by cylinder? Also, if a file is fragmented, is there a marker at the end of the first piece telling where the second is? Thank you stephen

Re: [zfs-discuss] zfs snapshoot of rpool/* to usb removable drives?

2009-07-08 Thread Darren J Moffat
Carl Brewer wrote: G'day, I'm putting together a LAN server with a couple of terabyte HDDs as a mirror (zfs root) on b117 (updated 2009.06). I want to back up snapshots of all of rpool to a removable drive on a USB port - simple & cheap backup media for a two week rolling DR solution - ie: onc

Re: [zfs-discuss] Migrating 10TB of data from NTFS is there a simple way?

2009-07-08 Thread Jim Klimov
First of all, as other posters stressed, your data is not safe by being stored in a single copy, in the first place. Before doing anything to it, make a backup and test the backup if anyhow possible. At least, do it to any data that is more worth than the rest of it ;) As it was stressed in oth

[zfs-discuss] zfs snapshoot of rpool/* to usb removable drives?

2009-07-08 Thread Carl Brewer
G'day, I'm putting together a LAN server with a couple of terabyte HDDs as a mirror (zfs root) on b117 (updated 2009.06). I want to back up snapshots of all of rpool to a removable drive on a USB port - simple & cheap backup media for a two-week rolling DR solution - i.e. once a week a HDD gets
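The mechanism the replies in this thread converge on is roughly this (a sketch; the pool, snapshot, and device names are placeholders, and the USB drive is assumed to carry its own pool):

# zpool create backup c5t0d0          # one-time setup of the USB pool
# zfs snapshot -r rpool@weekly1
# zfs send -R rpool@weekly1 | zfs receive -Fd backup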

Re: [zfs-discuss] NFS load balancing / was: ZFS, ESX , and NFS. oh my!

2009-07-08 Thread Nils Goroll
Hi Miles and all, this is off-topic, but as the discussion has started here: Finally, *ALL THIS IS COMPLETELY USELESS FOR NFS* because L4 hashing can only split up separate TCP flows. The reason why I have spent some time with http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6817942

Re: [zfs-discuss] [nfs-discuss] NFS, ZFS & ESX

2009-07-08 Thread Roch
erik.ableson writes: > Comments inline. On 7 Jul 09, at 19:36, Dai Ngo wrote: >> Without any tuning, the default TCP window size and send buffer size for NFS connections is around 48KB, which is not very optimal for bulk transfer. However, the 1.4MB/s write

Re: [zfs-discuss] Hanging receive

2009-07-08 Thread Andrew Robert Nicols
On Wed, Jul 08, 2009 at 09:41:12AM +0100, Andrew Robert Nicols wrote: > On Wed, Jul 08, 2009 at 08:31:54PM +1200, Ian Collins wrote: >> Andrew Robert Nicols wrote: >>> The Thumper running 112 has continued to experience the issues described by Ian and others. I've just upgraded to 117 and

Re: [zfs-discuss] Problem with mounting ZFS from USB drive

2009-07-08 Thread Victor Latushkin
On 08.07.09 12:30, Darren J Moffat wrote: Karl Dalen wrote: I'm a new user of ZFS and I have an external USB drive which contains a ZFS pool with a file system. It seems that it does not get auto-mounted when I plug in the drive. I'm running osol-0811. How can I manually mount this drive? It has

Re: [zfs-discuss] Hanging receive

2009-07-08 Thread Andrew Robert Nicols
On Wed, Jul 08, 2009 at 08:31:54PM +1200, Ian Collins wrote: > Andrew Robert Nicols wrote: >> The Thumper running 112 has continued to experience the issues described by Ian and others. I've just upgraded to 117 and am having even more issues - I'm unable to receive or roll back snapshots, i

Re: [zfs-discuss] Hanging receive

2009-07-08 Thread Ian Collins
Andrew Robert Nicols wrote: On Wed, Jul 08, 2009 at 08:43:17AM +1200, Ian Collins wrote: Ian Collins wrote: Brent Jones wrote: On Fri, Jul 3, 2009 at 8:31 PM, Ian Collins wrote: Ian Collins wrote: I was doing an incremental send between pools, the rec

Re: [zfs-discuss] Problem with mounting ZFS from USB drive

2009-07-08 Thread Darren J Moffat
Karl Dalen wrote: I'm a new user of ZFS and I have an external USB drive which contains a ZFS pool with a file system. It seems that it does not get auto-mounted when I plug in the drive. I'm running osol-0811. How can I manually mount this drive? It has a pool named rpool on it. Is there any diag
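Since the USB pool is also named rpool, it clashes with the running root pool, so the usual approach is to import it by its numeric ID under a new name (the ID and the name usbrpool below are placeholders):

# zpool import                        # scans devices, lists the pool's numeric ID
# zpool import -R /mnt 6543210987654321 usbrpool

The -R altroot keeps the imported pool's file systems mounted under /mnt instead of over the live system.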

Re: [zfs-discuss] Hanging receive

2009-07-08 Thread Andrew Robert Nicols
On Wed, Jul 08, 2009 at 08:43:17AM +1200, Ian Collins wrote: > Ian Collins wrote: >> Brent Jones wrote: >>> On Fri, Jul 3, 2009 at 8:31 PM, Ian Collins wrote: >>>> Ian Collins wrote: >>>>> I was doing an incremental send between pools, the receive side is locked up and no zfs/z

Re: [zfs-discuss] [nfs-discuss] NFS, ZFS & ESX

2009-07-08 Thread erik.ableson
Comments inline. On 7 Jul 09, at 19:36, Dai Ngo wrote: Without any tuning, the default TCP window size and send buffer size for NFS connections is around 48KB, which is not very optimal for bulk transfer. However, the 1.4MB/s write seems to indicate something else is seriously wrong. My
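For reference, the buffers under discussion are the Solaris TCP defaults, which can be inspected and raised with ndd; the values below are illustrative, not a recommendation from this thread:

# ndd -get /dev/tcp tcp_xmit_hiwat      # current default send buffer
# ndd -get /dev/tcp tcp_recv_hiwat      # current default receive buffer
# ndd -set /dev/tcp tcp_max_buf 4194304
# ndd -set /dev/tcp tcp_xmit_hiwat 1048576
# ndd -set /dev/tcp tcp_recv_hiwat 1048576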

Re: [zfs-discuss] surprisingly poor performance

2009-07-08 Thread Peter Eriksson
Oh, and for completeness: if I wrap 'c1t12d0s0' inside an SVM metadevice and use that to create the "TEST" zpool (without a log), I run the same test command in 36.3 seconds... I.e.:

# metadb -f -a -c3 c1t13d0s0
# metainit d0 1 1 c1t13d0s0
# metainit d2 1 1 c1t12d0s0
# zpool create TEST /dev/md/d

Re: [zfs-discuss] surprisingly poor performance

2009-07-08 Thread Peter Eriksson
You might wanna try one thing I just noticed - wrap the log device inside an SVM (DiskSuite) metadevice - works wonders for the performance on my test server (Sun Fire X4240)... I do wonder what the downsides might be (except for having to fiddle with DiskSuite again). I.e.: # zpool create TEST c1
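A sketch of the experiment being described (device names are placeholders: the metadevice wraps the log slice, and the pool's log vdev points at the md device):

# metadb -f -a -c3 c1t13d0s0            # state database replicas
# metainit d1 1 1 c1t15d0s0             # metadevice over the log slice
# zpool create TEST c1t12d0s0 log /dev/md/dsk/d1

As Miles notes elsewhere in the thread, the likely reason this is faster is that SVM discards cache-flush requests, trading durability for speed.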