[zfs-discuss] Logical Units and ZFS send / receive

2010-08-04 Thread Terry Hull
I have a logical unit created with sbdadm create-lu that I am replicating with zfs send / receive between 2 build 134 hosts. These LUs are iSCSI targets used as VMFS filesystems and ESX RDMs mounted on a Windows 2003 machine. The zfs pool names are the same on both machines. The replicat
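
For reference, replicating a zvol-backed logical unit between two hosts generally follows the pattern below; the pool, dataset, and host names here are illustrative and not Terry's actual configuration:

  # Take a snapshot of the zvol that backs the LU, then send it to the peer host
  zfs snapshot tank/lu-vol@rep1
  zfs send tank/lu-vol@rep1 | ssh peerhost zfs receive -F tank/lu-vol

  # Later, send only the changes since the previous snapshot
  zfs snapshot tank/lu-vol@rep2
  zfs send -i tank/lu-vol@rep1 tank/lu-vol@rep2 | ssh peerhost zfs receive tank/lu-vol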

[zfs-discuss] Corrupt file without filename

2010-08-04 Thread valrh...@gmail.com
I have one corrupt file in my rpool, but when I run "zpool status -v", I don't get a filename, just an address. Any idea how to fix this? Here's the output: p...@dellt7500:~# zpool status -v rpool pool: rpool state: ONLINE status: One or more devices has experienced an error resulting in data

Re: [zfs-discuss] iScsi slow

2010-08-04 Thread Roch
Ross Walker writes: > On Aug 3, 2010, at 12:13 PM, Roch Bourbonnais > wrote: > > > > > Le 27 mai 2010 à 07:03, Brent Jones a écrit : > > > >> On Wed, May 26, 2010 at 5:08 AM, Matt Connolly > >> wrote: > >>> I've set up an iScsi volume on OpenSolaris (snv_134) with these commands:

Re: [zfs-discuss] LTFS and LTO-5 Tape Drives

2010-08-04 Thread Joerg Schilling
"valrh...@gmail.com" wrote: > Has anyone looked into the new LTFS on LTO-5 for tape backups? Any idea how > this would work with ZFS? I'm presuming ZFS send / receive are not going to > work. But it seems rather appealing to have the metadata properly with the > data, and being able to browse

Re: [zfs-discuss] iScsi slow

2010-08-04 Thread Ross Walker
On Aug 4, 2010, at 3:52 AM, Roch wrote: > > Ross Walker writes: > >> On Aug 3, 2010, at 12:13 PM, Roch Bourbonnais >> wrote: >> >>> >>> Le 27 mai 2010 à 07:03, Brent Jones a écrit : >>> On Wed, May 26, 2010 at 5:08 AM, Matt Connolly wrote: > I've set up an iScsi volume on

Re: [zfs-discuss] iScsi slow

2010-08-04 Thread Roch
Ross Asks: So on that note, ZFS should disable the disks' write cache, not enable them despite ZFS's COW properties because it should be resilient. No, because ZFS builds resiliency on top of unreliable parts. It's able to deal with contained failures (lost state) of the disk write

Re: [zfs-discuss] iScsi slow

2010-08-04 Thread Matt Connolly
On 04/08/2010, at 2:13, Roch Bourbonnais wrote: > > Le 27 mai 2010 à 07:03, Brent Jones a écrit : > >> On Wed, May 26, 2010 at 5:08 AM, Matt Connolly >> wrote: >>> I've set up an iScsi volume on OpenSolaris (snv_134) with these commands: >>> >>> sh-4.0# zfs create rpool/iscsi >>> sh-4.0# zfs
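
A typical COMSTAR-style zvol export on snv_134 looks roughly like the following; the size and names are illustrative and are not necessarily Matt's exact steps:

  # Create a dataset to hold iSCSI backing stores and a 20 GB zvol inside it
  zfs create rpool/iscsi
  zfs create -V 20g rpool/iscsi/vol0

  # Register the zvol as a SCSI logical unit and expose it to all initiators
  sbdadm create-lu /dev/zvol/rdsk/rpool/iscsi/vol0
  stmfadm add-view $(sbdadm list-lu | awk '/vol0/ {print $1}')

  # Enable the iSCSI target service and create a target
  svcadm enable -r svc:/network/iscsi/target:default
  itadm create-target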

[zfs-discuss] ZFS performance Tuning

2010-08-04 Thread TAYYAB REHMAN
Hi, I am working with ZFS these days and I am facing some performance issues from the application team, as they say writes are very slow in ZFS compared to UFS. Kindly send me some good reference or book links. I will be very thankful to you. BR, Tayyab
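
A first pass at a slow-write complaint usually starts with observation rather than tuning; something along these lines is a reasonable sketch (the pool name is illustrative):

  # Watch pool-level throughput and per-vdev activity every 5 seconds
  zpool iostat -v mypool 5

  # Watch per-disk service times and queue depths
  iostat -xn 5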

Re: [zfs-discuss] iScsi slow

2010-08-04 Thread Ross Walker
On Aug 4, 2010, at 9:20 AM, Roch wrote: > > > Ross Asks: > So on that note, ZFS should disable the disks' write cache, > not enable them despite ZFS's COW properties because it > should be resilient. > > No, because ZFS builds resiliency on top of unreliable parts. it's able to > deal

Re: [zfs-discuss] ZFS Restripe

2010-08-04 Thread Bob Friesenhahn
On Tue, 3 Aug 2010, Eduardo Bragatto wrote: You're a funny guy. :) Let me re-phrase it: I'm sure I'm getting degradation in performance as my applications are waiting more on I/O now than they used to (based on CPU utilization graphs I have). The impression part is that the reason is the

Re: [zfs-discuss] ZFS Restripe

2010-08-04 Thread Eduardo Bragatto
On Aug 4, 2010, at 12:26 AM, Richard Elling wrote: The tipping point for the change in the first fit/best fit allocation algorithm is now 96%. Previously, it was 70%. Since you don't specify which OS, build, or zpool version, I'll assume you are on something modern. I'm running Solaris 10
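
To see where a pool sits relative to that allocation tipping point, checking its capacity is enough; for example (pool name illustrative):

  # The CAP column shows the percentage of pool space in use
  zpool list mypool

  # The same figure as a single property
  zpool get capacity mypool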

Re: [zfs-discuss] ZFS Restripe

2010-08-04 Thread Eduardo Bragatto
On Aug 4, 2010, at 12:20 AM, Khyron wrote: I notice you use the word "volume" which really isn't accurate or appropriate here. Yeah, it didn't seem right to me, but I wasn't sure about the nomenclature, thanks for clarifying. You may want to get a bit more specific and choose from the old

Re: [zfs-discuss] iScsi slow

2010-08-04 Thread Roch
Ross Walker writes: > On Aug 4, 2010, at 9:20 AM, Roch wrote: > > > > > > > Ross Asks: > > So on that note, ZFS should disable the disks' write cache, > > not enable them despite ZFS's COW properties because it > > should be resilient. > > > > No, because ZFS builds resilienc

Re: [zfs-discuss] snapshot space - miscalculation?

2010-08-04 Thread Scott Meilicke
Are there other file systems underneath daten/backups that have snapshots?

Re: [zfs-discuss] LTFS and LTO-5 Tape Drives

2010-08-04 Thread valrh...@gmail.com
Actually, no. I could care less about incrementals, and multivolume handling. My purpose is to have occasional, long-term archival backup of big experimental data sets. The challenge is keeping everything organized, and readable several years later, where I only need to recall a small subset of

Re: [zfs-discuss] iScsi slow

2010-08-04 Thread Ross Walker
On Aug 4, 2010, at 12:04 PM, Roch wrote: > > Ross Walker writes: >> On Aug 4, 2010, at 9:20 AM, Roch wrote: >> >>> >>> >>> Ross Asks: >>> So on that note, ZFS should disable the disks' write cache, >>> not enable them despite ZFS's COW properties because it >>> should be resilient. >>> >

Re: [zfs-discuss] Corrupt file without filename

2010-08-04 Thread Cindy Swearingen
Maybe it is a temporary file. You might try running a scrub to see if it goes away. I would also use fmdump -eV to see if this disk is having problems. Thanks, Cindy On 08/04/10 01:05, valrh...@gmail.com wrote: I have one corrupt file in my rpool, but when I run "zpool status -v", I don't g
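
Cindy's suggestions translate to roughly these commands, run as root against the pool in question:

  # Re-verify every block in the pool, then re-check the error report
  zpool scrub rpool
  zpool status -v rpool

  # Look for underlying device errors recorded by FMA
  fmdump -eV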

Re: [zfs-discuss] LTFS and LTO-5 Tape Drives

2010-08-04 Thread David Magda
On Wed, August 4, 2010 12:25, valrh...@gmail.com wrote: > Actually, no. I could care less about incrementals, and multivolume > handling. My purpose is to have occasional, long-term archival backup of > big experimental data sets. The challenge is keeping everything organized, > and readable severa

Re: [zfs-discuss] Logical Units and ZFS send / receive

2010-08-04 Thread Richard Elling
On Aug 3, 2010, at 11:58 PM, Terry Hull wrote: > I have a logical unit created with sbdadm create-lu that it I replicating > with zfs send / receive between 2 build 134 hosts. The these LUs are iSCSI > targets used as VMFS filesystems and ESX RDMs mounted on a Windows 2003 > machine. The zf
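
One commonly suggested approach in this situation (an assumption here, not necessarily what Richard goes on to recommend) is to import the logical unit on the receiving host, so the GUID stored in the zvol's LU metadata is reused rather than a new LU being created; dataset names are illustrative:

  # On the receiving host: pick up the replicated zvol as an existing LU,
  # preserving the GUID written into its metadata by sbdadm create-lu
  sbdadm import-lu /dev/zvol/rdsk/tank/lu-vol

  # Then make it visible to initiators again
  stmfadm add-view $(sbdadm list-lu | awk '/lu-vol/ {print $1}')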

Re: [zfs-discuss] ZFS Restripe

2010-08-04 Thread Bob Friesenhahn
On Wed, 4 Aug 2010, Eduardo Bragatto wrote: Checking with iostat, I noticed the average wait time to be between 40ms and 50ms for all disks. Which doesn't seem too bad. Actually, this is quite high. I would not expect such long wait times except for when under extreme load such as a benchma

Re: [zfs-discuss] ZFS performance Tuning

2010-08-04 Thread Richard Elling
On Aug 4, 2010, at 3:22 AM, TAYYAB REHMAN wrote: > Hi, > i am working with ZFS now a days, i am facing some performance issues > from application team, as they said writes are very slow in ZFS w.r.t UFS. > Kindly send me some good reference or books links. i will be very thankful to > you.

[zfs-discuss] vdev using more space

2010-08-04 Thread Karl Rossing
Hi, We have a server running b134. The server runs xen and uses a vdev as the storage. The xen image is running nevada 134. I took a snapshot last night to move the xen image to another server. NAME USED AVAIL REFER MOUNTPOINT vpool/host/snv_130 32.8G
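
To see how much of that growth is snapshot-related versus live data, the space-breakdown properties are helpful; the dataset name below is taken from the listing above:

  # Break USED down into snapshot, dataset, and child contributions
  zfs list -o space -r vpool

  # Or query the properties directly for the xen image dataset
  zfs get usedbysnapshots,usedbydataset,referenced vpool/host/snv_130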

[zfs-discuss] How to identify user-created zfs filesystems?

2010-08-04 Thread Peter Taps
Folks, In my application, I need to present user-created filesystems. For my test, I created a zfs pool called mypool and two file systems called cifs1 and cifs2. However, when I run "zfs list," I see a lot more entries: # zfs list NAME USED AVAIL REFER MOUNTPOINT mypool

Re: [zfs-discuss] ZFS Restripe

2010-08-04 Thread Eduardo Bragatto
On Aug 4, 2010, at 11:18 AM, Bob Friesenhahn wrote: Assuming that your impressions are correct, are you sure that your new disk drives are similar to the older ones? Are they an identical model? Design trade-offs are now often resulting in larger capacity drives with reduced performance.
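
A quick way to confirm whether the newer drives are a different model than the old ones is to dump the device inventory and cross-reference it against the pool layout (pool name illustrative):

  # Vendor, product, revision, serial, and size for every disk the system sees
  iostat -En

  # The devices that make up the pool
  zpool status mypool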

Re: [zfs-discuss] How to identify user-created zfs filesystems?

2010-08-04 Thread Cindy Swearingen
Hi Peter, I don't think we have any property that determines who created the file system. Would this work instead: # zfs list -r mypool NAME USED AVAIL REFER MOUNTPOINT mypool 172K 134G 33K /mypool mypool/cifs1 31K 134G 31K /mypool/cifs1 mypool/cifs2 31K

Re: [zfs-discuss] How to identify user-created zfs filesystems?

2010-08-04 Thread Mark J Musante
You can use 'zpool history -l syspool' to show the username of the person who created the dataset. The history is in a ring buffer, so if too many pool operations have happened since the dataset was created, the information is lost. On Wed, 4 Aug 2010, Peter Taps wrote: Folks, In my app
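
Mark's suggestion looks like this in practice; filtering on 'zfs create' narrows the output to dataset-creation records, each of which carries the invoking user and host (pool name illustrative):

  # -l adds the user, hostname, and zone that issued each command
  zpool history -l mypool | grep 'zfs create'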

Re: [zfs-discuss] Corrupt file without filename

2010-08-04 Thread valrh...@gmail.com
Oooh... Good call! I scrubbed the pool twice, then it showed a real filename from an old snapshot that I had attempted to delete before (like a month ago), and gave an error, which I subsequently forgot about. I deleted the snapshot and cleaned up a few other snapshots, cleared the error, rescru

Re: [zfs-discuss] Logical Units and ZFS send / receive

2010-08-04 Thread Terry Hull
> From: Richard Elling > Date: Wed, 4 Aug 2010 11:05:21 -0700 > Subject: Re: [zfs-discuss] Logical Units and ZFS send / receive > > On Aug 3, 2010, at 11:58 PM, Terry Hull wrote: >> I have a logical unit created with sbdadm create-lu that it I replicating >> with zfs send / receive between 2 bu

Re: [zfs-discuss] Corrupt file without filename

2010-08-04 Thread Cindy Swearingen
Because this is a non-redundant root pool, you should still check fmdump -eV to make sure the corrupted files aren't due to some ongoing disk problems. cs On 08/04/10 13:45, valrh...@gmail.com wrote: Oooh... Good call! I scrubbed the pool twice, then it showed a real filename from an old snaps

Re: [zfs-discuss] ZFS Restripe

2010-08-04 Thread Bob Friesenhahn
On Wed, 4 Aug 2010, Eduardo Bragatto wrote: I will also start using rsync v3 to reduce the memory foot print, so I might be able to give back some RAM to ARC, and I'm thinking maybe going to 16GB RAM, as the pool is quite large and I'm sure more ARC wouldn't hurt. It is definitely a wise ide
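
If memory does become the constraint, the current ARC consumption can be checked and an upper bound set along these lines; the 12 GB cap is purely illustrative, and the /etc/system setting takes effect after a reboot:

  # Current ARC size and target, in bytes
  kstat -p zfs:0:arcstats:size zfs:0:arcstats:c

  # Cap the ARC at 12 GB by adding this line to /etc/system:
  # set zfs:zfs_arc_max = 12884901888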

Re: [zfs-discuss] ZFS Restripe

2010-08-04 Thread Richard Elling
On Aug 4, 2010, at 9:03 AM, Eduardo Bragatto wrote: > On Aug 4, 2010, at 12:26 AM, Richard Elling wrote: > >> The tipping point for the change in the first fit/best fit allocation >> algorithm is >> now 96%. Previously, it was 70%. Since you don't specify which OS, build, >> or zpool version, I'

[zfs-discuss] Splitting root mirror to prep for re-install

2010-08-04 Thread Chris Josephes
I have a host running snv_133 with a root mirror pool that I'd like to rebuild with a fresh install on new hardware, but I still have data on the pool that I would like to preserve. Given an rpool with disks c7d0s0 and c6d0s0, I think the following process will do what I need: 1. Run these comm

Re: [zfs-discuss] Splitting root mirror to prep for re-install

2010-08-04 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Chris Josephes > > I have a host running snv_133 with a root mirror pool that I'd like to rebuild with a fresh install on new hardware, but I still have data on the pool that I would like t

Re: [zfs-discuss] Splitting root mirror to prep for re-install

2010-08-04 Thread Mark Musante
You can also use the "zpool split" command and save yourself having to do the zfs send|zfs recv step - all the data will be preserved. "zpool split rpool preserve" does essentially everything up to and including the "zpool export preserve" commands you listed in your original email. Just don'
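
In concrete terms, Mark's shortcut looks roughly like this; the rename on import is optional and the name "oldrpool" is illustrative:

  # Detach one side of the mirror into a new, already-exported pool named "preserve"
  zpool split rpool preserve

  # After reinstalling, bring the preserved data back under a different name
  # so it does not clash with the new rpool
  zpool import preserve oldrpool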

Re: [zfs-discuss] Logical Units and ZFS send / receive

2010-08-04 Thread Richard Elling
On Aug 4, 2010, at 1:27 PM, Terry Hull wrote: >> From: Richard Elling >> Date: Wed, 4 Aug 2010 11:05:21 -0700 >> Subject: Re: [zfs-discuss] Logical Units and ZFS send / receive >> >> On Aug 3, 2010, at 11:58 PM, Terry Hull wrote: >>> I have a logical unit created with sbdadm create-lu that it I r

Re: [zfs-discuss] Splitting root mirror to prep for re-install

2010-08-04 Thread Chris Josephes
> So, after rebuilding, you don't want to restore the same OS that you're currently running. But there are some files you'd like to save for after you reinstall. Why not just copy them off somewhere, in a tarball or something like that? It's about 200+ gigs of files. If I had a

Re: [zfs-discuss] Splitting root mirror to prep for re-install

2010-08-04 Thread Chris Josephes
> You can also use the "zpool split" command and save yourself having to do the zfs send|zfs recv step - all the data will be preserved. > > "zpool split rpool preserve" does essentially everything up to and including the "zpool export preserve" commands you listed in your original email.

Re: [zfs-discuss] Logical Units and ZFS send / receive

2010-08-04 Thread Terry Hull
> From: Richard Elling > Date: Wed, 4 Aug 2010 18:40:49 -0700 > To: Terry Hull > Cc: "zfs-discuss@opensolaris.org" > Subject: Re: [zfs-discuss] Logical Units and ZFS send / receive > > On Aug 4, 2010, at 1:27 PM, Terry Hull wrote: >>> From: Richard Elling >>> Date: Wed, 4 Aug 2010 11:05:21