Re: [zfs-discuss] Re: Undo/reverse zpool create

2007-06-21 Thread Richard Elling
Joubert Nel wrote: If the device was actually in use on another system, I would expect that libdiskmgmt would have warned you about this when you ran "zpool create". AFAIK, libdiskmgmt is not multi-node aware. It does know about local uses of the disk. Remote uses of the disk, especially thos

[zfs-discuss] ZIL on user specified devices?

2007-06-21 Thread Bryan Wagoner
Quick question, Are there any tunables, or is there any way to specify devices in a pool to use for the ZIL specifically? I've been thinking through architectures to mitigate performance problems on SAN and various other storage technologies where disabling ZIL or cache flushes has been necessa
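For context, separate intent-log ("slog") devices were just being added to ZFS around this time; assuming a build with that support, and with purely hypothetical device names, the ZIL can be pointed at specific devices roughly like this:

# zpool create tank mirror c0t0d0 c0t1d0 log c0t2d0    (dedicate c0t2d0 to the ZIL at pool creation)
# zpool add tank log mirror c0t2d0 c0t3d0              (or add a mirrored log device to an existing pool)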

Re: [zfs-discuss] Z-Raid performance with Random reads/writes

2007-06-21 Thread Ian Collins
Roch Bourbonnais wrote: On 20 Jun 07, at 04:59, Ian Collins wrote: I'm not sure why, but when I was testing various configurations with bonnie++, 3 pairs of mirrors did give about 3x the random read performance of a 6 disk raidz, but with 4 pairs, the random read performance
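For anyone wanting to reproduce the comparison, the two layouts under discussion would be built roughly as follows; the disk names are placeholders only:

# zpool create tank mirror c0d0 c0d1 mirror c1d0 c1d1 mirror c2d0 c2d1   (3 pairs of mirrors)
# zpool create tank raidz c0d0 c0d1 c1d0 c1d1 c2d0 c2d1                  (6-disk raidz)

Random reads can be serviced independently by each mirror pair, while a raidz group behaves more like a single spindle for small random reads, which is consistent with the numbers Ian reports.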

Re: [zfs-discuss] Re: Undo/reverse zpool create

2007-06-21 Thread Eric Schrock
On Thu, Jun 21, 2007 at 11:03:39AM -0700, Joubert Nel wrote: > When I ran "zpool create", the pool got created without a warning. zpool(1M) will disallow creating a pool on the disk if it contains data in active use (mounted fs, zfs pool, dump device, swap, etc). It will warn if it contains a recogni
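To illustrate the distinction Eric draws (disk name hypothetical, and the wording of the messages is approximate): creation is refused outright for a disk in active use, while recognizable but inactive contents only produce a warning that -f overrides:

# zpool create tank c1t0d0
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c1t0d0s0 contains a ufs filesystem.
# zpool create -f tank c1t0d0        (forces creation and destroys the previous contents)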

Re: [zfs-discuss] Proper way to detach attach

2007-06-21 Thread Will Murnane
Run "cfgadm" to see what ports are recognized as hotswappable. Run "cfgadm -cunconfigure portname" and then make sure it's logically disconnected with "cfgadm", then pull the disk and put it in another port. Then run "cfgadm -cconfigure newport" and it'll be ready to be imported again. Will ___

[zfs-discuss] Re: New german white paper on ZFS

2007-06-21 Thread mario heimel
good work!

Re: [zfs-discuss] Re: Best practice for moving FS between pool on same machine?

2007-06-21 Thread Chris Quenelle
Sorry I can't volunteer to test your script. I want to do the steps by hand to make sure I understand them. If I have to do it all again, I'll get in touch. Thanks for the advice! --chris Constantin Gonzalez wrote: Hi, Chris Quenelle wrote: Thanks, Constantin! That sounds like the right an

[zfs-discuss] Re: Undo/reverse zpool create

2007-06-21 Thread Joubert Nel
> Joubert Nel wrote: >> Hi, If I add an entire disk to a new pool by doing "zpool create", is this reversible? I.e. if there was data on that disk (e.g. it was the sole disk in a zpool in another system) can I get this back or is zpool create destructive? > Short

Re: [zfs-discuss] Bug in "zpool history"

2007-06-21 Thread eric kustarz
On Jun 21, 2007, at 8:47 AM, Niclas Sodergard wrote: Hi, I was playing around with NexentaCP and its zfs boot facility. I tried to figure out what commands to run, so I ran zpool history like this # zpool history 2007-06-20.10:19:46 zfs snapshot syspool/[EMAIL PROTECTED] 2007-06-20.10:20:

Re: [zfs-discuss] Z-Raid performance with Random reads/writes

2007-06-21 Thread Richard Elling
Mario Goebbels wrote: Because you have to read the entire stripe (which probably spans all the disks) to verify the checksum. Then I have a wrong idea of what a stripe is. I always thought it's the interleave block size. Nope. A stripe generally refers to the logical block as spread across p

Re: [zfs-discuss] Z-Raid performance with Random reads/writes

2007-06-21 Thread Roch Bourbonnais
On 20 Jun 07, at 04:59, Ian Collins wrote: I'm not sure why, but when I was testing various configurations with bonnie++, 3 pairs of mirrors did give about 3x the random read performance of a 6 disk raidz, but with 4 pairs, the random read performance dropped by 50%: 3x2 Blockread: 22

[zfs-discuss] Re: marvell88sx error in command 0x2f: status 0x51

2007-06-21 Thread Rob Logan
> [hourly] marvell88sx error in command 0x2f: status 0x51 ah, it's some kinda SMART or FMA query. That model: WDC WD3200JD-00KLB0, firmware 08.05J08, serial number WD-WCAMR2427571, supported features: 48-bit LBA, DMA, SMART, SMART self-test, SATA1 compatible, capacity = 625142448 sectors drives d

[zfs-discuss] Proper way to detach attach

2007-06-21 Thread Gary Gendel
Hi, I've got some issues with my 5-disk SATA stack using two controllers. Some of the ports are acting strangely, so I'd like to play around and change which ports the disks are connected to. This means that I need to bring down the pool, swap some connections and then bring the pool back up. I

[zfs-discuss] Bug in "zpool history"

2007-06-21 Thread Niclas Sodergard
Hi, I was playing around with NexentaCP and its zfs boot facility. I tried to figure out what commands to run, so I ran zpool history like this # zpool history 2007-06-20.10:19:46 zfs snapshot syspool/[EMAIL PROTECTED] 2007-06-20.10:20:03 zfs clone syspool/[EMAIL PROTECTED] syspool/myrootfs

Re: [zfs-discuss] creating pool on slice which is mounted

2007-06-21 Thread Tim Foster
On Thu, 2007-06-21 at 06:16 -0700, satish s nandihalli wrote: > Part Tag Flag Cylinders Size Blocks > 7 home wm 3814 - 49769 63.11GB (45956/0/0) 132353280 > --- If I run the command zpool create <7th slice> (shown above, which is mounted as
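A hedged sketch of what is being attempted, with a hypothetical disk name; zpool create should refuse the slice while it is mounted, and -f is only needed afterwards to override the "contains a ufs filesystem" warning:

# zpool create home1 c0t0d0s7     (refused while slice 7 is still mounted)
# umount /export/home             (mount point is an example; remove it from vfstab too if listed there)
# zpool create -f home1 c0t0d0s7  (now succeeds, destroying the old filesystem)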

[zfs-discuss] creating pool on slice which is mounted

2007-06-21 Thread satish s nandihalli
partition> p Current partition table (original): Total disk cylinders available: 49771 + 2 (reserved cylinders) Part Tag Flag Cylinders Size Blocks 7 home wm 3814 - 49769 63.11GB (45956/0/0) 132353280 --- If I run the command zpool create <7th

Re: [zfs-discuss] ZFS-fuse on linux

2007-06-21 Thread Pawel Jakub Dawidek
On Wed, Jun 20, 2007 at 01:25:35PM -0700, mario heimel wrote: > Linux is the first operating system that can boot from RAID-1+0, RAID-Z or RAID-Z2 ZFS; really cool trick to put zfs-fuse in the initramfs. (Solaris can only boot from single-disk or RAID-1 pools.) > http://www.linuxworld.

Re: [zfs-discuss] Z-Raid performance with Random reads/writes

2007-06-21 Thread Mario Goebbels
> Because you have to read the entire stripe (which probably spans all the disks) to verify the checksum. Then I have a wrong idea of what a stripe is. I always thought it's the interleave block size. -mg

Re: [zfs-discuss] Undo/reverse zpool create

2007-06-21 Thread James C. McPherson
Joubert Nel wrote: Hi, If I add an entire disk to a new pool by doing "zpool create", is this reversible? I.e. if there was data on that disk (e.g. it was the sole disk in a zpool in another system) can I get this back or is zpool create destructive? Short answer: you're stuffed, and no, it's

[zfs-discuss] Undo/reverse zpool create

2007-06-21 Thread Joubert Nel
Hi, If I add an entire disk to a new pool by doing "zpool create", is this reversible? I.e. if there was data on that disk (e.g. it was the sole disk in a zpool in another system) can I get this back or is zpool create destructive? Joubert

[zfs-discuss] Re: [Fwd: What Veritas is saying vs ZFS]

2007-06-21 Thread Craig Morgan
Also introduces the Veritas sfop utility, which is the 'simplified' front-end to VxVM/VxFS. As "imitation is the sincerest form of flattery", this smacks of a desperate attempt to prove to their customers that Vx can be just as slick as ZFS. More details at

Re: [zfs-discuss] Re: Slow write speed to ZFS pool (via NFS)

2007-06-21 Thread Roch - PAE
Joe S writes: > After researching this further, I found that there are some known performance issues with NFS + ZFS. I tried transferring files via SMB, and got write speeds on average of 25MB/s. > So I will have my UNIX systems use SMB to write files to my Solaris server. > This seem

Re: [zfs-discuss] Re: Best practice for moving FS between pool on same machine?

2007-06-21 Thread Constantin Gonzalez
Hi, Chris Quenelle wrote: > Thanks, Constantin! That sounds like the right answer for me. Can I use send and/or snapshot at the pool level? Or do I have to use it on one filesystem at a time? I couldn't quite figure this out from the man pages. The ZFS team is working on a zfs send -r (r
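Until that arrives, the move has to be done one filesystem at a time; a minimal sketch, assuming hypothetical pool names oldpool and newpool and a filesystem called home:

# zfs snapshot oldpool/home@move
# zfs send oldpool/home@move | zfs receive newpool/home
# zfs destroy -r oldpool/home     (only after verifying the copy on newpool)

Repeat for each filesystem, then destroy the old pool once everything has been verified.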