Re: [zfs-discuss] Adding ZIL to pool questions

2010-08-01 Thread Jim Doyle
You probably would not notice the performance effect of an SSD ZIL on a home
network; the price of the ticket may not be worth the ride for you. OTOH,
you would notice a significant improvement by using that SSD as an L2ARC
device. Because the seek latency on consumer 1TB drives is so long, the L2ARC
would definitely make access to the pool feel faster: the working set of
files that your applications frequently reference will sit up front in the
L2 cache, while archival and infrequently touched items stay parked on the
storage pool's cylinders.
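
A minimal sketch of that setup, assuming the pool is named tank and the SSD
shows up as the hypothetical device c2t0d0:

  # zpool add tank cache c2t0d0   # attach the SSD as an L2ARC (cache) vdev
  # zpool iostat -v tank          # the cache device now reports its own stats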

On a small office network, OTOH, a ZIL makes a big difference. For instance, if
you had 10 software developers with their home directories all exported from a
ZFS box, adding an NVRAM ZIL would significantly improve performance. That's
because developers often compile hundreds of files at a time, several times per
hour, plus updates to the files' atime attr - and since NFS turns those small
writes and metadata updates into synchronous operations that must pass through
the ZIL, that particular scale of operation is greatly improved by an NVRAM ZIL.
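
A minimal sketch of both knobs, assuming a pool named tank with home
directories under tank/home and hypothetical slog devices c3t0d0/c4t0d0:

  # zpool add tank log mirror c3t0d0 c4t0d0   # mirrored slog survives one device dying
  # zfs set atime=off tank/home               # skip the atime update on every read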

If I were to use a ZIL again, I'd use something like the ACARD DDR-2 SATA
boxes, and not an SSD or an iRAM.

-- Jim


Re: [zfs-discuss] Fwd: zpool import despite missing log [PSARC/2010/292 Self Review]

2010-07-31 Thread Jim Doyle
A solution to this problem would be my early Christmas present! 

Here is how I lost access to an otherwise healthy mirrored pool two months ago:

A box running snv_130 with two disks in a mirror and an iRAM battery-backed
ZIL device was shut down in an orderly fashion and powered down normally.
While I was away on travel, the PSU in the PC died while in its lowest-power
standby state - this caused the Li battery in the iRAM to discharge, and all
of the SLOG contents in the DRAM went poof.

Powered the box back up... 'zpool import -f tank' failed to bring the pool
back online. After much research, I found the 'logfix' tool, got it to compile
on another snv_122 box, and followed the directions to synthesize a forged log
device header using the guid of the original device extracted from the vdev
list. This failed to work, despite the binary running cleanly and the guids
looking correct when inspected with 'zdb -l spoofed_new_logdev'.
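
For anyone retracing this, the inspection step I mean is just reading the
vdev labels back off the forged device (the device path here is hypothetical):

  # zdb -l /dev/dsk/c0t3d0s0 | grep guid   # guid must match the dead slog's guid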

What's intriguing is that zpool is not even properly reporting the 'missing
device'. See the output below from zpool, then zdb - notice that zdb shows
the remnants of a vdev for the log device, but with guid = 0:


# zpool import
  pool: tank
    id: 6218740473633775200
 state: UNAVAIL
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
        devices and try again.
   see: http://www.sun.com/msg/ZFS-8000-6X
config:

        tank         UNAVAIL  missing device
          mirror-0   ONLINE
            c0t1d0   ONLINE
            c0t2d0   ONLINE

        Additional devices are known to be part of this pool, though their
        exact configuration cannot be determined.



# zdb -e tank

Configuration for import:
        vdev_children: 2
        version: 22
        pool_guid: 6218740473633775200
        name: 'tank'
        state: 0
        hostid: 9271202
        hostname: 'eon'
        vdev_tree:
            type: 'root'
            id: 0
            guid: 6218740473633775200
            children[0]:
                type: 'mirror'
                id: 0
                guid: 5245507142600321917
                metaslab_array: 23
                metaslab_shift: 33
                ashift: 9
                asize: 1000188936192
                is_log: 0
                children[0]:
                    type: 'disk'
                    id: 0
                    guid: 15634594394239615149
                    phys_path: '/p...@0,0/pci1458,b...@11/d...@2,0:a'
                    whole_disk: 1
                    DTL: 55
                    path: '/dev/dsk/c0t1d0s0'
                    devid: 'id1,s...@sata_st31000333as9te1jx8c/a'
                children[1]:
                    type: 'disk'
                    id: 1
                    guid: 3144903288495510072
                    phys_path: '/p...@0,0/pci1458,b...@11/d...@1,0:a'
                    whole_disk: 1
                    DTL: 54
                    path: '/dev/dsk/c0t2d0s0'
                    devid: 'id1,s...@sata_st31000528as9vp2kwam/a'
            children[1]:
                type: 'missing'
                id: 1
                guid: 0
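
For what it's worth, my understanding of the PSARC/2010/292 case in the
subject line is that it adds an import option to accept exactly this
situation, so once it integrates the recovery would hopefully be just
(untested on my part):

  # zpool import -m tank   # -m: import even though the log device is missing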