I'm curious whether there are any potential problems with using LVM 
metadevices as ZFS zpool targets. I have a couple of situations where letting 
ZFS use a device directly causes errors on the console (the "bus reset" 
messages shown below) and lots of "stalled" I/O. But as soon as I wrap that 
device inside an LVM metadevice and then use it in the ZFS zpool, things work 
perfectly fine and smoothly (no stalls).
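
For reference, the "wrapping" is nothing fancy - roughly the commands below, 
except that the device names (c1t15d0s0, d100, the pool name) are just 
placeholders and not the exact ones from my boxes:

    # SVM needs a couple of state database replicas before metadevices can be created
    metadb -a -f -c 2 c0t0d0s7

    # simple one-slice concat/stripe metadevice on top of the SSD slice
    metainit d100 1 1 c1t15d0s0

    # point ZFS at the metadevice instead of the raw slice
    zpool create testpool /dev/md/dsk/d100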

Situation 1 is when trying to use Intel X25-E or X25-M SSD disks in a Sun X4240 
server with the LSI SAS controller - I could never get things to run without 
errors, no matter what I tried (multiple LSI controllers and multiple SSD disks).

Jul  8 09:43:31 merope scsi: [ID 365881 kern.info] /p...@0,0/pci10de,3...@f/pci1000,3...@0 (mpt0):
Jul  8 09:43:31 merope  Log info 31126000 received for target 15.
Jul  8 09:43:31 merope  scsi_status=0, ioc_status=804b, scsi_state=c
Jul  8 09:43:31 merope scsi: [ID 365881 kern.info] /p...@0,0/pci10de,3...@f/pci1000,3...@0 (mpt0):
Jul  8 09:43:31 merope  Log info 31126000 received for target 15.
Jul  8 09:43:31 merope  scsi_status=0, ioc_status=804b, scsi_state=c
Jul  8 09:43:31 merope scsi: [ID 107833 kern.warning] WARNING: /p...@0,0/pci10de,3...@f/pci1000,3...@0/s...@f,0 (sd32):
Jul  8 09:43:31 merope  Error for Command: write                   Error Level: Retryable
Jul  8 09:43:31 merope scsi: [ID 107833 kern.notice]    Requested Block: 64256    Error Block: 64256
Jul  8 09:43:31 merope scsi: [ID 107833 kern.notice]    Vendor: ATA               Serial Number: CVEM849300BM
Jul  8 09:43:31 merope scsi: [ID 107833 kern.notice]    Sense Key: Unit Attention
Jul  8 09:43:31 merope scsi: [ID 107833 kern.notice]    ASC: 0x29 (power on, reset, or bus reset occurred), ASCQ: 0x0, FRU: 0x0

Situation 2 is when I installed an X25-E in an X4500 Thumper. Here I didn't see 
any errors on the server console, but performance would "drop" to zero at 
regular intervals (it felt the same as in the LSI case above, just without the 
console errors).

(In situation 1 above, things worked perfectly fine when I used an Adaptec 
controller instead.)

Anyway, when I put the 4GB partition of that SSD disk that I was using for 
testing inside a simple LVM metadevice, all the errors vanished and performance 
increased many times over. And no "hiccups".

But I wonder... is there anything in a setup like this that might be dangerous 
- something that might come back and bite me in the future? LVM (DiskSuite) is 
really mature technology that I've been using without problems on many servers 
for many years, so I think it can be trusted, but still...?

(I use the partition of that SSD-in-an-LVM-metadevice as a SLOG device for the 
ZFS zpools on those servers, and performance is now really *really* good.)
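
The slog part itself is just the usual "zpool add ... log" step, something like 
this (placeholder pool/device names again):

    # add the metadevice-backed slice as a separate ZIL (slog) device
    zpool add tank log /dev/md/dsk/d100

    # confirm it shows up under "logs" in the pool configuration
    zpool status tank
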
-- 
This message posted from opensolaris.org
