I'm trying to do a simple LV move from one PV to another:
# pvmove -i 5 -n root /dev/sdc2 /dev/sdb2
No data to move for centos.
The PVs and LV:
# pvdisplay /dev/sd{b,c}2
  --- Physical volume ---
  PV Name               /dev/sdc2
  VG Name               centos
  PV Size               1.82 TiB / not usable 2.09 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              476812
  Free PE               121809
  Allocated PE          355003
  PV UUID               kDqPCm-bEoj-osa9-Ayfi-AntE-vP82-cDDQk2

  --- Physical volume ---
  PV Name               /dev/sdb2
  VG Name               centos
  PV Size               110.79 GiB / not usable 4.46 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              28361
  Free PE               28361
  Allocated PE          0
  PV UUID               P9OGnK-F0Jj-9u11-yzfo-ZAr3-ii6f-BvUMQx
# lvdisplay /dev/centos/root
  --- Logical volume ---
  LV Path                /dev/centos/root
  LV Name                root
  VG Name                centos
  LV UUID                7DhZpE-k0CE-nHZ0-yZ1z-whTo-R48L-634Z5e
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2015-12-27 23:35:51 -0500
  LV Pool name           pool00
  LV Status              available
  # open                 1
  LV Size                28.81 GiB
  Mapped size            16.65%
  Current LE             7375
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:4
# lvs
  LV           VG     Attr       LSize    Pool   Origin Data%  Meta%  Move Log Cpy%Sync Convert
  pool00       centos twi-aotz--    1.26t               75.72  35.33
  root         centos Vwi-aotz--   28.81g pool00        16.65
  root-pre-7.4 centos Vwi---tz-k 1000.00m pool00 root
  root-pre-7.7 centos Vwi---tz-k    1.46g pool00 root
  root-pre-8.4 centos Vwi-a-tz--   17.58g pool00 root   98.34
  snaptest     centos Vwi---tz-k    1.46g pool00 root
Any suggestions as to why pvmove isn't finding anything it can move for
this LV?
I don't want to move the entire PV to a faster PV, just selected LVs.
The faster PV in this case is an SSD that I was planning to move /, /usr
and /var onto. Would I get better bang for my buck, though, by using
lvmcache for the centos VG (leaving /, /usr and /var on the spinning
rust drive) and gain the benefit of caching the other LVs in that same
VG as well? It would have to be write-through caching, though, as I
don't have redundant SSDs (i.e. mirrored, to mitigate a single SSD
failure) or a UPS (which I imagine is necessary for write-back caching).
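
Roughly, what I had in mind for the lvmcache setup is the following
sketch; the cache-pool name is just a placeholder, I've assumed the
whole of /dev/sdb2 would back it, and I gather it would be the thin
pool (pool00) rather than the individual thin LVs that gets cached:

# lvcreate --type cache-pool -l 100%FREE -n ssdcache centos /dev/sdb2
# lvconvert --type cache --cachepool centos/ssdcache \
      --cachemode writethrough centos/pool00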
Is there a way to model the effectiveness of lvmcache? That is, a tool
that will display read:write ratios for given blocks of data? This
would effectively be the same information that lvmcache uses to decide
what to cache, so I'd imagine it should be possible to gather this data
before committing a device to lvmcache.
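
The closest I've come up with myself is watching coarse read:write
ratios on the underlying device with something like iostat or blktrace,
e.g.:

# iostat -d -x 10 sdc
# blktrace -d /dev/sdc2 -o - | blkparse -i -

but that's per-device (or per-partition) rather than per-block, so it
isn't really the hotspot information lvmcache's policy would actually
be working from.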
Or is it even possible to evaluate cache effectiveness once lvmcache
has been deployed? I could deploy the SSD as an lvmcache device, then
decide how cacheable my data really is, and revert to the simpler move
of /, /usr and /var if the tools show lvmcache isn't really providing
any benefit.
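
I assume that once it's live I could at least pull the dm-cache
hit/miss counters back out, something along these lines (the exact
report field names and device-mapper device name are guesses on my
part):

# lvs -o +cache_read_hits,cache_read_misses,cache_write_hits,cache_write_misses centos
# dmsetup status

and then compare read hits to misses after the cache has had some time
to warm up.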
Cheers,
b.