Re: [linux-lvm] indistinguishable column names ( BA Start Start Start Start )

2023-08-22 Thread Marian Csontos
Hi Roland, is `lvs --reportformat json` or `lvs --nameprefixes` good enough?
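For example (an untested sketch, reusing the field list from your mail):

# pvs --units s --reportformat json -o pv_name,pv_ba_start,seg_start,seg_start_pe,pvseg_start
# pvs --units s --noheadings --nameprefixes --separator ',' \
    -o pv_name,pv_ba_start,seg_start,seg_start_pe,pvseg_start

The first prints the internal field names as JSON keys; the second prints one
LVM2_FIELD_NAME='value' pair per column, which is easy to split in a script.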

On Sat, Aug 19, 2023 at 6:06 PM Roland  wrote:

>  > furthermore, it's a little bit weird that some columns are printed
> by default when using -o, is
>  > there an easier way to remove those besides explicitly removing them
> one by one with several -o options ? "-o-opt1,-opt2,..." doesn't work
>
> sorry for this noise, i was too dumb for that, "-o+opt1,opt2
> -o-opt3,opt4" works as desired (as documented in the manpage)
>
>  > themselves, and there also seems to be no way to add separators in between
> (like with printf) for separation/formatting
>
> also sorry for this, as we can use --separator=","  (apparently I had
> too much coffee and had overlooked that in the manpage)
>
> the question regarding native column headers/description remains
>
> roland
>
> Am 19.08.23 um 17:20 schrieb Roland:
> > hello,
> >
> > does somebody know how we can have native (i.e. non-translated) column
> > field names in the first line of the output of pvs ?
> >
> > in its current implementation, the output is hard to read, difficult to
> > distinguish and also not scriptable/parseable, as whitespace is used
> > as the field separator and there are fields which also contain whitespace
> > themselves, and there also seems to be no way to add separators in between
> > (like with printf) for separation/formatting
> >
> > # pvs --units s -o+pv_ba_start,seg_start,seg_start_pe,pvseg_start
> >   PV VGFmt  Attr PSizePFree BA Start Start
> > Start Start
> >   /dev/sdb   VGrecycle lvm2 a--  11720982528S0S   0S
> > 2097152S 1 0
> >   /dev/sdb   VGrecycle lvm2 a--  11720982528S0S   0S
> > 2097152S 1 1
> >   /dev/sdb   VGrecycle lvm2 a--  11720982528S0S 0S 0S 0  5309
> >   /dev/sdb   VGrecycle lvm2 a--  11720982528S0S 0S 0S 0  5587
> >   /dev/sdb   VGrecycle lvm2 a--  11720982528S0S 0S 0S 0  5588
> >
> > i mean like this:
> >
> > # pvs --units s -o+pv_ba_start,seg_start,seg_start_pe,pvseg_start
> >   PV VGFmt  Attr PSizePFree pv_ba_start
> > seg_start seg_start_pe pvseg_start
> >   /dev/sdb   VGrecycle lvm2 a--  11720982528S0S  0S
> > 2097152S1   0
> >   /dev/sdb   VGrecycle lvm2 a--  11720982528S0S  0S
> > 2097152S1   1
> >   /dev/sdb   VGrecycle lvm2 a--  11720982528S0S 0S 0S0
> >5309
> >   /dev/sdb   VGrecycle lvm2 a--  11720982528S0S 0S 0S
> > 05587
> >   /dev/sdb   VGrecycle lvm2 a--  11720982528S0S 0S 0S
> > 05588
> >
> > furthermore, it's a little bit weird that some columns are printed
> > by default when using -o. is there an easier way to remove those
> > besides explicitly removing them one by one with several -o options ?
> > "-o-opt1,-opt2,..." doesn't work
> >
> > # pvs --units s -o+pv_ba_start,seg_start,pvseg_start,seg_start_pe
> > -o-vg_name -o-pv_name -o-pv_fmt -o-attr -o -pv_size -o -pv_free
> >   BA Start StartStart Start
> > 0S 2097152S 0 1
> > 0S 2097152S 1 1
> > 0S   0S  5309 0
> > 0S   0S  5587 0
> > 0S   0S  5588 0
> >
> > roland
> >
___
linux-lvm mailing list
linux-lvm@redhat.com
https://listman.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/


Re: [linux-lvm] Need information

2022-02-22 Thread Marian Csontos
Check the `dmsetup udevcookies` output. lvcreate may be waiting for a udev
operation which timed out - check the syslog. You can forcefully complete a
transaction with `dmsetup udevcomplete `
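Roughly (the cookie value below is a placeholder taken from the udevcookies listing):

# dmsetup udevcookies              # list outstanding cookies
# dmsetup udevcomplete <cookie>    # release the command waiting on that cookie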

On Tue, Feb 22, 2022 at 11:06 AM Zdenek Kabelac 
wrote:

> On 21. 02. 22 at 12:00, Gk Gk wrote:
> > Hi,
> >
> > I work on cloud platforms and linux. I need some guidance with an issue
> > observed with LVM.
> >
> > How to forcefully kill a running lvm command like lvcreate ? I tried
> > using kill -9 but it is not killing it. Only a reboot seems to do the
> > trick. How to forcefully kill the lvcreate process or, for that matter,
> > any running lvm commands ?
> >
> > Also how to check the progress of an lvm command like lvcreate or
> > lvremove ?
>
> Hi
>
> 1. lvm2 should not freeze - it would be some horrible bug.
>
> 2. You can't kill any userspace app which is blocked in an uninterruptible
> kernel function - kill always waits until the process gets back to its
> signal handler (e.g. opening a suspended device might be one such
> operation)
>
> 3. You need to tell us more about what you are doing - as without this
> there is no way to give meaningful help.
>
>
> Regards
>
> Zdenek
>
___
linux-lvm mailing list
linux-lvm@redhat.com
https://listman.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/

Re: [linux-lvm] LVM performance vs direct dm-thin

2022-02-01 Thread Marian Csontos
On Sun, Jan 30, 2022 at 11:17 PM Demi Marie Obenour <
d...@invisiblethingslab.com> wrote:

> On Sun, Jan 30, 2022 at 04:39:30PM -0500, Stuart D. Gathman wrote:
> > Your VM usage is different from ours - you seem to need to clone and
> > activate a VM quickly (like a vps provider might need to do).  We
> > generally have to buy more RAM to add a new VM :-), so performance of
> > creating a new LV is the least of our worries.
>
> To put it mildly, yes :).  Ideally we could get VM boot time down to
> 100ms or lower.
>

Out of curiosity, is snapshot creation the main obstacle to booting a VM in
under 100ms? Does Qubes OS use tweaked Linux distributions to achieve the
desired boot time?

Back to business. Perhaps I missed an answer to this question: Are the
Qubes OS VMs throwaway?  Throwaway in the sense that many containers are -
just a runtime which can be "easily" reconstructed. If so, you can
ignore the safety belts and try to squeeze out more performance by
sacrificing (meta)data integrity.

And the answer to that question seems to be both Yes and No. The classic
pets vs cattle split.

As I understand it, except for the system VMs, there are at least two kinds
of user domains, and these have different requirements:

1. few permanent pet VMs (Work, Personal, Banking, ...), in Qubes OS called
AppVMs,
2. and many transient cattle VMs (e.g. for opening an attachment from
email, browsing the web, or batch processing of received files) called
Disposable VMs.

For AppVMs, there are only a "few" of those and they are running most of the
time, so start time may be less important than data safety. Creation is
certainly a once-in-a-while operation, so I would say use LVM for these. And
where snapshots are not required, use plain linear LVs - one less thing
which could go wrong. However, AppVMs are created from Template VMs, so
snapshots seem to be part of the system. But the data may be on linear LVs
anyway, as these are not shared and are the most important part of the
system. And you can still use old-style snapshots for backing up the data
(and by backup I mean snapshot, copy, delete snapshot. Not a long-term
snapshot. And definitely not multiple snapshots).

Now I realize there is a third kind of user domain - Template VMs.
Similarly to AppVMs, there are only a few of those, and creating them
requires downloading an image, upgrading the system on an existing template,
or even installing the system, so any LVM overhead is insignificant for
these. Use thin volumes.

For the Disposable VMs it is the creation + startup time which matters. Use
whatever is the fastest method. These are created from Template VMs too.
What LVM/DM has to offer here is an external origin. So the templates
themselves could be managed by LVM, and Qubes OS could use them as external
origins for Disposable VMs using device mapper directly. These could be held
in a disposable thin pool which can be reinitialized from scratch on host
reboot, after a crash, or on a problem with the pool. As a bonus this would
also address the absence of thin pool shrinking.
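To illustrate the LVM side of this, a minimal sketch with placeholder VG/LV
names (note the external origin LV has to be read-only or inactive):

# lvcreate --type thin-pool -L 100G -n dispool vg          # throw-away pool
# lvchange -p r vg/template_root                           # origin must be read-only
# lvcreate -s vg/template_root --thinpool vg/dispool -n dispvm1-root

Qubes OS would presumably do the equivalent through device mapper directly
(a "create_thin" message to the pool plus a thin table line that names the
external origin device), but the shape is the same.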

I wonder if a pool of ready-to-use VMs could solve some of the startup
time issues - keep $POOL_SIZE VMs (all using LVM) ready, just inject the
data into one of them when needed, and prepare a new one asynchronously. So
you could have, to some extent, both the quick start and data safety as a
solution for the hypothetical third kind of domain requiring them - e.g. a
Disposable VM spawned to edit a file from a third party, where you want to
keep the state across a reboot or a system crash.
___
linux-lvm mailing list
linux-lvm@redhat.com
https://listman.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/

Re: [linux-lvm] LVM PV UUID problem

2020-10-12 Thread Marian Csontos

On 10/9/20 5:39 PM, Mark H. Wood wrote:

On Fri, Oct 09, 2020 at 11:18:38AM -0400, Digimer wrote:

On 2020-10-09 10:43 a.m., Zdenek Kabelac wrote:

On 09. 10. 20 at 15:12, Digimer wrote:

Hi all,

    I'm storing LVM information in a postgres database, and wanted to use
the UUID from the PVs / VGs / LVs as the UUIDs in the database. I
noticed when I tried to do this that postgres complained that the UUID
was not valid. I checked with an online UUID validator
(https://www.freecodeformat.com/validate-uuid-guid.php) and it also
reported as invalid.

Example;


# pvdisplay | grep UUID
    PV UUID   jLkli2-dEXx-5Y8n-pYlw-nCcy-9dFL-3B6jU3


    Is this a known issue?



Hi

At the time of lvm2 devel I believe UUID was just a unique identifier,
later some effort to standardize it came in.

But really you should NOT be using basically internal unique identifiers
in your DB - these are internal to DM/LVM and might be changed at
any time to something else.

Users are supposed to use 'vgname' & 'lvname' - so there you can put those
valid UUID sequences - although human readable strings are always nicer ;)

Zdenek


The trick is that VG and LV names can change, so I wanted to use the
(so-called) UUID as a way to keep track of a given item through name
changes.

I suppose I'll have to rework to use the internal "UUIDs" as more like
serial numbers instead...


Well, if we are stuck with non-standard "UUID"s, at least they are
meant to be Universally Unique, so they can be treated as unique
opaque string tokens.  Or you might find a library function that can
return the unencoded binary value and you can encode it as you please.
However, the issue of persistence remains.

FWIW I think it's quite reasonable for someone to want immutable
unique identifiers for distinct objects such as LVM PVs, especially
when the "unique identifier" part is already available and quite
visible.


...and then there is lvconvert, which takes an existing LV and creates a new
object with the same name, while the original LV is used as a "component"
(e.g. a raid1 leg, or a snapshot origin).


So now there are two objects - the new one with the same name, and the
original LV - which one should keep the original UUID?


Just consider LVM UUIDs as MUST BE unique IDs.

Or better, MUST BE for lvm to work properly, and SHOULD BE in reality (as
duplicates occur e.g. when SAN snapshots are visible to the system and
you run into trouble, and so there are filters, ...)
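If all you need is the IDs as opaque tokens, the report commands can print
them directly, e.g.:

# pvs --noheadings -o pv_name,pv_uuid
# lvs --noheadings -o vg_name,lv_name,lv_uuid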


-- Martian







___
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/



Re: [linux-lvm] probable lvm thin_pool exhaustion

2020-03-18 Thread Marian Csontos

On 3/11/20 6:24 PM, mai...@maiski.net wrote:


Hello all,

i am a total newbie besides the general knowledge of lvm.
With this disclaimer written I have the following problem,
which may definitely need some expert knowledge of lvm, because i couldn't
find solutions online for now :/

I am booting my system (in my case it is Qubes, but I suppose that does not
matter at this point)

and after entering my luks password get to the dracut emergency shell.
"Check for pool qubes-dom/pool00 failed (status:1). Manual repair 
required!"

The only active lv is qubes_dom0/swap.
All the others are inactive.

step 1:
lvm vgscan vgchange -ay
lvm lvconvert --repair qubes_dom0/pool00
Result:
using default stripesize 64.00 KiB.
Terminate called after throwing an instance of 'std::runtime_error'
what(): transaction_manager::new_block() couldn't allocate new block
Child 7212 exited abnormally
Repair of thin metadata volume of thin pool qubes_dom0/pool00 failed 
(status:1). Manual repair required!


On first glance this looks like the problem reported in Bug 1763895
- thin_restore fails with transaction_manager::new_block() couldn't
allocate new block:


https://bugzilla.redhat.com/show_bug.cgi?id=1763895



step 2:
since i suspect that my lvm is full (though it does mark 15 g as free)


IIUC it is the metadata which is full, not the data.
What's the size of the below _tmeta volume?

What's `thin_check --version` and `lvm version` output?
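For reference, something like the following would show the _tmeta size and
usage plus the versions (VG name taken from your mail, adjust as needed):

# lvs -a -o lv_name,lv_size,data_percent,metadata_percent qubes_dom0
# thin_check --version
# lvm version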

-- Martian



i tried the following changes in the /etc/lvm/lvm.conf
thin_pool_autoextend_threshold = 80
thin_pool_autoextend_percent = 2 (Since my pvs output gives PSize:
465.56g PFree 15.78g, I set this to 2% to be overly cautious not to
extend beyond the 15 G marked as free, since idk)
auto_activation_volume_list = to hold the group, root, pool00, swap and
a vm that I would like to delete to free some space

volume_list = the same as auto_activation_volume_list

and tried step 1 again, did not work, got the same result as above with 
qubes_swap as active only


step 3 tried
lvextend -L+1G qubes_dom0/pool00_tmeta
Result:
metadata reference count differ for block xx, expected 0, but got 1 ...
Check for pool qubes-dom/pool00 failed (status:1). Manual repair required!


Since I do not know my way around lvm, what do you think, would be the 
best way out of this?

Adding another external PV? migrating to a bigger PV?
I did not play with backup or archive out of fear of losing any un-backed-up
data, of which there happens to be a bit :|

Any help will be highly appreciated!

Thanks in advance,
m




___
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/



Re: [linux-lvm] Best way to run LVM over multiple SW RAIDs?

2019-12-09 Thread Marian Csontos

On 12/9/19 11:26 AM, Daniel Janzon wrote:





The origin of my problem is indeed the poor performance of RAID5,
which maxes out the single core the driver runs on. But if I accept that
as a given, the next problem is LVM striping. Since I do get 10x better


What stripesize was used for the striped LV? IIRC the default is 64k.

IIUC you are serving mostly large files. I have no numbers, and no HW to
test the hypothesis, but using a larger stripesize could help here, as it
would still split the load across multiple RAID5 volumes while not splitting
the IOs too early into too many small requests.


-- Marian

___
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/



Re: [linux-lvm] Which lvm2 code branches are important?

2019-07-24 Thread Marian Csontos

On 7/11/19 4:25 AM, Gang He wrote:

Hello Marian,

Thank for your reply.
About the detailed changes for V2.03, where can we find the related
documents or an external link?


If by that you mean the WHATS_NEW{,_DM} files, these were overwritten every
time by the latest 2.02 or 2.03 release.


I have fixed that now, and there are now files for both 2.02 and 2.03
available on sourceware.org:


ftp://sources.redhat.com/pub/lvm2/WHATS_NEW-2.02
ftp://sources.redhat.com/pub/lvm2/WHATS_NEW_DM-2.02

ftp://sources.redhat.com/pub/lvm2/WHATS_NEW-2.03
ftp://sources.redhat.com/pub/lvm2/WHATS_NEW_DM-2.03

WHATS_NEW is now a link to the newest branch (2.03 ATM):

ftp://sources.redhat.com/pub/lvm2/WHATS_NEW
ftp://sources.redhat.com/pub/lvm2/WHATS_NEW_DM

-- Marian




Thanks
Gang

-Original Message-
From: linux-lvm-boun...@redhat.com [mailto:linux-lvm-boun...@redhat.com] On 
Behalf Of Marian Csontos
Sent: July 10, 2019 20:48
To: LVM general discussion and development ; Gang He 

Subject: Re: [linux-lvm] Which lvm2 code branches are important?

On 7/10/19 8:42 AM, Gang He wrote:

Hello List,

After you clone the code from git://sourceware.org/git/lvm2.git, you can find
lots of remote code branches.
But which code branches are important for third-party users/developers?
That is, which code branches should we monitor?
For example,
Which code branches are main (or long-active) code branches?
Which code branches are used for which lvm2 products (big versions)?


master - 2.03 branch, new features land here,
stable-2.02 - legacy 2.02 branch, bug fixes mostly.


What are the main differences between different lvm2 products? E.g. some 
features are added/removed.


2.02 vs 2.03:

- dropped lvmetad and clvmd,
- added handling of writecache and VDO targets.




Thanks a lot.
Gang




___
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/

Re: [linux-lvm] Which lvm2 code branches are important?

2019-07-10 Thread Marian Csontos

On 7/10/19 8:42 AM, Gang He wrote:

Hello List,

After you clone the code from git://sourceware.org/git/lvm2.git, you can find
lots of remote code branches.
But which code branches are important for third-party users/developers?
That is, which code branches should we monitor?
For example,
Which code branches are main (or long-active) code branches?
Which code branches are used for which lvm2 products (big versions)?


master - 2.03 branch, new features land here,
stable-2.02 - legacy 2.02 branch, bug fixes mostly.
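
So, roughly (a sketch, assuming a plain git checkout):

# git clone git://sourceware.org/git/lvm2.git
# cd lvm2
# git checkout master        # 2.03 development
# git checkout stable-2.02   # legacy 2.02 fixes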


What are the main differences between different lvm2 products? E.g. some 
features are added/removed.


2.02 vs 2.03:

- dropped lvmetad and clvmd,
- added handling of writecache and VDO targets.




Thanks a lot.
Gang




___
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/


Re: [linux-lvm] "Unknown feature in status" message when running lvs/lvdisplay against cached LVs

2019-07-08 Thread Marian Csontos

On 7/5/19 11:20 AM, Gang He wrote:

Hi Guys,

I use lvm2-2.02.180, and I get an error message when running lvs/lvdisplay
against cached LVs.
e.g.
linux-v5ay:~ # lvs
   Unknown feature in status: 8 1324/8192 128 341/654272 129 151 225 1644 0 341 
0 3 metadata2 writethrough no_discard_passdown 2 migration_threshold 2048 smq 0 
rw -
   LV   VG Attr   LSize  Pool   Origin   Data%  Meta%  Move Log 
Cpy%Sync Convert
   home system -wi-ao 17.99g
   root system Cwi-aoC--- 20.00g [lvc_root] [root_corig] 0.05   16.16   
0.00
   swap system -wi-ao  2.00g


You need this patch:

https://sourceware.org/git/?p=lvm2.git;a=commit;h=adf9bf80a32500b45b37eb24b98fa7c2c933019e

The kernel introduced a new feature which is not recognized by lvm versions
released before that kernel (which is anything up to the latest 2.02.185).
The fix will be included in the next lvm release.


-- Marian



The bug can be reproduced reliably with the steps below.
Have a (virtual) machine with 2 disk drives.
Install the OS with LVM as the root volume.
Then, add the second disk after the OS is installed.
# pvcreate /dev/sdb
# vgextend system /dev/sdb
# lvcreate --type cache-pool -l 100%FREE -n lvc_root system /dev/sdb
# lvconvert --type cache --cachepool system/lvc_root system/root
then run `lvs` or `lvdisplay` command to trigger the issue


Thanks
Gang




___
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/


Re: [linux-lvm] how to copy a snapshot, or restore snapshot without deleting it

2019-01-03 Thread Marian Csontos

On 1/3/19 5:46 AM, Davis, Matthew wrote:

Hi,

I want to restore a snapshot without deleting the snapshot.


Hi, I think this should be possible using thin snapshots - snapshot the
snapshot you want to restore, and merge the snapshot - maybe not
perfect, but it is at least possible. This is fast, as there is no need
to write huge amounts of data; it is just switching the tree's root
(or very close to it).
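
A rough sketch of the cycle (names are placeholders, vg/root is assumed to
already be a thin LV, and the merge of an in-use root completes on the next
activation/reboot):

# lvcreate -s -n restore vg/root          # the restore point
  ... experiment ...
# lvcreate -s -n restore2 vg/restore      # copy the restore point before merging
# lvconvert --merge vg/restore            # roll root back; "restore" is consumed
# lvrename vg restore2 restore            # the copy becomes the new restore point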


Perhaps a more convenient alternative is to use the boom boot manager
(boom-boot package, available on rhel8+ and fedora27+). Not sure it is
available in ubuntu/debian.


NOTE:

*Thin pools*

When using thin pools make sure you have enough space and you do not run 
out of space in data and metadata devices, or you risk serious trouble.


1. you must enable monitoring and a threshold for extending the thin pool.
2. using recent lvm2 2.02.* releases is recommended.

*Boom*

Boom also requires a minor change in the initramfs to pass the `-K` option to
lvchange to allow activation of volumes with the skip-activation flag.


-- Martian




My use case is that I'm experimenting with a lot of different drivers, kernel 
modules, and file modifications all over my machine.
I want to
1. take a snapshot of the working system
2. make changes
3. restore the snapshot (` sudo lvconvert --merge /dev/ubuntu-vg/$SNAPSHOT` 
then reboot)
4. make new changes
5. restore to the snapshot again

The problem is that step 3 deletes the snapshot, so step 5 fails.

My current workaround is:
1. take a snapshot of the working system
2. make changes
3. restore the snapshot (` sudo lvconvert --merge /dev/ubuntu-vg/$SNAPSHOT` 
then reboot)
4. Wait 1.5 hours, without making any changes to the machine
5. Take a new snapshot, with the same name as the original
6. make new changes
7. restore to the snapshot

This is not great because:
* I sometimes forget to do step 5
* I can't take a snapshot of the volume while it is still merging. This takes 
1.5 hours. I want to be able to restore my snapshots multiple times per day


Is there a flag I can add to `lvconvert` to make it not delete the snapshot?
Alternatively, is there a way I can make a copy of the snapshot before I 
restore it?

It looks like someone else asked this question 10 years ago.
https://www.redhat.com/archives/linux-lvm/2008-November/msg0.html
Has this problem been solved since then?

Thanks,
Matt Davis

Technical Specialist
Telstra | Product Strategy & Innovation - Telstra Labs | Programmable 
Infrastructure
E  matthew.davi...@team.telstra.com






___
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/

Re: [linux-lvm] How to upgrade LVM2/CLVM to v2.02.180 from old versions

2018-08-05 Thread Marian Csontos

On 08/03/2018 11:40 AM, Gang He wrote:

Hello List,

I want to upgrade LVM2/CLVM to v2.02.180 from v2.02.120, but there are some 
problems which I want to confirm with you guys.
1) How do we migrate /etc/lvm/lvm.conf? Since the existing configuration file
does not include some new attributes,
could you help figure out which new key attributes should be
considered during the upgrade installation?


Have a look at lvmconfig command.

`lvmconfig --type diff`[1] can help you figure out what you changed.

And `lvmconfig --type new --sinceversion 2.02.120`[2] might be what you 
are looking for.


[1]: Introduced in 2.02.119 so this should work even on your old 
installation.

[2]: New in 2.02.136.



2) Is lvmpolld necessary in LVM2/CLVM v2.02.180? I can find the related files
in the installation file list
/sbin/lvmpolld
/usr/lib/systemd/system/lvm2-lvmpolld.service
/usr/lib/systemd/system/lvm2-lvmpolld.socket
Should this daemon be enabled by default? If we disable this daemon, will
LVM2/CLVM related features (e.g. pvmove) be affected or not?


pvmove should work mostly fine even without lvmpolld; the testsuite runs
both with and without lvmpolld.


I think the manpage of lvmpolld is good start:

   The purpose of lvmpolld is to reduce the number of spawned
   background processes per otherwise unique polling operation.
   There should be only one. It also eliminates the possibility
   of unsolicited termination of background processes by
   external factors.

Process receiving SIGHUP was one of the "external factors".



3) any other places (e.g. configuration files, binary files, features, etc.), 
which should be considered during the upgrade?


The lvm1 and pool formats were removed in 2.02.178. There is a branch adding
them back if you need them - *2018-05-17-put-format1-and-pool-back*. At
least it still applies almost cleanly :-)


-- Martian




Thanks a lot.
Gang






___
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/

Re: [linux-lvm] How to get a list of all logical groups or volumes using dbus API

2018-06-21 Thread Marian Csontos

On 06/21/2018 09:26 AM, Michael Lipp wrote:

Hi,

I searched, but I cannot find the "root hook". Using the Manager
interface, I can look up an LV if I have some id (LookUpByLvmId). But how
can I achieve the same results as vgs/lvs using the dbus API? I *could*
get the list of paths and pattern match them, but IMHO this would be a hack.


I think `object_manager.GetManagedObjects()` is one of the supported ways
(used by the test suite).


Not sure there is a simpler way.
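
From a shell that would be something like the following (assuming the default
lvmdbusd bus name and base object path), filtering the returned objects by
their interface (Vg, Lv, Pv, ...):

# busctl call --system com.redhat.lvmdbus1 /com/redhat/lvmdbus1 \
    org.freedesktop.DBus.ObjectManager GetManagedObjects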





  - Michael







___
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/

Re: [linux-lvm] pvmove does not work at all with version 2.02.177(2)

2018-06-11 Thread Marian Csontos

On 06/11/2018 08:13 AM, Gang He wrote:

Hi Martian,


On 2018/5/30 at 18:37, in message

<2397dd2b-deef-2bf2-47ca-51fb6f880...@redhat.com>, Marian Csontos
 wrote:

On 05/30/2018 11:23 AM, Gang He wrote:

Hello List,

As you know, I previously reported that lvcreate could not create a mirrored LV;

the root cause was that the configure option "--enable-cmirrord" was missing.

Now, I have encountered another problem: pvmove does not work at all.
The detailed information/procedure is as below,
sle-nd1:/ # pvs
PV VG  Fmt  Attr PSize   PFree
/dev/sda1  cluster-vg2 lvm2 a--  120.00g 120.00g
/dev/sda2  cluster-vg2 lvm2 a--   30.00g  20.00g
/dev/sdb   cluster-vg2 lvm2 a--   40.00g  30.00g
sle-nd1:/ # vgs
VG  #PV #LV #SN Attr   VSize   VFree
cluster-vg2   3   2   0 wz--nc 189.99g 169.99g
sle-nd1:/ # lvs
LV   VG  Attr   LSize  Pool Origin Data%  Meta%  Move Log

Cpy%Sync Convert

test-lv2 cluster-vg2 -wi-a- 10.00g
sle-nd1:/ # lsblk
NAME MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda8:00  160G  0 disk
├─sda1 8:10  120G  0 part
├─sda2 8:20   30G  0 part
└─sda3 8:30   10G  0 part
sdb8:16   0   40G  0 disk
└─cluster--vg2-test--lv2 254:00   10G  0 lvm
vda  253:00   40G  0 disk
├─vda1   253:104G  0 part [SWAP]
├─vda2   253:20 23.6G  0 part /
└─vda3   253:30 12.4G  0 part /home

sle-nd1:/ # pvmove -i 5 -v /dev/sdb /dev/sda1
  Executing: /sbin/modprobe dm-mirror
  Executing: /sbin/modprobe dm-log-userspace
  Wiping internal VG cache
  Wiping cache of LVM-capable devices
  Archiving volume group "cluster-vg2" metadata (seqno 19).
  Creating logical volume pvmove0
  Moving 2560 extents of logical volume cluster-vg2/test-lv2.
Increasing mirror region size from 0 to 8.00 KiB
Error locking on node a431232: Device or resource busy
Failed to activate cluster-vg2/test-lv2

sle-nd1:/ # lvm version
LVM version: 2.02.177(2) (2017-12-18)
Library version: 1.03.01 (2017-12-18)
Driver version:  4.37.0
Configuration:   ./configure --host=x86_64-suse-linux-gnu

--build=x86_64-suse-linux-gnu --program-prefix= --disable-dependency-tracking
--prefix=/usr --exec-prefix=/usr --bindir=/usr/bin --sbindir=/usr/sbin
--sysconfdir=/etc --datadir=/usr/share --includedir=/usr/include
--libdir=/usr/lib64 --libexecdir=/usr/lib --localstatedir=/var
--sharedstatedir=/var/lib --mandir=/usr/share/man --infodir=/usr/share/info
--disable-dependency-tracking --enable-dmeventd --enable-cmdlib
--enable-udev_rules --enable-udev_sync --with-udev-prefix=/usr/
--enable-selinux --enable-pkgconfig --with-usrlibdir=/usr/lib64
--with-usrsbindir=/usr/sbin --with-default-dm-run-dir=/run
--with-tmpfilesdir=/usr/lib/tmpfiles.d --with-thin=internal
--with-device-gid=6 --with-device-mode=0640 --with-device-uid=0
--with-dmeventd-path=/usr/sbin/dmeventd
--with-thin-check=/usr/sbin/thin_check --with-thin-dump=/usr/sbin/thin_dump
--with-thin-repair=/usr/sbin/thin_repair --enable-applib
--enable-blkid_wiping

--enable-cmdlib --enable-lvmetad --enable-lvmpolld --enable-realtime

--with-default-locking-dir=/run/lock/lvm --with-default-pid-dir=/run
--with-default-run-dir=/run/lvm --with-clvmd=corosync --with-cluster=internal
--enable-cmirrord --enable-lvmlockd-dlm


> > So, I want to know if this problem is also a configuration problem when
> 
> building lvm2, or is this problem caused by the source code?

Hi Gang, it is an issue with the codebase, where exclusive activation
was required where it should not be.

You will need to backport some additional patches - see the CentOS SRPM. And
I should do the same for Fedora.

Could you help to paste the links, which are related to this back-port?


Is this good enough?

http://vault.centos.org/7.5.1804/os/Source/SPackages/lvm2-2.02.177-4.el7.src.rpm





Thanks a lot.
Gang



-- Martian



Thanks
Gang




___
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/

Re: [linux-lvm] inconsistency between thin pool metadata mapped_blocks and lvs output

2018-05-11 Thread Marian Csontos

On 05/11/2018 10:21 AM, Joe Thornber wrote:

On Thu, May 10, 2018 at 07:30:09PM +, John Hamilton wrote:

I saw something today that I don't understand and I'm hoping somebody can
help.  We had a ~2.5TB thin pool that was showing 69% data utilization in
lvs:

# lvs -a
   LVVG   Attr   LSize  Pool Origin Data%
Meta%  Move Log Cpy%Sync Convert
   my-pool myvg twi-aotz--  2.44t 69.04  4.90
   [my-pool_tdata] myvg Twi-ao  2.44t
   [my-pool_tmeta] myvg ewi-ao 15.81g


Is this everything? Is this a pool used by docker, which does not (did 
not) use LVM to manage thin-volumes?



However, when I dump the thin pool metadata and look at the mapped_blocks
for the 2 devices in the pool, I can only account for about 950GB.  Here is
the superblock and device entries from the metadata xml.  There are no
other devices listed in the metadata:


   
   

That first device looks like it has about 16GB allocated to it and the
second device about 950GB.  So, I would expect lvs to show somewhere
between 950G-966G. Is something wrong, or am I misunderstanding how to read
the metadata dump?  Where is the other 700 or so GB that lvs is showing
used?


The non-zero snap_time suggests that you're using snapshots. In which case it
could just be that there is common data shared between volumes that is
getting counted more than once.

You can confirm this using the thin_ls tool and specifying a format line that
includes EXCLUSIVE_BLOCKS or SHARED_BLOCKS.  LVM doesn't take shared blocks
into account because it would have to scan all the metadata to calculate
what's shared.


LVM just queries DM and displays whatever that provides. You can see that
in the `dmsetup status` output: there are two pairs of '/'-separated
entries - the first is metadata usage (USED_BLOCKS/ALL_BLOCKS), the second
data usage (USED_CHUNKS/ALL_CHUNKS).
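
For a thin pool the line looks roughly like this (field placeholders, not
real values):

# dmsetup status <vg>-<pool>-tpool
0 <sectors> thin-pool <transaction_id> <used_meta>/<total_meta> <used_data>/<total_data> ...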


So the error lies somewhere between dmsetup and kernel.

What is kernel/lvm version?
Is thin_check_executable configured in lvm.conf?

-- Martian

___
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/


Re: [linux-lvm] unable to exclude LVs using global_filter

2018-01-02 Thread Marian Csontos

On 01/02/2018 04:35 PM, Gordon Messmer wrote:

On 01/02/2018 03:03 AM, Marian Csontos wrote:
Filters accept any device if any of its "names" (all symbolic links)
is matched by an "a" pattern ("a|.*/|" in your case) and matches no
previous "r" pattern


I don't follow you.  The name being processed in that log *does* match a 
previous r pattern.
I do not think that log shows what is processed. LVM displays device
names according to a different option - preferred_names.


If lvmetad is in use, which by default it is, run `pvscan --cache
$DEV_MAJOR:$DEV_MINOR` - that should be more helpful WRT which devices
are scanned and accepted as Physical Volumes.




for example anything in /dev/mapper/ is accepted. 


^ THIS! In your case the /dev/mapper/vm_* link IS accepted by the 
"a|.*/|" in global_filter. And /dev/disk/by-*/* too. And maybe others...





Yes, I'd considered that might be an issue, but that's not the block 
device name that the logs indicate is being used.  A path that I've 
specifically rejected is being processed.  If a second path to the block 
device might be processed, then I can see the need to make additional 
changes, but I can't solve that problem without understanding the basic 
logic of the filter system.


The documentation in lvm.conf says "The first regex in the list to match 
the path is used, producing the 'a' or 'r' result for the device." but 
that doesn't seem to be correct.


IMHO the documentation is confusing if not outright incorrect. I hate
that and IMHO we should fix how the filtering works, but that would be
an incompatible change, and those are not loved. :-(


How it actually works is: if ANY path is accepted, the device is. In 
upstream the lvm.conf also says (emphasis mine):


"When multiple path names exist for a block device, if *any path* name 
matches an 'a' pattern before an 'r' pattern, then the device is 
accepted. If all the path names match an 'r' pattern first, then the 
device is rejected."


Maybe we should replace the "device" in the last sentence with "path" so
it would say:


"The first regex in the list to match the path is used, producing the 
'a' or 'r' result for *that path*."


And:

"When multiple path names exist for a block device"

could be safely discarded - as it is true most, if not all, of the time
(or replaced by "Keep in mind devices nowadays are linked from many
places in the /dev tree, and")


So we would have the following:

"If any path name matches an 'a' pattern before an 'r' pattern, then the 
device is accepted."


I am not a native speaker, but to me that is consistent with the current
behavior, which is unlikely to change. I will send a patch to see if it
is opposed by anyone...


-- Martian










___
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/

Re: [linux-lvm] unable to exclude LVs using global_filter

2018-01-02 Thread Marian Csontos

On 01/02/2018 07:47 AM, Gordon Messmer wrote:
I'd like to avoid scanning LVs, and I thought I'd set global_filter 
appropriately.  The following logs show global_filter set to reject 
^/dev/VolGroup/vm_, but the system is scanning 
/dev/VolGroup/vm_wiki_data.  Any hints as to what I'm doing wrong?




Dec 06 13:08:37 kvm-test.private lvm[532]: Setting devices/global_filter 
to global_filter = [ "r|^/dev/VolGroup/vm_|", "r|^/dev/dm-|", "a|.*/|" ]


Hi Gordon, remove the last component in the filter: 'a|.*/|'.

Filters accept any device if any of its "names" (all symbolic links) is
matched by an "a" pattern ("a|.*/|" in your case) and matches no previous
"r" pattern - for example anything in /dev/mapper/ is accepted.


When used as a blacklist, the filter(s) must contain only "r" patterns.

BTW it is usually better to use a whitelist: list "a" patterns for all
disks you wish to accept and add "r|.*|" at the end.
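
For example (a sketch only - the device patterns are placeholders, list your
real PVs there):

global_filter = [ "a|^/dev/sda2$|", "a|^/dev/sdb1$|", "r|.*|" ]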


-- Martian

___
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/

Re: [linux-lvm] addendum [was: lvmdbusd failure(s) at ubuntu ]

2017-12-11 Thread Marian Csontos

On 12/11/2017 03:27 AM, Oliver Rath wrote:

Hi list,

trying to compile lvm2 with these flags:

./configure --enable-lvmetad --enable-lvmpolld --enable-dmfilemapd
--enable-applib --enable-cmdlib --enable-dbus-service
--enable-notify-dbus --enable-python3_bindings

make
..

make install
..
warning: install_lib: 'build/lib.linux-x86_64-3.6' does not exist -- no
Python modules to install


Oliver Rath -vv
Try being more verbose, please.

Removing all context is not helpful at all.

make install for lvmdbusd should run the py-compile script found in the
autoconf directory, which should write bytecode *.py[co] to that directory.


Maybe:

- it did not run,
- or it used a different directory,
- or ...???



What does this mean?


No idea. Not a very helpful answer, but if you post the full output from
`configure ...` and `make install`, you might get a better one.


-- Martian



Tfh!

Oliver







___
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/


Re: [linux-lvm] stripped LV with segments vs one segment

2017-04-11 Thread Marian Csontos

On 04/10/2017 04:27 PM, lejeczek wrote:



On 10/04/17 13:27, Marian Csontos wrote:

On 04/10/2017 01:16 PM, lejeczek wrote:



I had a 3-stripe LV, you know, three PVs, and wanted the LV to have 4
stripes, i.e. wanted to add a 4th PV; you can see it from the lvdisplay above.


What you want is a "reshape", not an extend. This was committed as a RAID
feature in 2.02.169, but it is still somewhat experimental. You would
need to convert the striped LV to RAID0 (aka takeover) and then reshape.


converting from stripe to raid0 would not preserve stripe sizes? is this
correct?


It should keep the stripe size, and if it does not, it is a bug.


nor would it:

-I/--stripesize not allowed for LV dellH200.InternalB/0 when converting
from striped to raid0.


As it says, the option is simply not allowed. It would also be
meaningless - one cannot change the stripe size while converting striped to
raid0. At least not in RHEL-7.3 (lvm2-2.02.166). This is supposed to
work in upstream/2.02.169, but keep in mind that is a new feature, and
it is altering data, so better keep a working backup.




  LVM version: 2.02.166(2)-RHEL7 (2016-11-16)
  Library version: 1.02.135-RHEL7 (2016-11-16)
  Driver version:  4.35.0


___
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/


Re: [linux-lvm] stripped LV with segments vs one segment

2017-04-10 Thread Marian Csontos

On 04/10/2017 01:16 PM, lejeczek wrote:



I had a 3-stripe LV, you know, three PVs, and wanted the LV to have 4
stripes, i.e. wanted to add a 4th PV; you can see it from the lvdisplay above.


What you want is a "reshape", not an extend. This was committed as a RAID
feature in 2.02.169, but it is still somewhat experimental. You would
need to convert the striped LV to RAID0 (aka takeover) and then reshape.
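
Roughly (an untested sketch only - check lvmraid(7) on 2.02.169+, the
metadata variant raid0_meta may be required for reshaping, and keep a backup):

# lvconvert --type raid0_meta dellH200.InternalB/0      # takeover
# lvconvert --stripes 4 dellH200.InternalB/0 /dev/sdf   # reshape to 4 stripes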



I tried these and each time it errored:
$ lvextend -v -i 4 -l+100%free dellH200.InternalB/0
$ lvextend -v -i 4 -l+100%pv dellH200.InternalB/0 /dev/sdf
$ lvextend -i 4 -l 100%vg dellH200.InternalB/0



With -i 4, lvextend would add another striped segment with 4 devices
after the first segment - and IIUC you do not have enough space for that.




I did have only one segment, an LV spanning 100%vg with 100% of each PV.
And the above is the result of: $ lvextend -i 1 -l +100%free
dellH200.InternalB/0 /dev/sdf

so, either I'm not getting it right or a striped LV cannot be extended
this way - then: is there a performance penalty with segments like the above
vs one striped segment?









___
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/


Re: [linux-lvm] Underlying physical volume resized?!

2016-11-17 Thread Marian Csontos

On 11/15/2016 03:27 PM, Gionatan Danti wrote:

Dear all,
I have a question about a warning message LVM is showing. I am using
CentOS 6.8 x86_64. Here is my pvs/vgs/lvs configuration:

pvs:
  PV VG   Fmt  Attr PSize   PFree
  /dev/sda2  vg   lvm2 a--u   9,51g0
  /dev/sda3  vg   lvm2 a--u 110,00g0
  /dev/sdb1  vg   lvm2 a--u 120,00g 4,00g

vgs:
  VG   #PV #LV #SN Attr   VSize   VFree
  vg 3   2   0 wz--n- 239,50g 4,00g

lvs:
  LV  VG   Attr   LSize   Pool Origin Data%  Meta%  Move Log
Cpy%Sync Convert
  lv_root vg   -wi-ao 233,56g
  lv_swap vg   -wi-ao   1,94g

Both at kernel discovery (during boot) and when issuing lvs or other LVM
commands I have the warning:

"Device /dev/sda2 has size of 19945392 sectors which is smaller than
corresponding PV size of 19945472 sectors. Was device resized? One or
more devices used as PVs in VG vg have changed sizes."


Hi, the warning was added only recently - commit c0912af3 added 2016-01-22:

https://git.fedorahosted.org/cgit/lvm2.git/commit/?id=c0912af3104cb72ea275d90b8b1d68a25a9ca48a

Were the partitions created by anaconda?
This might be an installer bug.

LVM allows overriding PV size to be larger than device size.

More about the feature here: 
https://bugzilla.redhat.com/show_bug.cgi?id=1323950


I do not think pvmove could help here, as there will be read errors on 
the device.


Deactivating swap (`swapoff /dev/$VG/$LV`),
deleting the swap LV (`lvremove $VG/$LV`),
shrinking the PV (simply running `pvresize /dev/sda2` without arguments),
recreating the swap LV (`lvcreate -n $LV -L $SIZE $VG`),
running `mkswap /dev/$VG/$LV`,
and `swapon /dev/$VG/$LV`
should do.

-- Marian



However, my partitions where never shrunk. fdisk on /dev/sda:

Disk /dev/sda: 128.8 GB, 128849018880 bytes
255 heads, 63 sectors/track, 15665 cylinders, total 251658240 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000a3c73

   Device Boot  Start End  Blocks   Id  System
/dev/sda1   *2048 1026047  512000   83  Linux
/dev/sda2 102604820971439 9972696   8e  Linux LVM
/dev/sda320971440   251658239   115343400   8e  Linux LVM

By fdisk output, lvm is right: 20971439-1026048+1=19945392, so sda2 is
80 sectors (40 KB) smaller than lvm expects (19945472 sectors). I
expanded the root volume quite a few times; however, I *never* resized
sda2: at each expansion I took a snapshot of the disk's MBR, so I
already checked that I did not mess with sda2 in the past.

By using the lvm metadata archived in /etc/lvm/archive, I think that the
missing 80 sectors are squarely in the swap space (lv_swap, which used
the last physical extents when lv_root was much smaller), but I am
somewhat worried that, should some process write to these "missing" 80
sectors, something bad could happen to the next adjacent physical extent
(where live data are stored, as it is part of lv_root now).

In short:

1) can someone point me to what happened here, and why lvs only recently
started to complain?

2) do you think my data are at risk?

3) what can I do to solve the problem? I can think of two different
approaches: a) run pvresize /dev/sda2 to shrink the physical volume or b)
shrink the swap partition to be sure nobody will ever write to the last
80 sectors.

Thanks.



___
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/


Re: [linux-lvm] pvmove and filesystem - un/mount?

2016-07-11 Thread Marian Csontos

On 07/08/2016 11:03 AM, lejeczek wrote:



On 07/07/16 19:10, Jarkko Oranen wrote:



On 07/07/16 19:01, lejeczek wrote:

hi users,

must be an easy one - what should (or is allowed to) the filesystem be
doing while there is a pvmove (failed hard disk) taking place in the
underlying LV?

many thanks,


Hi,

pvmove is generally transparent to the layers above, so you can
continue using the system as usual while a pvmove is in progress.
While I have never seen pvmove actually fail (My desktop encountered a
power failure mid-operation once, and it just continued as if nothing
had happened after rebooting), with a failing hard disk, I would avoid
any additional IO while the operation is in progress.



ok, thanks
maybe a bit more tricky, would downsizing a FS (probably also fsck) be OK to
do while pvmove is working?
furthermore, would lvreduce be OK to take pvmove out of an LV ?

I have a bit of a puzzle-type situation here to deal with - I started
pvmove but it seems that the HDD is in rather bad condition, for after two
days pvmove is barely at 4%.
Should I let it finish - or - abort pvmove and try fsck; fsreduce;
lvreduce..
hmmm..


Use ddrescue. pvmove is not intended for rescuing data from failing disks.

ddrescue uses a different strategy: it quickly reads whatever it can,
skipping over any failing regions, and attempts to recover those on
later iterations.
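
Something like this (device names are placeholders; always rescue onto a
different disk and keep the map file):

# ddrescue -d /dev/OLD_DISK /dev/NEW_DISK /root/rescue.map
# ddrescue -d -r3 /dev/OLD_DISK /dev/NEW_DISK /root/rescue.map   # retry the bad areas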


-- Martian







--
Jarkko



___
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/


Re: [linux-lvm] Unable to find device with UUID XXX.

2016-05-18 Thread Marian Csontos

On 05/18/2016 02:22 AM, Brian McCullough wrote:

I have an issue that I hope that the group can save me from.

I have found the Thin discussions interesting, but I am still doing the
Thick thing.


OK, I had a drive that started to cause problems, would not boot without
fsck, etc., and decided to replace it using pvmove.

I made myself a CD with SystemRescueCD ( sp? ) and booted the system
with that.

I attached the new drive to the system before I booted.  It had no PVs
on it, but a set of partitions ( GPT table ) ready to be assigned.

( skipping a couple of steps )

I ran pvmove for about half of the PVs on the drive that was failing,
sending them to the new drive.  All seemed to be well.

THEN, the data cable on the old drive ( USB 3 ) was disturbed, and
pvmove stopped, complaining that it couldn't see the drive.  OK, I can
fix this.  I plugged the cable back in, but things didn't seem to be any
better.


BTW, pvmove is not a data rescue tool. It does not keep a persistent log,
and synchronization between drives has to be resumed from scratch on reboot.


Use ddrescue to get data from disks with errors.



I stopped the system, and rebooted.

I have rebooted several times, with not completely consistent results.

A couple of times it stopped when it said that it was loading the MD
drivers.


We need to know more to help.

Are you using md raid? If so, is it not better to use mdadm to replace and
synchronize the drive?


What distribution, version of LVM and kernel?

`lsblk` output would help us to see what's there.
`blkid` output would help us to see other bits and pieces.
Also `vgs`, `lvs -avo+devices`, `pvs` output, too, please.




When I seem to have got the system booted properly, I do a pvs and it
complains about missing UUIDs.


My guess is some of the PVs (those on the old drive) were marked missing
and would have to be re-added to the VG using `vgextend --restoremissing`
before you can continue.


Another possibility: is lvmetad enabled and running?
If so, it may be that there are some duplicates there, and older
versions of LVM did not always warn about that.


-- Martian



I did a "pvscan -u" and thought that I was seeing the UUIDs in the
bottom part of the display, after it had complained about the missing
ones, but I think that I was just seeing what it wanted to see, not what
it was actually finding.

At this point, I seem to be dead in the water, and can't resume the
pvmove process.

Can you guide me back to the light, and help me finish what I started to
do, so that I can get this machine back to work?


Thank you,
Brian





___
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/


Re: [linux-lvm] Thin Pool Performance

2016-04-28 Thread Marian Csontos

On 04/20/2016 09:50 PM, shankha wrote:

Chunk size for lvm was 64K.


What's the stripe size?
Does 8 disks in RAID5 mean 7x data + 1x parity?

If so, a 64k chunk cannot be aligned with the RAID5 stripe size and each
write potentially rewrites 2 stripes - rather painful for random writes, as
writing 4k of data allocates a 64k chunk which may require 2 stripes -
almost twice the amount of data written compared to pure RAID.
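
To put numbers on it (illustrative only - the actual RAID strip size was not
reported): with 7 data disks and a 16k strip per disk, a full stripe is
7 x 16k = 112k. A 64k thin chunk starting at offset 64k covers 64k-128k and
crosses the 112k stripe boundary, so provisioning it touches two stripes and
costs two read-modify-write cycles for a single 4k application write.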


-- Martian



Thanks
Shankha Banerjee


On Wed, Apr 20, 2016 at 11:55 AM, shankha  wrote:

I am sorry. I forgot to post the workload.

The fio benchmark configuration.

[zipf write]
direct=1
rw=randrw
ioengine=libaio
group_reporting
rwmixread=0
bs=4k
iodepth=32
numjobs=8
runtime=3600
random_distribution=zipf:1.8
Thanks
Shankha Banerjee


On Wed, Apr 20, 2016 at 9:34 AM, shankha  wrote:

Hi,
I had just one thin logical volume and was running fio benchmarks. I tried
having the metadata on a raid0. There was a minimal increase in
performance. I had thin pool zeroing switched on. If I switch off
thin pool zeroing, initial allocations are faster but the final
numbers are almost similar. The size of the thin pool metadata LV was
16 GB.
Thanks
Shankha Banerjee


On Tue, Apr 19, 2016 at 4:11 AM, Zdenek Kabelac  wrote:

On 19.4.2016 at 03:05, shankha wrote:


Hi,
Please allow me to describe our setup.

1) 8 SSDs with a raid5 on top of them. Let us call the raid device
dev_raid5.
2) We create a Volume Group on dev_raid5
3) We create a thin pool occupying 100% of the volume group.

We performed some experiments.

Our random write operations dropped by half and there was a significant
reduction for
other operations (sequential read, sequential write, random reads) as
well, compared to native raid5.

If you wish I can share the data with you.

We then changed our configuration from one POOL to 4 POOLS and were able
to
get back to 80% of the performance (compared to native raid5).

To us it seems that the lvm metadata operations are the bottleneck.

Do you have any suggestions on how to get back the performance with lvm ?

LVM version: 2.02.130(2)-RHEL7 (2015-12-01)
Library version: 1.02.107-RHEL7 (2015-12-01)




Hi


Thanks for playing with thin-pool, however your report is largely
incomplete.

We do not see your actual VG setup.

Please attach 'vgs/lvs' output, i.e. thin-pool zeroing (if you don't need it
keep it disabled), chunk size (use bigger chunks if you do not need
snapshots), number of simultaneously active thin volumes in a single
thin-pool (running hundreds of loaded thinLVs is going to lose the battle on
locking), size of the thin pool metadata LV - is this LV located on a
separate device (you should not use RAID5 for metadata) -
and what kind of workload do you run?

Regards

Zdenek




___
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/