On Wed, 27 Sep 2023, Jean-Marc Saffroy wrote:
So I prefer to manage available raw (un-encrypted) space with LVM.
Now, I also need to do backups of /home, and that's why I want
snapshots. But that first layer of LVM would only show a snapshot of
an encrypted volume, and the backup job shouldn't
On Mon, 28 Aug 2023, Roska Postit wrote:
After reading your answer more carefully I got the following idea:
How do you see if I boot the system (this is a desktop computer and the old
and the new drive are both NVMe SSD) from USB Linux and then just do a 'dd'
for the entire drive (in block
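The approach described above, booting a live USB and copying the whole drive block-for-block, might be sketched like this. Device names are assumptions (old 500GB disk as /dev/nvme0n1, new 2TB disk as /dev/nvme1n1); verify with lsblk before running anything, since dd to the wrong target is unrecoverable:

```shell
# Raw block copy of the entire old drive onto the new one.
# Run from a live USB so neither drive is in use.
dd if=/dev/nvme0n1 of=/dev/nvme1n1 bs=4M status=progress conv=fsync
# The copy leaves the extra 1.5TB unused; afterwards grow the
# partition, PV, and LV into the new space, e.g.:
#   parted /dev/nvme1n1 resizepart 3 100%
#   pvresize /dev/nvme1n1p3
#   lvextend -r -l +100%FREE vg0/root    # vg0/root is a placeholder
```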
On Mon, 28 Aug 2023, Phillip Susi wrote:
Why would you use dd/partclone instead of just having LVM move
everything to the new drive on the fly?
Partition the new drive, use pvcreate to initialize the partition as a
pv, vgextend to add the pv to the existing vg, pvmove to evacuate the
logical
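A sketch of the sequence Phillip outlines, with hypothetical names (old PV /dev/nvme0n1p3, new drive /dev/nvme1n1, VG "vg0"). Note the EFI/boot partitions are outside LVM and would still need to be copied separately:

```shell
# On-line migration: the VG stays mounted throughout.
sgdisk -n 1:0:0 -t 1:8e00 /dev/nvme1n1   # one whole-disk LVM partition
pvcreate /dev/nvme1n1p1                  # initialize it as a PV
vgextend vg0 /dev/nvme1n1p1              # add the PV to the existing VG
pvmove /dev/nvme0n1p3 /dev/nvme1n1p1     # evacuate all extents from the old PV
vgreduce vg0 /dev/nvme0n1p3              # remove the emptied old PV from the VG
pvremove /dev/nvme0n1p3                  # wipe its PV label
```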
On Sun, 27 Aug 2023, Roska Postit wrote:
What is the most proper way to swap my 500GB SSD drive to the bigger 2TB SSD
drive in the following LVM configuration?
nvme0n1 259:0 0 465,8G 0 disk
├─nvme0n1p1 259:1 0 512M 0 part /boot/efi
├─nvme0n1p2 259:2 0
I use a utility that maps bad sectors to files, then move/rename the
files into a bad blocks folder. (Yes, this doesn't work when critical
areas are affected.) If you simply remove the files, then
modern disks will internally remap the sectors when they are written
again - but the quality of
On Tue, Nov 01, 2022 at 01:36:17PM +0800, Zhiyong Ye wrote:
I want to implement live migration of VMs in the lvm + lvmlockd + sanlock
environment. There are multiple hosts in the cluster using the same iscsi
connection, and the VMs are running on this environment using thinlv
volumes. But if
Check out https://github.com/sdgathman/lbatofile
It was written to identify the file affected by a bad block (so it goes
the opposite direction), but the getpvmap() function obtains pe_start and
pe_size plus the list of segments. findlv() goes through the segments to
find the one an absolute
On Sat, 7 May 2022, Alex Lieflander wrote:
I don’t trust the hardware I’m running on very much, but it’s all I have to
work with at the moment; it’s important that the array is resilient to *any*
(and multiple) single chunk corruptions because such corruptions are likely to
happen in the
On Fri, 6 May 2022, Alex Lieflander wrote:
Thanks. I really don’t want to give up the DM-Integrity management. Less
complexity is just a bonus.
What are you trying to get out of RAID6? If redundancy and integrity
are already managed at another layer, then just use RAID0 for striping.
I
On Sun, 2022-01-30 at 11:45 -0500, Demi Marie Obenour wrote:
> On Sun, Jan 30, 2022 at 11:52:52AM +0100, Zdenek Kabelac wrote:
> >
>
> > Since you mentioned ZFS - you might want focus on using 'ZFS-only'
> > solution.
> > Combining ZFS or Btrfs with lvm2 is always going to be a painful
> > way
On Mon, 3 Jan 2022, Roland wrote:
any chance to retrieve this information for automated/script-based
processing ?
You might find this script enlightening:
https://github.com/sdgathman/lbatofile
It maps bad sectors to partition,LV,file,etc
The relevant function for your question is
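Beyond reading that script, lvm2 itself can emit its reports as JSON (available since lvm2 2.02.158), which suits this kind of automated processing directly. The VG name below is a placeholder:

```shell
# Machine-readable segment and PV layout, in bytes:
lvs --reportformat json --units b \
    -o lv_name,seg_start_pe,seg_size_pe,seg_pe_ranges vg0
pvs --reportformat json --units b -o pv_name,pe_start vg0
```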
On Tue, 28 Dec 2021, Tomas Dalebjörk wrote:
Yes, it is an incremental backup based on the cow device
I've used such a COW based backup (can't remember the name just now, currently
using DRBD and rsync for incremental mirrors). The way it worked was to
read and interpret the raw COW device
If you want to give it a try, just create a snapshot on a specific device
And change all the blocks on the origin, there you are, you now have a cow
device containing all data needed.
To move this snapshot device to another server, reattach it to an empty
lv volume as a snapshot, using lvconvert.
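A sketch of that idea, with made-up names (origin vg0/data, snapshot datasnap); the lvconvert step is the reattach described above, per lvconvert(8):

```shell
# Take the snapshot; its COW LV accumulates the changed blocks.
lvcreate -s -L 1G -n datasnap vg0/data
# ...copy the COW LV to the other server by any block-level means,
# then on the target attach the copied LV to the matching origin
# as a snapshot (origin first, then the COW LV):
lvconvert --snapshot vg0/data vg0/datasnap_copy
```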
On Tue, 17 Aug 2021, Chethan Seshadri wrote:
Can someone help to convert an offset within an lvol to the corresponding
pvol offset using...
1. lvm commands
2. lvmdbusd APIs
This utility does that:
https://github.com/sdgathman/lbatofile
See getlv() and getpvmap()
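For reference, the arithmetic those functions implement can be sketched directly. All the numbers below are made up; the commented lvs/pvs invocations show where the real values come from on a live system:

```shell
# Map a byte offset within an LV to the byte offset on its PV.
# Real values come from:
#   lvs --units b -o lv_name,seg_start_pe,seg_size_pe,seg_pe_ranges vg/lv
#   pvs --units b -o pv_name,pe_start
pe_start=1048576      # bytes from PV start to first extent (pvs -o pe_start)
pe_size=4194304       # extent size in bytes (vgs -o vg_extent_size)
seg_le_start=0        # first logical extent of the segment holding the offset
seg_pe_start=100      # first physical extent of that segment (seg_pe_ranges)

lv_off=10485760       # offset within the LV, in bytes
lv_extent=$(( lv_off / pe_size ))
in_extent=$(( lv_off % pe_size ))
pv_extent=$(( seg_pe_start + lv_extent - seg_le_start ))
pv_off=$(( pe_start + pv_extent * pe_size + in_extent ))
echo "$pv_off"        # 430964736 with the sample numbers above
```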
On Mon, 28 Jun 2021, heming.z...@suse.com wrote:
In my opinion, the using style of btrfs by many users are same as ext4/xfs.
Yes. I like the checksums in metadata feature for enhanced integrity
checking.
It seems too complicated to have anytime soon - but when a filesystem
detects
On Tue, 15 Sep 2020, Tomas Dalebjörk wrote:
ok, let's say that I have 10 LVs on a server, and want to create a thin lv
snapshot every hour and keep that for 30 days; that would be 24h *
30days * 10lv = 7200 lv
if I want to keep snapshot copies from more nodes, to serve a single
repository of
On Sat, 22 Aug 2020, L A Walsh wrote:
I am trying to create a new pv/vg/+lvs setup but am getting some weird messages
pvcreate -M2 --pvmetadatacopies 2 /dev/sda1
Failed to clear hint file.
WARNING: PV /dev/sdd1 in VG Backup is using an old PV header, modify
the VG to update.
Physical
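For the old-PV-header warning specifically, recent lvm2 (2.03 and later) has a one-shot fix that rewrites the VG metadata; the VG name comes from the warning text:

```shell
# Clears "PV ... is using an old PV header, modify the VG to update":
vgck --updatemetadata Backup
```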
On Wed, 15 Apr 2020, Shock Media B.V. support wrote:
We use an mdadm raid-config consisting of 4 or more SSD's/Disks where
we use part of the disks for a raid1,raid10 or raid5. We create
volumes on 2 nodes and use DRBD to keep these 2 volumes in sync and we
run a virtual machine (using KVM) on
On Sat, 22 Feb 2020, Eric Toombs wrote:
Snapshot creation is already pretty fast:
$ time sudo lvcreate --size 512M --snapshot --name snap /dev/testdbs/template
Logical volume "snap" created.
0.03user 0.05system 0:00.46elapsed 18%CPU (0avgtext+0avgdata 28916maxresident)k
On Mon, 27 Jan 2020, Matthias Leopold wrote:
I consciously used "pvmove --abort" for the first time now and I'm astonished
it doesn't behave as described in the man page. No matter if I've used
"--atomic" for the original command, when I interrupt the process with
"pvmove --abort" lvm
--
Stuart D. Gathman
"Confutatis maledictis, flammis acribus addictis" - background song for
a Microsoft sponsored "Where do you want to go from here?" commercial.
___
linux-lvm mailing list
linux-lvm@redhat.com
ht
No worse than raid5. In fact, better because the 2nd fault always
kills the raid5, but only has a 33% or less chance of killing the
raid10. (And in either case, it is usually just specific sectors,
not the entire drive, and other manual recovery techniques can come into
play.)
drops to a
magnitude below. The reason I am looking for a striped setup is to
The mdadm layer already does the striping. So doing it again in the LVM
layer completely screws it up. You want plain JBOD (Just a Bunch
Of Disks).
available as a last resort would ratchet up the
reliability of thin pools.
, which would be stored with the cow.
But none of those tools currently exist.
requires marking the root node deleted - no need to write all the
leaf nodes.
, keeping a count.
(IMO it should also log the block offset so that I can occasionally check
that the out of sync occurred in an expected volume.)
filesystem on a 4096 device.
. SSD drives have a "sector" size
of 128k or 256k - the erase block, and performance improves when aligned
to that.
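The alignment can be requested at pvcreate time; the 256k figure and device name are assumptions from the discussion above:

```shell
# Align the PV data area to a presumed 256k erase block:
pvcreate --dataalignment 256k /dev/sdX1   # /dev/sdX1 is a placeholder
```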
It's not very elegant, but the quick and dirty solution is to use sudo
to allow certain users to run specific commands with a real uid of
root. You can say exactly what arguments the user has to use - the
sudoers file is where this is configured. Or you can make a script -
which is probably
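A hypothetical sudoers fragment along those lines; the user name, VG, and LV are made up, and pinning the full argument list is what keeps the grant narrow:

```
# /etc/sudoers.d/lvm-snapshots (hypothetical): let user 'backup' run
# exactly one lvcreate invocation as root, and nothing else.
backup ALL=(root) NOPASSWD: /usr/sbin/lvcreate --size 512M --snapshot --name snap /dev/vg0/data
```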
of a failure). Been using this configuration
in my 5 drive QNAP NAS's for a long time.
Yep. Not talking about raid1+0
Linux raid10 really ought to be a "standard" - and effectively is.
I use it whenever I can (with only 2 disks I use raid1 so I can alias
the legs as non-raid).
there was a flag on the LV to ensure it remained a single segment.
delay boot for a large CoW.
For the common purpose of temporary snapshots for consistent backups,
this is not an issue.
is that *all* the volumes in the same thin-pool would
have to be frozen when running out of extents, as writes all pull from
the same pool of physical extents.