On 2023-04-09 14:21, Arvid Picciani wrote:
Hi,
doing some performance tests i noticed that lvmraid + integrity +
thinpool outperforms zfs z1 by 5x while offering the same features.
(snapshots, integrity)
Is this somehow unsafe or how come it is so unpopular?
lvcreate --type raid1 --mirrors
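(For reference, a minimal sketch of the stack being described - VG/LV
names, sizes and the mirror count are examples only, and I have not
verified that lvconvert accepts an integrity-enabled raid LV as thin-pool
data:)
lvcreate --type raid1 --mirrors 1 --raidintegrity y -L 1T -n pooldata vg
lvcreate --type raid1 --mirrors 1 -L 16G -n poolmeta vg
lvconvert --type thin-pool --poolmetadata vg/poolmeta vg/pooldata
lvcreate -n thinvol -V 500G --thinpool vg/pooldata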
On 2023-03-02 19:33, Roger Heflin wrote:
On Thu, Mar 2, 2023 at 11:44 AM Gionatan Danti
wrote:
It is a 100G cache over 16TB, so even if it flushes in order they may
not be that close to each other (1 in 160).
Yes, but destaging in LBA order (albeit far apart) is much better than
in
On 2023-03-02 01:51, Roger Heflin wrote:
A spinning raid6 array is slow on writes (see raid6 write penalty).
Because of that the array can only do about 100 write operations/sec.
True. But does flushing cached data really proceed in random LBA order
(as seen by HDDs), rather than trying
On 2022-11-16 11:50, Zdenek Kabelac wrote:
Well - as said - vg on vg is basically equivalent of the original LV
on top of VDO manager.
Hi Zdenek,
it seems clunkier to manage two nested VG/LV if you ask me.
But still these fast snapshots do not solve the problem of double
out-of-space fau
On 2022-11-15 22:56, Zdenek Kabelac wrote:
You could try 'vg' on top of another 'vg' - however I'd not recommend
using it this way (and it's unsupported (& unsupportable) by lvm2 in
general)
Hi Zdenek,
yeah, I would strongly avoid that outside lab testing.
IMHO I'd not recommend to combi
Dear all,
as the previous vdo utilities are gone in RHEL9, VDO volumes must be created via
lvm-vdo and associated lvm commands.
What is not clear to me is how to combine lvm-vdo with lvmthin (for fast
CoW snapshots) and/or lvmcache (for SSD caching of an HDD pool). For
example, with the old /dev/mapp
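(For context, the basic lvm-vdo form documented in lvmvdo(7) is roughly
the following - names and sizes are examples only:)
lvcreate --type vdo -n vdolv -L 10T -V 30T vg/vdopool
mkfs.xfs -K /dev/vg/vdolv
(the open question above is how to layer lvmthin or lvmcache below/above
such a volume.)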
On 2022-09-27 12:10, Roberto Fastec wrote:
questions
1. Given premise 3, the corresponding LVM2 metadata/tables are, and
will be, just (allow me the term) a "grid" "mapping that space" in an
ordered sequence, so that the subsequent use (and filling) of the RAID
space will "just mark" the used ones
On 2022-08-17 20:54, Zdenek Kabelac wrote:
https://github.com/prajnoha/sid
Thanks for sharing. From the linked page:
"SID positions itself on top of udev, reacting to uevents. It is closely
interlinked and cooperating with udev daemon. The udev daemon is
enhanced with specialized sid ude
On 2022-08-17 17:26, Zdenek Kabelac wrote:
I like the general idea of the udev watch. It is the magic that causes
newly created partitions to magically appear in the system, which is
Would disabling the watch rule be a reasonable approach in this case? If
the user wants to scan a new device
On 2022-06-16 18:19, Demi Marie Obenour wrote:
Also heavy fragmentation can make journal replay very slow, to the
point
of taking days on spinning hard drives. Dave Chinner explains this
here:
https://lore.kernel.org/linux-xfs/20220509230918.gp1098...@dread.disaster.area/.
Thanks, the li
On 2022-06-16 09:53, Demi Marie Obenour wrote:
That seems reasonable. My conclusion is that dm-thin (which is what
LVM
uses) is not a good fit for workloads with a lot of small random writes
and frequent snapshots, due to the 64k minimum chunk size. This also
explains why dm-thin does not
On 2022-06-15 11:46, Zhiyong Ye wrote:
I also think it meets expectations. But is there any other way to
optimize snapshot performance at the code level? Does it help to
reduce the chunksize in the code? I see in the help documentation
that the chunksize can only be 64k minimum.
I don'
On 2022-06-15 09:42, Zhiyong Ye wrote:
I regenerated the thin volume with the chunksize of 64K and the random
write performance data tested with fio 64k requests is as follows:
case                  iops
thin lv               9381
snapshotted thin lv   8307
As expected, increasing I/O
On 2022-06-14 15:29, Zhiyong Ye wrote:
The reason for this may be that when the volume creates a snapshot,
each write to an existing block will cause a COW (Copy-on-write), and
the COW is a copy of the entire data block in chunksize, for example,
when the chunksize is 64k, even if only 4k of
On 2022-06-14 12:16, Zhiyong Ye wrote:
After creating the PV and VG based on the iSCSI device, I created the
thin pool as follows:
lvcreate -n pool -L 1000G test-vg
lvcreate -n poolmeta -L 100G test-vg
lvconvert --type thin-pool --chunksize 64k --poolmetadata
test-vg/poolmeta test-vg/pool
lvc
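(The thin volume and a snapshot for such a test would be created roughly
as follows - names and sizes are examples, the actual commands are
truncated above:)
lvcreate -n thinlv -V 500G --thinpool test-vg/pool
lvcreate -s -n thinsnap test-vg/thinlv
lvchange -ay -K test-vg/thinsnap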
On 2022-06-13 10:49, Zhiyong Ye wrote:
The performance degradation after snapshotting is expected as writing
to a snapshotted lv involves reading the original data, writing it
elsewhere and then writing new data into the original chunk. But the
performance loss was so much more than I expect
On 2022-03-07 12:09, Gaikwad, Hemant wrote:
Hi,
We have been looking at LVM as an option for long term backup using
the LVM snapshots. After looking at the various forums and also
looking at a few LVM defects, realized that LVM could be a very good
option for short term backup, but might res
On 2022-01-31 16:28, Demi Marie Obenour wrote:
thin_trim is a userspace tool that works on an entire thin pool, and I
suspect it may be significantly faster than blkdiscard of an individual
thin volume. That said, what I would *really* like is something
equivalent to fstrim for thin volumes:
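(For reference, the per-volume and per-pool variants being compared are
roughly the following - device paths are illustrative, and the thin_trim
invocation is my reading of its man page; the pool must be offline for it:)
blkdiscard /dev/vg/thinvol
fstrim /mount/point/of/thinvol
thin_trim --metadata-dev /dev/mapper/vg-pool_tmeta --data-dev /dev/mapper/vg-pool_tdata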
On 2022-01-31 14:41, Zdenek Kabelac wrote:
> 'issue_discard' relates only to the internal lvm2 logic when some
extents become free for reuse (so i.e. after
'lvremove/lvreduce/vgremove...').
However since with thin volumes no physical extents of VG are released
(as the thin volume is releasin
On 2022-01-29 18:45, Demi Marie Obenour wrote:
Is it possible to configure LVM2 so that it runs thin_trim before it
activates a thin pool? Qubes OS currently runs blkdiscard on every
thin
volume before deleting it, which is slow and unreliable. Would running
thin_trim during system startu
On 2022-01-30 22:17, Demi Marie Obenour wrote:
On Xen, the paravirtualised block backend driver (blkback) requires a
block device, so file-based virtual disks are implemented with a loop
device managed by the toolstack. Suggestions for improving this
less-than-satisfactory situation are welc
On 2022-01-30 22:39, Stuart D. Gathman wrote:
I use LVM as flexible partitions (i.e. only classic LVs, no thin pool).
Classic LVs perform like partitions, literally using the same driver
(device mapper) with a small number of extents, and are if anything
more recoverable than partition tables
On 2022-01-30 12:18, Zdenek Kabelac wrote:
> Thin is more oriented towards extreme speed.
VDO is more about 'compression & deduplication' - so space efficiency.
Combining both together is kind of harming their advantages.
Unfortunately, it is the only (current) solution to have snapshotti
On 2022-01-30 18:43, Zdenek Kabelac wrote:
Chain filesystem->block_layer->filesystem->block_layer is something
you most likely do not want to use for any well performing solution...
But it's ok for testing...
I second that.
Demi Marie - just a question: are you sure you really need a b
On 2021-09-02 05:26, Yu, Mingli wrote:
Per
https://git.kernel.org/pub/scm/fs/ext2/e2fsprogs.git/tree/doc/RelNotes/v1.46.4.txt
[1], after e2fsprogs upgrades to 1.46.4, the defaults for mke2fs now
call for 256 byte inodes for all file systems (with the exception of
file systems for the GNU Hurd
On 2021-08-10 10:40, Ming-Hung Tsai wrote:
It depends on the number of thin volumes and the space utilization. A
selective dump might also help, e.g.,
# thin_dump --dev-id --dev-id ...
The thin-ids could be obtained from lvs, using listing:
# lvs -o lv_name,vg_name,thin_id
or querying:
#
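(Put together, a selective dump might look roughly like this - the
metadata device path is an example, and reserving a metadata snapshot
lets thin_dump run while the pool stays active:)
lvs -o lv_name,vg_name,thin_id vg
dmsetup message vg-pool-tpool 0 reserve_metadata_snap
thin_dump -m --dev-id 1 --dev-id 3 /dev/mapper/vg-pool_tmeta
dmsetup message vg-pool-tpool 0 release_metadata_snap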
On 2021-08-09 05:53, Ming-Hung Tsai wrote:
It sounds like you intend to keep snapshots that have been updated
(written) since their creation, right?
True.
The precise way might be checking the data mappings via thin_dump. An
updated device has data mappings with timestamps greater than the
Dear all,
as you know, a thin snapshot does have the "k" (skip activation) flag
set, so one has to force activation by ignoring the flag (or removing
the flag itself).
I wonder: can we detect if a volume/snapshot was *ever* activated? My
reasoning is that a never-activated snapshot surely did
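(For reference, the flag handling mentioned above - LV names are
examples:)
lvs -o lv_name,lv_attr vg                     ("k" in the attr string marks skip-activation)
lvchange -ay -K vg/thinsnap                   (activate once, ignoring the flag)
lvchange --setactivationskip n vg/thinsnap    (drop the flag permanently)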
On 2021-06-29 01:00, Chris Murphy wrote:
Pretty sure it's fixed since 4.14.
https://lkml.org/lkml/2019/2/10/23
Hi Chris, the headline states "harden against duplicate fsid". Does it
mean that the issue is "only" less likely, or was it really solved?
It's not inherently slow, it's a trac
On 2021-06-28 05:28, Stuart D Gathman wrote:
Yes. I like the checksums in metadata feature for enhanced integrity
checking.
Until recently btrfs had issues when an LVM snapshot was mounted. Is it
now solved?
It seems too complicated to have anytime soon - but when a filesystem
detects co
On 2020-11-29 16:12, Sreyan Chakravarty wrote:
What about a swap file allocated with dd??
If thinpool metadata can be paged out (which I don't know for sure),
swap on thinvol can potentially deadlock.
On 2020-11-29 01:18, Chris Murphy wrote:
What about a swapfile (on ext4 or XFS) on a thin volume? In this
case, I'd expect fallocate would set the LE to PE mapping, and it
should work. But does it work for both paging and hibernation files?
If things did not change, fallocate does *not* al
On 2020-11-21 04:10, Sreyan Chakravarty wrote:
I mean what is the point of creating a snapshot if I can't change my
original volume ?
Is there some sort of resolution ?
External thin snapshots are useful to share a common, read-only base (ie:
a "gold-master" image) with different writable
On 2020-09-17 21:27, Zdenek Kabelac wrote:
You've most likely found the bug and this should likely be disabled
(and enabled only with some force option).
Hi Zdenek, I am not sure about what bug I found - can you be more
explicit?
Problem is, when such device stack is used for XFS - where
On 2020-09-15 20:34, Zdenek Kabelac wrote:
On 14. 09. 20 at 23:44, Gionatan Danti wrote:
Hi all,
I am testing lvmcache with VDO and I have an issue with device block
size.
The big & slow VDO device is on top of a 4-disk MD RAID 10 device
(itself on top of dm-integrity). Over the
On 2020-09-15 23:47, Zdenek Kabelac wrote:
You likely don't need such amount of 'snapshots' and you will need to
implement something to remove snapshots that are no longer needed, so i.e. after a
day you will keep maybe 'every-4-hour' snapshots and after a couple of days maybe
only a day-level snapshot. After a month per
On 2020-09-15 23:30, Stuart D Gathman wrote:
My feeling is that btrfs is a better solution for the hourly snapshots.
(Unless you are testing a filesystem :-)
For fileserver duty, sure - btrfs is adequate.
For storing VMs and/or databases - no way, thinvol is much faster
Side note: many btr
Hi all,
I am testing lvmcache with VDO and I have an issue with device block sizes.
The big & slow VDO device is on top of a 4-disk MD RAID 10 device
(itself on top of dm-integrity). Over the VDO device I created a
thinpool and a thinvol [1]. When adding the cache device to the volume
group via v
On 2020-09-09 21:53, John Stoffel wrote:
Very true, numbers talk, anecdotes walk...
Sure - let's try to gather some numbers from the data you posted
before...
sudo lvcache status data/home
+---+--+
| Field | Value|
+---
On 2020-09-09 21:41, Roy Sigurd Karlsbakk wrote:
First, filelevel is usually useless. Say you have 50 VMs with Windows
server something. A lot of them are bound to have a ton of equal
If you look at IOPS instead of just sequential speed, you'll see the
difference. A set of 10 drives in a RAID
On 2020-09-09 20:47, John Stoffel wrote:
This assumes you're tiering whole files, not at the per-block level
though, right?
The tiered approach I developed and maintained in the past, yes. For any
LVM-based tiering, we are speaking about block-level tiering (as LVM
itself has no "files" c
On 2020-09-09 17:01, Roy Sigurd Karlsbakk wrote:
First, filelevel is usually useless. Say you have 50 VMs with Windows
server something. A lot of them are bound to have a ton of equal
storage in the same areas, but the file size and content will vary
over time. With blocklevel tiering, that c
On 2020-09-02 20:38, Roy Sigurd Karlsbakk wrote:
Hi all
I just wonder how it could be possible some day, some year, to make
lvm use tiering. I guess this has been debated numerous times before
and I found this lvmts project, but it hasn't been updated for eight
years or so.
Hi, having deve
On 2020-08-30 21:30, Zdenek Kabelac wrote:
Hi
Lvm2 has only ascii metadata (so basically what is stored in
/etc/lvm/archive is the same as in PV header metadata area -
just without spaces and some comments)
And while this is great for manual recovery, it's not
very efficient in storing larg
On 2020-08-30 19:33, Zdenek Kabelac wrote:
For illustration, for 12.000 LVs you need ~4MiB just to store the ASCII
metadata itself, and you need metadata space for keeping at least 2 of
them.
Hi Zdenek, you are speaking of classical LVM metadata, right?
Handling of operations like 'vgremove' wi
On 2020-07-14 18:05, David Teigland wrote:
On Mon, Jul 13, 2020 at 04:34:52PM +0200, Janne Heß wrote:
However some of my systems are single-disk systems. For those, RAIDs
are
not possible so I was thinking if LVM has some support for single-PV
setups with parity on the same PV.
Hi,
We di
On 7/13/20 4:34 PM, Janne Heß wrote:
Hello everyone,
I'm currently testing dm-integrity and its use with LVM.
For RAID 1,5,6 LVM should just be able to recover the RAID when integrity fails
(and the block device returns a read error).
However some of my systems are single-disk systems. For thos
On 2020-06-23 23:02, Zdenek Kabelac wrote:
Hi
ATM skilled admin can always easily enforce:
'dmsetup remove --force vg-lv'
Hi Zdenek,
sure, but I find messing with dmsetup more error prone than using an LVM
command.
for i.e. linear devices to achieve this goal - however resolving thi
On 2020-06-23 22:28, Zdenek Kabelac wrote:
Note - you cannot 'remove' mappings 'in-use' (aka open count of a
device
is higher than 0 - see 'dmsetup info -c' output for this).
However you can replace such mapping with 'error' target - so the
underlying device is relaxed - although we do no
On 2020-05-28 09:35, lampahome wrote:
I create a vg1 on one SSD and create a lv1 in vg1.
Then I run:
sudo thin_check /dev/mapper/vg1-lv1
It shows:
examining superblock
superblock is corrupt
bad checksum in superblock
Can someone teach me how to fix corrupted superblock?
Hi, thin_che
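(For reference, thin_check expects a thin-pool *metadata* device rather
than a regular LV; assuming the pool were vg1/pool, the usual repair
path is roughly:)
lvchange -an vg1/pool
lvconvert --repair vg1/pool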
On 2020-03-24 16:09, Zdenek Kabelac wrote:
In the past we had a problem that when users have been using a huge
chunk size and a small 'migration_threshold' - the cache was unable to
demote chunks from the cache to the origin device (the size of
'required' data for demotion was bigger than what ha
On 2020-03-24 10:43, Zdenek Kabelac wrote:
By default we require migration threshold to be at least 8 chunks big.
So with big chunks like 2MiB in size - that gives you 16MiB of required I/O
threshold.
So if you do i.e. read 4K from disk - it may cause i/o load of 2MiB
chunk block promotion into
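(For reference, the threshold can be inspected and raised per cached LV
- names are examples, and the value should be in 512-byte sectors, so
32768 is about 16MiB:)
lvs -o lv_name,chunk_size,kernel_cache_settings vg/cachedlv
lvchange --cachesettings 'migration_threshold=32768' vg/cachedlv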
On 2020-02-22 12:58, Eric Toombs wrote:
So, is there a sort of "dumber" way of making these snapshots, maybe by
changing the allocation algorithm or something?
Hi, I think that total snapshot creation time is dominated by LVM
flushing its (meta)data to the physical disks. Two things to try
On 2020-02-15 21:49, Chris Murphy wrote:
Are you referring to this known problem?
https://btrfs.wiki.kernel.org/index.php/Gotchas#Block-level_copies_of_devices
Yes.
By default the snapshot LV isn't active, so the problem doesn't
happen. I've taken many LVM thinp snapshots of Btrfs file sy
On 2020-02-15 21:19, Zdenek Kabelac wrote:
IMHO ZFS is 'somewhat' slow to play with...
and I've no idea how ZFS can resolve all correctness issues in
kernel...
Zdenek
Oh, it surely does *not* solve all correctness issues. Rather, having
much simpler constraints (and use cases), it simply
On 2020-02-15 13:40, Zdenek Kabelac wrote:
On 14. 02. 20 at 21:40, David Teigland wrote:
On Fri, Feb 14, 2020 at 08:34:19PM +0100, Gionatan Danti wrote:
Hi David, filters being one of the most frequently asked questions, can I ask
why we
have so many different filters, leading to such complex
On 2020-02-14 21:40, David Teigland wrote:
You're right, filters are difficult to understand and use correctly.
The
complexity and confusion in the code is no better. With the removal of
lvmetad in 2.03 versions (e.g. RHEL8) there's no difference between
filter
and global_filter, so that'
On 2020-02-14 20:11, David Teigland wrote:
Hi, it looks like a bug led to an incorrect filter configuration
actually
working for a period of time. When the bug was later fixed, the
incorrect
filter became apparent. In summary, the correct way to exclude devs
from
lvmetad (and to handle du
On 20-01-2020 15:40, Zdenek Kabelac wrote:
Yep - kernel metadata 'per thin LV' are reasonably small - so even for
big thin devices it still should fit within your time boundaries.
(effectively thin snapshot just increases 'mapping' sharing between
origin and its snapshot - so the time needed
On 20-01-2020 10:22, Zdenek Kabelac wrote:
So having thousands of LVs in a single VG will probably become your
bottleneck.
Hi Zdenek, I was thinking more about having few LVs, but with different
amount of data/mapping.
For example, is a very fragmented volume (ie: one written randomical
Hi list,
just for confirmation: is the time needed to take a snapshot constant,
or does it depend on how many chunks are mapped in/by the specific thin
pool and volume? In other words, is snapshot create time O(1) or O(n)?
Thanks.
On 14/01/20 10:17, Zdenek Kabelac wrote:
Hi
You can't use lvm2 on 'raw' device - we do require PV headers to stay
correct.
BUT
Clearly you can use 'dm-cache' target with 'dmsetup' utility and
maintain the correctness of devices & activation yourself.
See: linux/Documentation/admin-guide/d
Hi all,
I have a question: can lvmcache be used to cache non-lvm source (ie: a
raw block device)?
From the lvmcache man page I would say no; however, a direct
confirmation will be helpful.
Thanks.
On 13/01/20 15:49, Zdenek Kabelac wrote:
Hi
Well the size is 'almost' 16GiB - and when the size of thin-pools
metadata is always maintained by lvm2 - it's OK - the size is
internally 'clamped' correctly - the problem is when you use this size
'externally' - so you make 16GiB regular LV used
On 12/01/20 19:11, Zdenek Kabelac wrote:
With 16G there is 'problem' (not yet resolved known issue) with
different max size used by thin_repair (15.875G) & lvm2 (15.8125G) tools.
If you want to go with current max size supported by lvm2 - use the
value -L16192M.
Hi Zdenek,
just for confirmat
On 09/12/19 11:26, Daniel Janzon wrote:
Exactly. The md driver executes on a single core, but with a bunch of RAID5s
I can distribute the load over many cores. That's also why I cannot join the
bunch of RAID5's with a RAID0 (as someone suggested) because then again
all data is pulled through a si
On 08-12-2019 00:14, Stuart D. Gathman wrote:
On Sat, 7 Dec 2019, John Stoffel wrote:
The biggest harm to performance here is really the RAID5, and if you
can instead move to RAID 10 (mirror then stripe across mirrors) then
you should see a performance boost.
Yeah, That's what I do. RAID1
On 23-10-2019 17:37, Zdenek Kabelac wrote:
Hi
If you use 1MiB chunksize for thin-pool and you use 'dd' with proper
bs size
and you write 'aligned' on 1MiB boundary (be sure you use directIO,
so you are not a victim of some page cache flushing...) - there should
not be any useless read.
On 23/10/19 15:05, Zdenek Kabelac wrote:
Yep - we are recommending to disable zeroing as soon as chunksize >512K.
But for 'security' reasons the option is left up to users to select what
fits their needs in the best way - there is no 'one solution fits them
all' in this case.
Sure, but again: if
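(Zeroing can be turned off at pool creation time or later - names and
sizes are examples:)
lvcreate --type thin-pool -L 1T --chunksize 1m --zero n -n pool vg
lvchange --zero n vg/pool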
On 23/10/19 14:59, Zdenek Kabelac wrote:
On 23. 10. 19 at 13:08, Gionatan Danti wrote:
Talking about thin snapshot, an obvious performance optimization which
seems to not be implemented is to skip reading source data when
overwriting in larger-than-chunksize blocks.
Hi
There is no such
On 23/10/19 12:46, Zdenek Kabelac wrote:
Just a few 'comments' - it's not really comparable - the efficiency of
thin-pool metadata outperforms old snapshots in a BIG way (there is no
point to talk about snapshots that take just a couple of MiB)
Yes, this matches my experience.
There is also BIG dif
On 23-10-2019 00:53, Stuart D. Gathman wrote:
If you can find all the leaf nodes belonging to the root (in my btree
database they are marked with the root id and can be found by
sequential
scan of the volume), then reconstructing the btree data is
straightforward - even in place.
I remembe
Hi,
On 22-10-2019 18:15, Stuart D. Gathman wrote:
"Old" snapshots are exactly as efficient as thin when there is exactly
one. They only get inefficient with multiple snapshots. On the other
hand, thin volumes are as inefficient as an old LV with one snapshot.
An old LV is as efficient, and
Hi all,
I have a (virtual) block device which does not expose any io scheduler:
[root@localhost block]# cat /sys/block/zd0/queue/scheduler
none
I created an lvm volume on top of that block device with:
[root@localhost ~]# pvcreate /dev/zd0
Physical volume "/dev/zd0" successfully created.
[roo
On 23-08-2019 14:47, Zdenek Kabelac wrote:
Ok - a serious disk error might lead to eventually irreparable metadata
content - since if you lose some root b-tree node sequence it might be
really hard
to get something sensible (it's the reason why the metadata should be
located
on some 'mirror
On 31-07-2019 12:16, Zdenek Kabelac wrote:
When it appears to work on a system with a single disk, that really
doesn't make it 'clearly working' - we are providing a solution for
heavily loaded servers based on thousands of disks as well, so there is
not much wish to 'hack-in' an occasionally working fix.
N
On 13-06-2019 18:05, Ilia Zykov wrote:
Hello.
Tell me please, how can I get the maximum address used by a virtual
disk
(disk created with -V VirtualSize). I have several large virtual disks,
but they use only a small part at the beginning of the disk. For
example:
# lvs
LV VG
On 03-06-2019 15:23, Joe Thornber wrote:
On Fri, May 31, 2019 at 03:13:41PM +0200, Gionatan Danti wrote:
- does standard lvmthin support something similar? If not, how do you
see a
zero coalesce/compression/trim/whatever feature?
There isn't such a feature as yet.
Ok, so th
Hi all,
doing some tests on a 4-bay, entry-level NAS/SAN system, I discovered
it is entirely based on lvm thin volumes.
On configuring what it calls "thick volumes" it creates a new thin
logical volume and pre-allocates all space inside the new volume.
What surprised me is the speed at which
On 13-05-2019 10:26, Zdenek Kabelac wrote:
Hi
There is no technical problem to enable caching of a cached volume (aka
convert the cache_cdata LV into another 'cached' volume).
And as long as there are not errors anywhere - it works.
Difficulty comes with solving error cases - and that's the main re
On 15-12-2018 18:59, Giuseppe Vacanti wrote:
- pvscan does not report /dev/sdc
- pvdisplay does not seem to know about this PV
pvdisplay /dev/sdc
Failed to find physical volume "/dev/sdc"
Can you show the output of "lsblk" and "pvscan -vvv" ?
Thanks.
On 30/11/2018 10:52, Zdenek Kabelac wrote:
Hi
The name of the i/o layer bcache is only internal to lvm2 code for caching
reads from disks during disk processing - the name comes from usage
of bTree and caching - thus the name bcache.
It's not a dm target - so nothing you could use for LVs.
And ha
Hi list,
in BZ 1643651 I read:
"In 7.6 (and 8.0), lvm began using a new i/o layer (bcache)
to read and write data blocks."
Last time I checked, bcache was a completely different caching layer,
unrelated from LVM. The above quote, instead, implies that bcache is now
actively used by LVM.
Am I
On 19-10-2018 15:08, Zdenek Kabelac wrote:
Hi
It's rather that different workloads benefit from different
caching approaches.
If your system is heavy on writes - dm-writecache is what you want,
if you mostly read - dm-cache will win.
That's why there is dmstats to also help identi
On 19/10/2018 12:58, Zdenek Kabelac wrote:
Hi
Writecache simply doesn't care about caching your reads at all.
Your RAM with its page caching mechanism keeps read data as long as
there is free RAM for this - the less RAM that goes to the page cache,
the fewer read operations remain cached.
Hi, does it m
On 19/10/2018 11:12, Zdenek Kabelac wrote:
And final note - there is upcoming support for accelerating writes with
new dm-writecache target.
Hi, shouldn't that already be possible with the current dm-cache and
writeback caching?
Thanks.
On 15-07-2018 21:47, Zdenek Kabelac wrote:
Hi Zdenek,
Hi
Try to open BZ like i.e. this one:
https://bugzilla.redhat.com/show_bug.cgi?id=1532071
this is quite scary, especially considering no updates on the ticket in
recent months. How did the OP solve the issue?
Add all possible detai
On 25-06-2018 19:20, Ryan Launchbury wrote:
Hi Gionatan,
The system with the issue is with writeback cache mode enabled.
Best regards,
Ryan
Ah, I was under the impression that it was a writethough cache.
Sorry for the noise.
On 24-06-2018 21:18, Ryan Launchbury wrote:
In testing, forcibly removing the cache, via editing the LVM config
file has caused extensive XFS filesystem corruption, even when backing
up the metadata first and restoring after the cache device is missing.
Any advice on how to safely uncache the
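(For reference, the supported ways to detach a cache, rather than
editing metadata by hand - LV names are examples, and flushing will of
course fail if the cache device itself is already gone:)
lvconvert --splitcache vg/cachedlv    (flush dirty blocks, keep the cache-pool LV)
lvconvert --uncache vg/cachedlv       (flush and delete the cache-pool LV)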
On 22-06-2018 22:13, Zdenek Kabelac wrote:
Addressing is internally limited to use lower amount of bits.
Usage of memory resources, efficiency.
ATM we do not recommend to use a cache with more than 1.000.000 chunks
for better efficiency reasons although on bigger machines bigger
amount of ch
On 22-06-2018 22:07, Zdenek Kabelac wrote:
When the cache experiences a write error - it will become invalidated
and will need to be dropped - but this thing is not automated ATM - so
admin work is needed to handle this task.
So, if a writethrough cache experiences write errors but the
admin
On 20-06-2018 12:15, Zdenek Kabelac wrote:
Hi
Aren't there any kernel write errors in your 'dmesg'?
The LV becomes fragile if the devices associated with the cache are having HW
issues (disk read/write errors)
Zdenek
Is that true even when using a writethrough cache mode?
Hi list,
I wonder if a method exists to have a >16 GB thin metadata volume.
When using a 64 KB chunksize, a maximum of ~16 TB can be addressed in a
single thin pool. The obvious solution is to increase the chunk size, as
128 KB chunks are good for over 30 TB, and so on. However, increasing
chu
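(A sketch of the usual workaround - picking the chunk size and maximum
metadata size up front so the ~16 GiB ceiling is never hit; sizes are
examples:)
lvcreate --type thin-pool -L 30T --chunksize 128k --poolmetadatasize 16G -n pool vg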
On 27/03/2018 12:58, Gionatan Danti wrote:
Mmm no, I am caring for the couple MBs themselves. I was concerned about
the possibility to get a full metadata device by writing far less data
than expected. But I now get the point.
Sorry, I really meant "I am NOT caring for the coupl
On 27/03/2018 12:39, Zdenek Kabelac wrote:
Hi
I forgot to mention there is a "thin_ls" tool (it comes with the
device-mapper-persistent-data package (with thin_check) - for those who
want to know precise amount of allocation and what amount of blocks is
owned exclusively by a single thinLV and wh
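(For reference, an invocation along these lines - the field names are
taken from my reading of the thin_ls man page, the _tmeta path is an
example, and -m uses a metadata snapshot so the pool can stay active:)
thin_ls -m --format "DEV,MAPPED_BLOCKS,EXCLUSIVE_BLOCKS,SHARED_BLOCKS" /dev/mapper/vg-pool_tmeta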
On 27/03/2018 12:18, Zdenek Kabelac wrote:
The tool for size estimation gives a 'rough' first guess/first choice
number.
The metadata usage is based on real-world data manipulation - so while
it's relatively easy to 'cup' a single thin LV metadata usage - once
there is a lot of sharing bet
On 27/03/2018 10:30, Zdenek Kabelac wrote:
Hi
Well just at first look - 116MB of metadata for 7.21TB is a *VERY*
small size. I'm not sure what the data 'chunk-size' is - but you will
need to extend the pool's metadata considerably sooner or later - I'd
suggest at least 2-4GB for this data si
Hi all,
I can't wrap my head around the following reported data vs metadata usage
before/after a snapshot deletion.
System is an updated CentOS 7.4 x64
BEFORE SNAP DEL:
[root@ ~]# lvs
LV  VG  Attr  LSize  Pool  Origin  Data%  Meta%  Move  Log  Cpy%Sync  Convert
000-Thi
On 05/03/2018 11:18, Zdenek Kabelac wrote:
Yes - it has been updated/improved/fixed - and I've already given you a
link where you can configure the behavior of XFS when i.e. device
reports ENOSPC to the filesystem.
Sure - I already studied it months ago during my testing. I simply was
under
On 04-03-2018 21:53, Zdenek Kabelac wrote:
On the other hand all common filesystems in Linux were always written
to work on a device where the space is simply always there. So all
core algorithms simply never accounted for something like
'thin-provisioning' - this is almost 'fine' since thin-pr