Unless you shrink the vmdk and use a ZFS variant with SCSI UNMAP
support (I believe currently only Nexenta, but correct me if I am wrong), the
blocks will not be freed, will they?
Solaris 11.1 has ZFS with SCSI UNMAP support.
Freeing unused blocks works perfectly well with fstrim
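As a sketch of the guest-side step (mount point hypothetical):

    # discard unused blocks on a mounted filesystem; -v reports the amount trimmed
    fstrim -v /mnt/data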
On Fri, Sep 21, 2012 at 6:31 AM, andy thomas a...@time-domain.co.uk wrote:
I have a ZFS filesystem and create weekly snapshots over a period of 5 weeks
called week01, week02, week03, week04 and week05 respectively. My question
is: how do the snapshots relate to each other - does week03 contain
I asked what I thought was a simple question but most of the answers don't
have too much to do with the question.
Hehe, welcome to mailing lists ;).
What I'd
really like is an option (maybe it exists) in ZFS that, when a block fails
a checksum, tells me which file it affects.
It does exactly that.
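For reference, files affected by unrecoverable checksum errors are listed in
the pool status output (pool name hypothetical):

    # after a scrub, -v prints the pathnames of damaged files
    zpool status -v tank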
On Wed, Aug 29, 2012 at 8:58 PM, Timothy Coalson tsc...@mst.edu wrote:
As I understand it, the used space of a snapshot does not include anything
that is in more than one snapshot.
True. It shows the amount that would be freed if you destroyed the
snapshot right away. Data held onto by more
Have you not seen my answer?
http://mail.opensolaris.org/pipermail/zfs-discuss/2012-August/052170.html
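For the per-snapshot numbers (dataset name hypothetical):

    # USED is the space unique to each snapshot; blocks shared by
    # several snapshots are not charged to any single one of them
    zfs list -r -t snapshot -o name,used,refer tank/data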
Unfortunately, the Intel 520 does *not* power-protect its
on-board volatile cache (unlike the Intel 320/710 SSDs).
Intel has an eye-opening technology brief describing the
benefits of power-loss data protection at:
On Sat, Aug 4, 2012 at 12:00 AM, Burt Hailey bhai...@triunesystems.com wrote:
We do hourly snapshots. Two days ago I deleted 100GB of
data and did not see a corresponding increase in snapshot sizes. I’m new to
zfs and am reading the zfs admin handbook but I wanted to post this to get
some
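One thing worth checking, as a sketch (dataset name hypothetical): blocks
freed from the live filesystem are charged to the snapshots that still
reference them, which you can watch with:

    # space consumed by all snapshots of this dataset; should grow as
    # the live filesystem releases blocks the snapshots still hold
    zfs get usedbysnapshots tank/data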
2) in the mirror case the write speed is cut by half, and the read
speed is the same as a single disk. I'd expect about twice the
performance for both reading and writing, maybe a bit less, but
definitely more than measured.
I wouldn't expect mirrored read to be faster than single-disk read,
It is normal for reads from mirrors to be faster than for a single disk
because reads can be scheduled from either disk, with different I/Os being
handled in parallel.
That assumes that there *are* outstanding requests to be scheduled in
parallel, which would only happen with multiple readers
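A way to generate that parallelism for testing, as a sketch (file path and
parameters hypothetical; assumes fio is installed):

    # four concurrent random readers give the mirror a chance to
    # service different I/Os from each disk at the same time
    fio --name=mirror-read --filename=/tank/testfile --size=1g \
        --rw=randread --bs=4k --numjobs=4 --runtime=30 \
        --time_based --group_reporting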
Actually, a write to memory for a memory mapped file is more similar to
write(2). If two programs have the same file mapped then the effect on the
memory they share is instantaneous because it is the same physical memory.
A mmapped file becomes shared memory as soon as it is mapped at least
It really makes no sense at all to
have munmap(2) not imply msync(3C).
Why not? munmap(2) does basically the equivalent of write(2). In the
case of write, that is: a later read from the same location will see
the written data, unless another write happens in-between. If power
goes down following
when you say remove the device, I assume you mean simply make it unavailable
for import (I can't remove it from the vdev).
Yes, that's what I meant.
root@openindiana-01:/mnt# zpool import -d /dev/lofi
   pool: ZP-8T-RZ1-01
     id: 9952605666247778346
  state: FAULTED
 status: One or more
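For anyone reproducing the import from file-backed devices, a minimal sketch
(backing-file paths hypothetical):

    # attach the backing files as block devices, then point import at them
    lofiadm -a /mnt/disk0.img      # creates e.g. /dev/lofi/1
    lofiadm -a /mnt/disk1.img      # creates e.g. /dev/lofi/2
    zpool import -d /dev/lofi ZP-8T-RZ1-01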
Two questions from a newbie.
1/ What does REFER mean in zfs list?
The amount of data that is reachable from the file system root. It's
just what I would call the contents of the file system.
2/ How can I know the total size of all snapshots for a partition?
(OK I can add
Can I say
USED - REFER = snapshot size?
No. USED is the space that would be freed if you destroyed the
snapshot _right now_. This can change (and usually does) if you
destroy previous snapshots.
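The per-dataset totals are easier to see with the space view (dataset name
hypothetical):

    # USEDSNAP is the space consumed by all snapshots of the dataset
    zfs list -o space tank/data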
I saw one team revert from ZoL (CentOS 6) back to ext on some backup servers
for an application project; the killer was
stat times (find running slowly, etc.). Perhaps more layer-2 cache (L2ARC)
could have solved the problem, but it was easier to deploy ext/lvm2.
But stat times (think directory
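A crude way to compare metadata performance between the two setups, as a
sketch (path hypothetical; GNU find assumed):

    # -ls forces a stat() of every file; run twice to compare
    # cold-cache vs. warm-cache metadata performance
    time find /backup -type f -ls > /dev/null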
After having read this mailing list for a little while, I get the
impression that there are at least some people who regularly
experience on-disk corruption that ZFS should be able to report and
handle. I’ve been running a raidz1 on three 1TB consumer disks for
approx. 2 years now (about 90%
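For reference, the usual way to make ZFS surface such corruption (pool name
hypothetical):

    # read and verify every block in the pool, then check the error counters
    zpool scrub tank
    zpool status -v tank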
The issue is definitely not specific to ZFS. For example, the whole OS
depends on reliable memory content in order to function. Likewise, no one
likes it if characters mysteriously change in their word processing
documents.
I don’t care too much if a single document gets corrupted – there’ll
Inspired by the paper End-to-end Data Integrity for File Systems: A
ZFS Case Study [1], I've been wondering whether it is possible to devise a way
in which a minimal in-memory data corruption could cause massive data
loss. I could imagine a scenario where an entire directory branch
drops off the tree