Hello!
I'm thinking of using ZFS for backing up large binary data files (e.g. VMware
VMs, Oracle databases) and want to rsync them at regular intervals from other
systems to one central ZFS system with compression on.
I'd like to have historical versions and thus want to make a snapshot after each run.
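A minimal sketch of that workflow (hypothetical dataset and source names; the `rsync`/`zfs` commands are `echo`-prefixed so the sketch is safe to run without a real pool):

```shell
#!/bin/sh
# Hypothetical names: adjust FS and SRC to your environment.
FS="tank/backup"                  # assumed ZFS dataset with compression=on
SRC="backuphost:/var/lib/vmware/" # assumed rsync source of the big files
STAMP=$(date +%Y%m%d-%H%M%S)

# --inplace makes rsync rewrite only the changed regions of a large file
# instead of recreating it, so blocks that did not change stay shared
# with earlier snapshots (drop the leading echo for real use).
echo rsync -a --inplace "$SRC" "/$FS/"

# A snapshot after each sync preserves a historical version.
echo zfs snapshot "$FS@rsync-$STAMP"
```

Note that with rsync's default behavior (write a temporary copy, then rename) every run allocates entirely new blocks and each snapshot pins a full copy of the file; `--inplace` is what keeps the snapshot history space-efficient.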
So it is expected behavior on my Nexenta alpha 7 server for Sun's nfsd
to stop responding after 2 hours of running a bittorrent client over
nfs4 from a linux client, causing zfs snapshots to hang and requiring
a hard reboot to get the world back in order?
Thomas
There is no NFS over ZFS issue
Hi,
As part of a disk subsystem upgrade I am thinking of using ZFS but there are
two issues at present
1) The current filesystems are mounted as /hostname/mountpoint except for one
directory where the mount point is storage dir/storage application dir.
Is is possible to mount a ZFS
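For what it's worth, ZFS sets mountpoints through a per-dataset property rather than vfstab, so arbitrary paths like the ones above are possible. A sketch with hypothetical pool and dataset names (commands held in variables and `echo`ed so it runs without a pool):

```shell
# Hypothetical pool "tank"; each dataset carries its own mountpoint
# property and is mounted there automatically at boot/import.
CMD1="zfs create tank/myhost"
CMD2="zfs set mountpoint=/myhost/data tank/myhost"
# The one special-case directory can point anywhere else
# (placeholder path standing in for the storage application dir):
CMD3="zfs set mountpoint=/storage_dir/storage_app_dir tank/app"
echo "$CMD1"; echo "$CMD2"; echo "$CMD3"
```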
Couldn't wait for ZFS delegation, so I cobbled something together; see
attachment.
Nico
--
#!/bin/ksh
ARG0=$0            # as invoked
PROG=${0##*/}      # basename, for messages
OIFS=$IFS          # save IFS so it can be restored later
# grep -q rocks, but it lives in xpg4...
OPATH=$PATH        # save PATH likewise
PATH=/usr/xpg4/bin:/bin:/sbin
# Configuration (see usage message below)
#
# This is really based on how a
On Sat, Jun 23, 2007 at 12:18:05PM -0500, Nicolas Williams wrote:
Couldn't wait for ZFS delegation, so I cobbled something together; see
attachment.
I forgot to slap on the CDDL header...
#!/bin/ksh
#
# CDDL HEADER START
#
# The contents of this file are subject to the terms of the
# Common
On Sat, Jun 23, 2007 at 12:31:28PM -0500, Nicolas Williams wrote:
On Sat, Jun 23, 2007 at 12:18:05PM -0500, Nicolas Williams wrote:
Couldn't wait for ZFS delegation, so I cobbled something together; see
attachment.
I forgot to slap on the CDDL header...
And I forgot to add a -p option
Erik Trimble wrote:
roland wrote:
Hello!
I'm thinking of using ZFS for backing up large binary data files
(e.g. VMware VMs, Oracle databases) and want to rsync them at regular
intervals from other systems to one central ZFS system with compression
on.
I'd like to have historical
roland wrote:
Hello!
I'm thinking of using ZFS for backing up large binary data files (e.g. VMware
VMs, Oracle databases) and want to rsync them at regular intervals from other
systems to one central ZFS system with compression on.
I'd like to have historical versions and thus want to make
So, in your case, you get maximum
space efficiency: only the new blocks are stored, and the old
blocks are simply referenced.
So I assume that whenever some block is read from file A and written
unchanged to file B, ZFS recognizes this and just creates a new reference to
file A's block?
That
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]] On Behalf Of Thomas Garner
So it is expected behavior on my Nexenta alpha 7 server for Sun's nfsd
to stop responding after 2 hours of running a bittorrent client over
nfs4 from a linux client, causing zfs snapshots to hang and requiring
If I have one large datafile on ZFS, make a snapshot of the ZFS filesystem
holding it, and then overwrite that file with a newer version containing
slight differences, what is the real disk consumption on
the ZFS side?
If all the blocks are rewritten, then they're all new blocks as far as
ZFS is concerned
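One way to watch this empirically (hypothetical dataset name; commands `echo`-prefixed so the sketch runs without a pool): snapshot, overwrite the file, then look at how much space the snapshot alone still pins.

```shell
SNAP="tank/data@before"   # hypothetical dataset and snapshot name
echo zfs snapshot "$SNAP"
# ... overwrite the large file here ...
# USED on the snapshot line is space only the snapshot still references;
# if every block of the file was rewritten, USED approaches the old
# file's full size, while blocks rewritten in place and unchanged
# remain shared and cost nothing extra.
echo zfs list -t snapshot -o name,used,referenced
```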
Matthew Ahrens wrote:
Erik Trimble wrote:
Under ZFS, any equivalent to 'cp A B' takes up no extra space. The
metadata is updated so that B points to the blocks in A. Should
anyone begin writing to B, only the updated blocks are added on disk,
with the metadata for B now containing the proper
On 6/23/07, Erik Trimble [EMAIL PROTECTED] wrote:
Matthew Ahrens wrote:
Basically, the descriptions of Copy on Write. Or does this apply only
to Snapshots? My original understanding was that CoW applied whenever
you were making a duplicate of an existing file.
CoW happens all the time. If
Oliver Schinagl wrote:
So basically, what you are saying is that on FreeBSD there's no performance
issue, whereas on Solaris there can be (if write caches aren't enabled)?
Solaris plays it safe by default. You can, of course, override that safety.
FreeBSD plays it safe too. It's just that