I still believe that a set of compressed incremental star archives gives you
more features.
The big difference there is that in order to create an incremental star archive,
star has to walk the whole filesystem or folder that's being backed up,
and stat every file to see which files have
Consider then, using a zpool-in-a-file as the file format, rather than
zfs send streams.
That's a pretty cool idea. Then you've still got the entire zfs volume
inside of a file, but you're able to mount and extract individual files if
you want, and you're able to pipe your zfs send directly to
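A rough sketch of that idea, assuming a Solaris-style shell; the pool, file and
dataset names here are made up for illustration:

  # Create a pool whose only vdev is an ordinary file
  mkfile 100g /backup/backup.img
  zpool create backuppool /backup/backup.img

  # Replicate datasets into it with send/receive, then export the pool
  zfs snapshot -r tank@today
  zfs send -R tank@today | zfs receive -d backuppool
  zpool export backuppool

  # /backup/backup.img is now a single file you can copy to tape or another
  # host, and later re-import with: zpool import -d /backup backuppool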
Personally, I like to start with a fresh full image once a month,
and then do daily incrementals for the rest of the month.
This doesn't buy you anything. ZFS isn't like traditional backups.
If you never send another full, then eventually the delta from the original
to the present will
Richard Elling wrote:
Tristan Ball wrote:
Also - Am I right in thinking that if a 4K write is made to a
filesystem block with a recordsize of 8K, then the original block
is read (assuming it's not in the ARC), before the new block is
written elsewhere (the copy, from copy on write)? This
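If the application's write size is known, one common mitigation is to match the
dataset recordsize to it; a minimal sketch with hypothetical names (recordsize
only affects newly written files):

  # Create (or retune) a dataset for a workload doing 4K I/O
  zfs create -o recordsize=4k tank/smallio
  zfs set recordsize=4k tank/smallio
  zfs get recordsize tank/smallio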
On 18 Jan 2010 at 09:24, Edward Ned Harvey wrote:
Personally, I like to start with a fresh full image once a month,
and then do daily incrementals for the rest of the month.
This doesn't buy you anything. ZFS isn't like traditional backups.
If you never send another full, then eventually
YMMV. At a recent LOSUG meeting we were told of a case where rsync was
faster than an incremental zfs send/recv. But I think that was for a
mail server with many tiny files (i.e. changed blocks are very easy to
find in files with very few blocks).
However, I don't see why further ZFS
On Mon, Jan 18, 2010 at 10:22 AM, Richard Elling
richard.ell...@gmail.com wrote:
On Jan 17, 2010, at 11:59 AM, Tristan Ball wrote:
Is there a way to check the recordsize of a given file, assuming that the
filesystem's recordsize was changed at some point?
I don't know of an easy way to do
Edward Ned Harvey wrote:
Personally, I like to start with a fresh full image once a month,
and then do daily incrementals for the rest of the month.
This doesn't buy you anything. ZFS isn't like traditional backups.
If you never send another full, then eventually the delta from
On Mon, Jan 18, 2010 at 3:59 AM, Phil Harman phil.har...@gmail.com wrote:
YMMV. At a recent LOSUG meeting we were told of a case where rsync was
faster than an incremental zfs send/recv. But I think that was for a mail
server with many tiny files (i.e. changed blocks are very easy to find in
On 17/01/2010 20:34, Bob Friesenhahn wrote:
On Mon, 18 Jan 2010, Tristan Ball wrote:
Is there a way to check the recordsize of a given file, assuming that
the filesystem's recordsize was changed at some point?
This would be problematic since a file may consist of different-size
records (at
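One (admittedly clumsy) way to see which block sizes a particular file actually
uses is zdb; a sketch, assuming hypothetical pool/file names and that the
object number matches what ls -i reports for the file:

  # Find the file's object (inode) number
  ls -i /tank/fs/somefile
  # Dump its block layout; the dblk field shows the data block size(s) in use
  zdb -ddddd tank/fs <object-number>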
On Mon, Jan 18, 2010 at 03:24:19AM -0500, Edward Ned Harvey wrote:
Unless I am mistaken, I believe the following is not possible:
On the source, create snapshot 1
Send snapshot 1 to destination
On the source, create snapshot 2
Send incremental, from 1 to 2 to the destination.
On the
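For reference, steps like those map to commands along these lines (pool,
dataset and file names are hypothetical):

  # Snapshot 1 and a full stream to the destination
  zfs snapshot tank/data@snap1
  zfs send tank/data@snap1 > /backup/data-snap1.zfs

  # Snapshot 2 and an incremental stream from snap1 to snap2
  zfs snapshot tank/data@snap2
  zfs send -i tank/data@snap1 tank/data@snap2 > /backup/data-snap1-snap2.zfs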
On 18/01/2010 08:59, Phil Harman wrote:
YMMV. At a recent LOSUG meeting we were told of a case where rsync was
faster than an incremental zfs send/recv. But I think that was for a
mail server with many tiny files (i.e. changed blocks are very easy to
find in files with very few blocks).
or you might do something like:
http://milek.blogspot.com/2009/12/my-presentation-at-losug.html
However, in your case, if all your clients are running ZFS-only filesystems,
then relying just on zfs send|recv might be a good idea.
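For that approach, a minimal replication sketch over ssh (host and dataset
names are hypothetical):

  # Incremental replication of the latest snapshot to a backup host
  zfs send -i tank/home@yesterday tank/home@today | \
      ssh backuphost zfs receive -F backup/home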
--
Robert Milkowski
http://milek.blogspot.com
zpool and zfs report different free space because zfs takes into account
an internal reservation of 32MB or 1/64 of the capacity of the pool,
whichever is bigger.
So on a 2TB hard disk, the reservation would be 32 gigabytes. Seems a bit
excessive to me...
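You can see the two views side by side (the pool name is hypothetical):

  zpool list tank   # raw pool capacity and free space
  zfs list tank     # space as the datasets see it, after the reservation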
On Jan 18, 2010, at 10:55, Jesus Cea wrote:
zpool and zfs report different free space because zfs takes into account
an internal reservation of 32MB or 1/64 of the capacity of the pool,
whichever is bigger.
So on a 2TB hard disk, the reservation would be 32 gigabytes. Seems a bit
excessive to
On 01/18/2010 05:11 PM, David Magda wrote:
On Jan 18, 2010, at 10:55, Jesus Cea wrote:
zpool and zfs report different free space because zfs takes into account
an internal reservation of 32MB or 1/64 of the capacity of the pool,
whichever is bigger.
Hi,
.. it's hard to beat the convenience of a backup file format, for
all sorts of reasons, including media handling, integration with other
services, and network convenience.
Yes.
Consider then, using a zpool-in-a-file as the file format, rather than
zfs send streams.
This is an
mg == Mike Gerdts mger...@gmail.com writes:
tt == Toby Thain t...@telegraphics.com.au writes:
tb == Thomas Burgess wonsl...@gmail.com writes:
mg Yet it is used in ZFS flash archives on Solaris 10 and is
mg slated for use in the successor to flash archives.
in FLAR, ``if a single
Ext2/3 uses 5% by default for root's usage; 8% under FreeBSD for FFS.
Solaris (10) uses a bit more nuance for its UFS:
That reservation is there to prevent users from exhausting disk space in such
a way that even root cannot log in and solve the problem.
No, the reservation in UFS/FFS is to keep the
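For comparison, those reservations are tunable on the other filesystems; a
sketch (the device names are hypothetical):

  # Linux ext2/3: show or change the reserved-blocks percentage (default 5%)
  tune2fs -l /dev/sda1 | grep -i reserved
  tune2fs -m 1 /dev/sda1

  # FreeBSD FFS / Solaris UFS: adjust minfree with tunefs (FreeBSD default 8%)
  tunefs -m 8 /dev/ada0p2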
On Jan 18, 2010, at 10:22 AM, Mr. T Doodle wrote:
I would like some opinions on what people are doing in regards to configuring
ZFS for root/boot drives:
1) If you have onboard RAID controllers, are you using them and then creating
the ZFS pool on top (mirrored in hardware)?
I let ZFS do the
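On Solaris the usual alternative is to skip the controller RAID and mirror the
root pool in ZFS itself; a rough sketch with hypothetical device names (x86,
GRUB boot):

  # Attach a second disk to the existing root pool to form a mirror
  zpool attach rpool c0t0d0s0 c0t1d0s0
  # Install the boot blocks on the new disk (SPARC uses installboot instead)
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0
  # Watch the resilver finish
  zpool status rpool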
On Jan 18, 2010, at 11:04 AM, Miles Nordin wrote:
...
Another problem is that the snv_112 man page says this:
The format of the stream is evolving. No backwards compatibility
is guaranteed. You may not be able to receive your streams on
future versions
On Jan 18, 2010, at 7:55 AM, Jesus Cea wrote:
zpool and zfs report different free space because zfs takes into account
an internal reservation of 32MB or 1/64 of the capacity of the pool,
whichever is bigger.
This space is also used for the ZIL.
So on a 2TB hard disk, the reservation would be 32
On Mon, Jan 18, 2010 at 07:34:51PM +0100, Lassi Tuura wrote:
Consider then, using a zpool-in-a-file as the file format, rather than
zfs send streams.
This is an interesting suggestion :-)
Did I understand you correctly that once a slice is written, zfs
won't rewrite it? In other words,
On Mon, Jan 18, 2010 at 3:49 PM, Richard Elling richard.ell...@gmail.com wrote:
On Jan 18, 2010, at 7:55 AM, Jesus Cea wrote:
zpool and zfs report different free space because zfs takes into account
an internal reservation of 32MB or 1/64 of the capacity of the pool,
whichever is bigger.
This
On 18/01/2010 18:28, Lassi Tuura wrote:
Hi,
Here is the big difference: for professional backups, people still typically
use tapes, although tapes have become expensive.
I still believe that a set of compressed incremental star archives gives you
more features.
Thanks for your
On Sun, Jan 17, 2010 at 8:14 PM, Richard Elling richard.ell...@gmail.com wrote:
On Jan 16, 2010, at 10:03 PM, Travis Tabbal wrote:
Hmm... got it working after a reboot. Odd that it had problems before
that. I was able to rename the pools and the system seems to be running well
now.
CD wrote:
On 01/18/2010 06:36 PM, Tom Haynes wrote:
CD wrote:
Greetings.
I've got two pools, but can only access one of them from my
Linux machine. Both pools have the same settings and ACLs.
Both pools have sharenfs=on. Also, every filesystem has
aclinherit=passthrough
NAME PROPERTY VALUE
On 2010-Jan-19 00:26:27 +0800, Jesus Cea j...@jcea.es wrote:
On 01/18/2010 05:11 PM, David Magda wrote:
Ext2/3 uses 5% by default for root's usage; 8% under FreeBSD for FFS.
Solaris (10) uses a bit more nuance for its UFS:
That reservation is there to prevent users from exhausting disk space in such a way
Tim Cook wrote:
On Mon, Jan 18, 2010 at 3:49 PM, Richard Elling
richard.ell...@gmail.com wrote:
On Jan 18, 2010, at 7:55 AM, Jesus Cea wrote:
zpool and zfs report different free space because zfs takes into
account
an internal reservation of
On Mon, Jan 18, 2010 at 01:38:16PM -0800, Richard Elling wrote:
The Solaris 10 10/09 zfs(1m) man page says:
The format of the stream is committed. You will be able
to receive your streams on future versions of ZFS.
I'm not sure when that hit snv, but obviously it was
On Mon, Jan 18, 2010 at 03:25:56PM -0800, Erik Trimble wrote:
Hopefully, once BP rewrite materializes (I know, I'm treating this
much too much as a Holy Grail, here to save us from all the ZFS
limitations, but really...), we can implement defragmentation which
will seriously reduce the amount
On Jan 18, 2010, at 3:25 PM, Erik Trimble wrote:
Given my (imperfect) understanding of the internals of ZFS, the non-ZIL
portions of the reserved space are there mostly to ensure that there is
sufficient (reasonably) contiguous space for doing COW. Hopefully, once BP
rewrite materializes
Thanks. Newegg shows quite a good customer rating for that drive: 70% rated it
five stars and 11% four stars, out of 240 ratings.
Seems like some people have complained about them sleeping, presumably to save
power, although others report they don't, so I'll need to look into that more.
Hi all,
I was wondering: when blocks are freed as part of the COW process, are the old
blocks put at the top or the bottom of the free-block list?
The question came about while looking at thin provisioning using ZFS on top of
dynamically expanding disk images (VDI). If the free blocks are put at the end
free
From the web page it looks like this is a card that goes into the computer
system. That's not very useful for enterprise applications, as they are going
to want to use an external array that can be used by a redundant pair of
servers.
I'm very interested in a cost-effective device that will
On Mon, Jan 18, 2010 at 8:48 PM, Charles Hedrick hedr...@rutgers.edu wrote:
From the web page it looks like this is a card that goes into the computer
system. That's not very useful for enterprise applications, as they are
going to want to use an external array that can be used by a redundant
On Mon, Jan 18, 2010 at 05:52:25PM +1300, Ian Collins wrote:
Is it the parent snapshot for a clone?
I'm almost certain it isn't. I haven't created any clones and none show
in zpool history.
What about snapshot holds? I don't know if (and doubt whether) these
are in S10, but since they
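If the ZFS version in question does support holds, they're easy to check for;
a sketch with hypothetical dataset/snapshot names:

  # List any user holds on a snapshot
  zfs holds tank/fs@snap
  # Release a hold (here the hypothetical tag 'keep') so it can be destroyed
  zfs release keep tank/fs@snap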
Richard Elling wrote:
On Jan 18, 2010, at 3:25 PM, Erik Trimble wrote:
Given my (imperfect) understanding of the internals of ZFS, the non-ZIL
portions of the reserved space are there mostly to ensure that there is
sufficient (reasonably) contiguous space for doing COW. Hopefully, once BP
Daniel Carosone wrote:
On Mon, Jan 18, 2010 at 05:52:25PM +1300, Ian Collins wrote:
Is it the parent snapshot for a clone?
I'm almost certain it isn't. I haven't created any clones and none show
in zpool history.
What about snapshot holds? I don't know if (and doubt
Daniel Carosone wrote:
On Mon, Jan 18, 2010 at 03:25:56PM -0800, Erik Trimble wrote:
Hopefully, once BP rewrite materializes (I know, I'm treating this
much too much as a Holy Grail, here to save us from all the ZFS
limitations, but really...), we can implement defragmentation which
will
On Tue, Jan 19, 2010 at 12:16 AM, Erik Trimble erik.trim...@sun.com wrote:
A poster in another forum mentioned that Seagate (and Hitachi, amongst
others) is now selling something labeled as NearLine SAS storage (e.g.
Seagate's NL35 series).
Is it me, or does this look like nothing more than
Tim Cook wrote:
On Tue, Jan 19, 2010 at 12:16 AM, Erik Trimble erik.trim...@sun.com wrote:
A poster in another forum mentioned that Seagate (and Hitachi,
amongst others) is now selling something labeled as NearLine SAS
storage (e.g. Seagate's NL35
On Tue, Jan 19, 2010 at 1:06 AM, Erik Trimble erik.trim...@sun.com wrote:
stupid question here: I understand the advantages of dual-porting a drive
with a FC interface, but for SAS, exactly what are the advantages other than
being able to read and write simultaneously (obviously, only from