I've seen similar error messages from a script I've written, as well. Mine
does create a lock file and won't run if a `zfs send` is already in progress.
My only guess is that the second (or third, or...) filesystem starts sending to
the receiving host before the latter has fully finished the
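For what it's worth, a minimal sketch of that kind of lock, assuming a
mkdir-based lock; the paths, dataset and host names here are made up:
#!/bin/sh
# Sketch only: serialize replication runs so two zfs sends never overlap.
LOCKDIR=/var/run/zfs-replicate.lock            # example path
if ! mkdir "$LOCKDIR" 2>/dev/null; then
    echo "previous zfs send still running, exiting" >&2
    exit 1
fi
trap 'rmdir "$LOCKDIR"' EXIT
zfs send -i tank/home@prev tank/home@curr | ssh mirrorhost zfs receive -F tank/home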
On 03/10/12 02:48 AM, Cameron Hanover wrote:
On Mar 6, 2012, at 8:26 AM, Carsten John wrote:
Hello everybody,
I set up a script to replicate all zfs filesystems (some 300 user home directories in
this case) within a given pool to a mirror machine. The basic idea is to send
the snapshots
Hello everybody,
I set up a script to replicate all zfs filesystems (some 300 user home
directories in this case) within a given pool to a mirror machine. The basic
idea is to send the snapshots incremental if the corresponding snapshot exists
on the remote side or send a complete snapshot if
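Roughly, the per-filesystem decision could look like this sketch (dataset and
host names are invented, error handling omitted):
fs=tank/home/user1
snap=backup-$(date +%Y%m%d%H%M)
zfs snapshot "$fs@$snap"
# newest snapshot of this filesystem already present on the mirror, if any
prev=$(ssh mirror zfs list -H -o name -t snapshot -s creation -r "$fs" | tail -1 | cut -d@ -f2)
if [ -n "$prev" ] && zfs list "$fs@$prev" >/dev/null 2>&1; then
    # a common snapshot exists on both sides: send only the delta
    zfs send -i "$fs@$prev" "$fs@$snap" | ssh mirror zfs receive -F "$fs"
else
    # nothing in common: send the full stream
    zfs send "$fs@$snap" | ssh mirror zfs receive -F "$fs"
fi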
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Carsten John
I set up a script to replicate all zfs filesystems (some 300 user home
directories in this case) within a given pool to a mirror machine. The
basic
idea is to send the snapshots
On Tue, Mar 6, 2012 at 10:19 AM, Edward Ned Harvey
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Carsten John
snip
cannot receive new filesystem stream: dataset is busy or
cannot
I think altering the amount of copies method would work best for us. The
hold feature could also work, but seems like it might be more complicated
as there will be a large number of snapshots in between the two we are
sending.
I am going to try and implement this keep change and see if it does the
2011-11-05 2:12, HUGE | David Stahl wrote:
Our problem is that we need to use the -R to snapshot and send all
the child zvols, yet since we have a lot of data (3.5 TB), the hourly
snapshots are cleaned up on the sending side, which breaks the script while it
is running.
In recent OpenSolaris and
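If the hold route is tried, it boils down to something like this (snapshot
names are made up; needs a zfs version with user holds):
# pin the snapshots the transfer depends on so rotation cannot destroy them
zfs hold -r repl tank@hourly-0500
zfs hold -r repl tank@hourly-0600
zfs send -R -i tank@hourly-0500 tank@hourly-0600 | ssh backuphost zfs receive -d backuppool
# release once the receive has completed, so normal cleanup can proceed
zfs release -r repl tank@hourly-0500
zfs release -r repl tank@hourly-0600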
Hi,
I am having some problems with architecting a zfs snapshot replication
scheme that would suit the needs of my company.
Presently we do hour/daily/weekly snapshots of our file server. This file
system is organized in parent/child/child type zvols, so think
pool/zvol1/zvol2/zvol3,
The only way you will know if decrypting and decompressing causes a
problem in that case is if you try it on your systems. I seriously
doubt it will be unless the system is already heavily CPU bound and
your
backup window is already very tight.
That is true.
My understanding of the
On 07/27/11 10:24, Fred Liu wrote:
The alternative is to have the node in your NDMP network that does the
writing to the tape to do the compression and encryption of the data
stream before putting it on the tape.
I see. T1C is a monster to have if possible ;-).
And doing the job on NDMP
On Tue, Jul 26, 2011 at 03:28:10AM -0700, Fred Liu wrote:
The ZFS send stream is at the DMU layer; at this layer the data is
uncompressed and decrypted - i.e. exactly how the application wants it.
Even the data compressed/encrypted by ZFS will be decrypted? If it is true,
will it be
On 07/27/11 12:51, Pawel Jakub Dawidek wrote:
On Tue, Jul 26, 2011 at 03:28:10AM -0700, Fred Liu wrote:
The ZFS send stream is at the DMU layer; at this layer the data is
uncompressed and decrypted - i.e. exactly how the application wants it.
Even the data compressed/encrypted by ZFS will be
Does anyone know if it's OK to do zfs send/receive between zpools with
different ashift values?
--
Andrew Gabriel
On 07/26/11 10:14, Andrew Gabriel wrote:
Does anyone know if it's OK to do zfs send/receive between zpools with
different ashift values?
The ZFS send stream is at the DMU layer; at this layer the data is
uncompressed and decrypted - i.e. exactly how the application wants it.
The ashift is a vdev
The ZFS send stream is at the DMU layer; at this layer the data is
uncompressed and decrypted - i.e. exactly how the application wants it.
Even the data compressed/encrypted by ZFS will be decrypted? If that is true,
will there be any CPU overhead?
And ZFS send/receive tunneled by ssh becomes the
On 07/26/11 11:28, Fred Liu wrote:
The ZFS send stream is at the DMU layer; at this layer the data is
uncompressed and decrypted - i.e. exactly how the application wants it.
Even the data compressed/encrypted by ZFS will be decrypted?
Yes, which is exactly what I said.
All data as seen by the
Yes, which is exactly what I said.
All data as seen by the DMU is decrypted and decompressed; the DMU layer
is what the ZPL layer is built on top of, so it has to be that way.
Understand. Thank you. ;-)
There is always some overhead for doing a decryption and decompression,
the
On 26-07-11 12:56, Fred Liu wrote:
Any alternatives, if you don't mind? ;-)
VPNs, openssl piped over netcat, a password-protected zip file,... ;)
ssh would be the most practical, probably.
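For example, the netcat variant is roughly this; the exact flags differ
between netcat implementations, there is no encryption on the wire, and the
host, port and dataset names are made up:
# on the receiving host
nc -l 9090 | zfs receive -F backuppool/home
# on the sending host
zfs send tank/home@today | nc recvhost 9090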
--
No part of this copyright message may be reproduced, read or seen,
dead or alive or by any means,
On 07/26/11 11:56, Fred Liu wrote:
It depends on how big the delta is. It does matter if the data backup cannot
be finished within the required backup window when people use zfs send/receive
to do mass data backups.
The only way you will know if decrypting and decompressing causes a
On 04/ 6/11 07:14 PM, Brandon High wrote:
On Tue, Apr 5, 2011 at 12:38 PM, Joe Auty j...@netmusician.org
mailto:j...@netmusician.org wrote:
How about getting a little more crazy... What if this entire
server temporarily hosting this data was a VM guest running ZFS? I
don't
Thanks for all of this info guys, I'm still digesting it...
My source computer is running Solaris 10 ZFS version 15. Does this
mean that I'd be asking for trouble doing a zfs send back to this
machine from any other ZFS machine running a version > 15? I just
want to
On Thu, Apr 7, 2011 at 4:01 PM, Joe Auty j...@netmusician.org wrote:
My source computer is running Solaris 10 ZFS version 15. Does this mean that
I'd be asking for trouble doing a zfs send back to this machine from any
other ZFS machine running a version > 15? I just want to make sure I
Hello,
I'm debating an OS change and also thinking about my options
for data migration to my next server, whether it is on new or
the same hardware.
Migrating to a new machine I understand is a simple matter of
ZFS
On Tue, April 5, 2011 14:38, Joe Auty wrote:
Migrating to a new machine I understand is a simple matter of ZFS
send/receive, but reformatting the existing drives to host my existing
data is an area I'd like to learn a little more about. In the past I've
asked about this and was told that it
On Wed, April 6, 2011 10:51, David Dyer-Bennet wrote:
I'm a big fan of rsync, in cronjobs or wherever. What it won't do is
properly preserve ZFS ACLs, and ZFS snapshots, though. I moved from using
rsync to using zfs send/receive for my backup scheme at home, and had
considerable trouble
On Wed, Apr 6, 2011 at 10:51 AM, David Dyer-Bennet d...@dd-b.net wrote:
On Tue, April 5, 2011 14:38, Joe Auty wrote:
Also, more generally, is ZFS send/receive mature enough that when you do
data migrations you don't stress about this? Piece of cake? The
difficulty of this whole undertaking
On Tue, Apr 5, 2011 at 12:38 PM, Joe Auty j...@netmusician.org wrote:
How about getting a little more crazy... What if this entire server
temporarily hosting this data was a VM guest running ZFS? I don't foresee
this being a problem either, but with so
The only thing to watch out for is to
On Wed, Apr 6, 2011 at 1:14 PM, Brandon High bh...@freaks.com wrote:
The only thing to watch out for is to make sure that the receiving datasets
aren't a higher version than the zfs version that you'll be using on the
replacement server. Because you can't downgrade a dataset, using snv_151a
On 04/ 6/11 11:42 AM, Paul Kraus wrote:
On Wed, Apr 6, 2011 at 1:14 PM, Brandon Highbh...@freaks.com wrote:
The only thing to watch out for is to make sure that the receiving datasets
aren't a higher version than the zfs version that you'll be using on the
replacement server. Because you
On Wed, Apr 6, 2011 at 10:42 AM, Paul Kraus pk1...@gmail.com wrote:
I thought I saw that with zpool 10 (or was it 15) the zfs send
format had been committed and you *could* send/recv between different
versions of zpool/zfs. From the Solaris 10U9 (zpool 22) manpage for zfs:
There is still a
Hello,
I'm debating an OS change and also thinking about my options
for data migration to my next server, whether it is on new or
the same hardware.
Migrating to a new machine I understand is a simple matter of
ZFS
On Dec 9, 2010, at 3:31 PM, Moazam Raja wrote:
Hi all, from much of the documentation I've seen, the advice is to set
readonly=on on volumes on the receiving side during send/receive
operations. Is this still a requirement?
I've been trying the send/receive while NOT setting the receiver to
Hi all, from much of the documentation I've seen, the advice is to set
readonly=on on volumes on the receiving side during send/receive
operations. Is this still a requirement?
I've been trying the send/receive while NOT setting the receiver to
readonly and haven't seen any problems even though
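For reference, the conventional setup the documentation describes is just
(pool and dataset names invented):
zfs set readonly=on backuppool/home
zfs send -i tank/home@mon tank/home@tue | ssh backuphost zfs receive backuppool/home
Broadly, zfs receive itself writes below the POSIX layer, so readonly=on is
mostly there to stop stray local writes that would otherwise force the next
incremental receive to be rolled back with -F.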
On 12/10/10 12:31 PM, Moazam Raja wrote:
Hi all, from much of the documentation I've seen, the advice is to set
readonly=on on volumes on the receiving side during send/receive
operations. Is this still a requirement?
I've been trying the send/receive while NOT setting the receiver to
readonly
On Thu, Dec 9, 2010 at 5:31 PM, Ian Collins i...@ianshome.com wrote:
On 12/10/10 12:31 PM, Moazam Raja wrote:
So, is it OK to send/recv while having the receive volume write enabled?
A write can fail if a filesystem is unmounted for update.
True, but ZFS recv will not normally unmount a
On Wed, Dec 1, 2010 at 10:30 AM, Don Jackson don.jack...@gmail.com wrote:
# zfs send -R naspool/open...@xfer-11292010 | zfs receive -Fv npool/openbsd
receiving full stream of naspool/open...@xfer-11292010 into
npool/open...@xfer-11292010
received 23.5GB stream in 883 seconds (27.3MB/sec)
Try using the -d option to zfs receive. The ability to do zfs send -R ... |
zfs receive [without -d] was added relatively recently, and you may be
encountering a bug that is specific to receiving a send of a whole pool.
I just tried this, didn't work, new error:
# zfs send -R
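For anyone following along, roughly what -d changes, using dataset names
modeled on this thread:
# with -d the target names only the destination pool/prefix; the sent path
# (minus its pool name) is appended to it
zfs send -R naspool/openbsd@xfer-11292010 | zfs receive -d npool
# creates npool/openbsd and its children, whereas without -d the target names
# the destination dataset itself
zfs send -R naspool/openbsd@xfer-11292010 | zfs receive -Fv npool/openbsd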
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Don Jackson
# zfs send -R naspool/open...@xfer-11292010 | zfs recv -d
npool/openbsd
cannot receive new filesystem stream: out of space
The destination pool is much larger (by
Hi Don,
I'm no snapshot expert but I think you will have to remove the previous
receiving side snapshots, at least.
I created a file system hierarchy that includes a lower-level snapshot,
created a recursive snapshot of that hierarchy and sent it over to
a backup pool. Then, did the same steps
Hello,
I am attempting to move a bunch of zfs filesystems from one pool to another.
Mostly this is working fine, but one collection of file systems is causing me
problems, and repeated re-reading of man zfs and the ZFS Administrators Guide
is not helping. I would really appreciate some
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Don Jackson
# zfs send -R naspool/open...@xfer-11292010 | zfs receive -Fv
npool/openbsd
receiving full stream of naspool/open...@xfer-11292010 into
npool/open...@xfer-11292010 received
Here is some more info on my system:
This machine is running Solaris 10 U9, with all the patches as of 11/10/2010.
The source zpool I am attempting to transfer from was originally created on an
older OpenSolaris (specifically Nevada) release, I think it was 111.
I did a zpool export on that
Casper Dik wrote on 2010-09-26:
A incremental backup:
zfs snapshot -r exp...@backup-2010-07-13
zfs send -R -I exp...@backup-2010-07-12 exp...@backup-2010-07-13 |
zfs receive -v -u -d -F portable/export
Unfortunately zfs receive -F does not skip existing snapshots
The problem is not with how the replication is done. The locking happens
during the basic zfs operations.
We noticed:
on server2 (which is quite busy serving maildirs) we did
zfs create tank/newfs
rsync 4GB from someotherserver to /tank/newfs
zfs destroy tank/newfs
Destroying newfs took more
Sorry I'm not able to provide more insight but I thought some of the
concepts in this article might help you, as well as Mike's replication
script, also available on this page:
http://blog.laspina.ca/ubiquitous/provisioning_disaster_recovery_with_zfs
You also might want to look at InfraGeeks
Hello,
We are trying to set up a pair of ZFS file servers, each backing up data from
the other.
The simplified setup is as follows:
server1
tank/prod/web
tank/backup/mail
server2
tank/prod/mail
tank/backup/web
server1:tank/prod/web is a test setup with 10GB of data for 60 websites.
hi all
I'm using a custom snapshot scheme which snapshots every hour, day, week and
month, rotating 24h, 7d, 4w and so on. What would be the best way to zfs
send/receive these things? I'm a little confused about how this works for delta
updates...
--
Vennlige hilsener / Best regards
roy
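Broadly, the usual answer is to send the delta from the newest snapshot the
receiving side already has, whichever rotation class it belongs to; -I can
carry the intermediate snapshots across as well. A sketch with invented names:
# only the newest changes, given that @hourly-14 already exists on both sides
zfs send -i tank/data@hourly-14 tank/data@hourly-15 | ssh backuphost zfs receive backuppool/data
# or replicate every intermediate snapshot between two points in one stream
zfs send -I tank/data@daily-3 tank/data@hourly-15 | ssh backuphost zfs receive backuppool/data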
Suppose I have a fileserver, which may be zpool 10, 14, or 15. No
compression, no dedup.
Suppose I have a backupserver. I want to zfs send from the fileserver to
the backupserver, and I want the backupserver to receive and store
compressed and/or dedup'd. The backupserver can be a more
On 07/ 9/10 09:21 AM, Edward Ned Harvey wrote:
Suppose I have a fileserver, which may be zpool 10, 14, or 15. No
compression, no dedup.
Suppose I have a backupserver. I want to zfs send from the fileserver
to the backupserver, and I want the backupserver to receive and store
compressed
On Thu, Jul 8, 2010 at 2:21 PM, Edward Ned Harvey solar...@nedharvey.com
wrote:
Can I zfs send from the fileserver to the backupserver and expect it to
be
compressed and/or dedup'd upon receive? Does zfs send preserve the
properties of the originating filesystem? Will the zfs receive clobber
On 07/ 9/10 10:59 AM, Brandon High wrote:
Personally, I've started organizing datasets in a hierarchy, setting
the properties that I want for descendant datasets at a level where it
will apply to everything that I want to get it. So if you have your
source at tank/export/foo and your
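A sketch of that arrangement (pool and dataset names invented; dedup needs a
pool version that supports it): because a plain zfs send without -p or -R does
not carry local property settings, the received dataset inherits compression
and dedup from its new parent on the backup server.
# on the backup server: a parent dataset that sets the policy
zfs create -o compression=gzip -o dedup=on backuppool/backups
# on the file server: plain send; the received child inherits the parent's settings
zfs send tank/export/foo@snap1 | ssh backupserver zfs receive backuppool/backups/foo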
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Toyama Shunji
Certainly I feel it is difficult, but is it logically impossible to
write a filter program to do that, with reasonable memory use?
Good question. I don't know the answer.
If
My inclination, based on what I've read and heard from others, is to say
no.
But again, the best way to find out is to write the code. :\
On Wed, Jun 9, 2010 at 11:45, Edward Ned Harvey solar...@nedharvey.comwrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
Can I extract one or more specific files from zfs snapshot stream?
Without restoring full file system.
Like ufs based 'restore' tool.
On Mon, June 7, 2010 10:34, Toyama Shunji wrote:
Can I extract one or more specific files from zfs snapshot stream?
Without restoring full file system.
Like ufs based 'restore' tool.
No.
(Check the archives of zfs-discuss for more details. Send/recv has been
discussed at length many times.)
Hi Toyama,
You cannot restore an individual file from a snapshot stream like
the ufsrestore command. If you have snapshots stored on your
system, you might be able to access them from the .zfs/snapshot
directory. See below.
Thanks,
Cindy
% rm reallyimportantfile
% cd .zfs/snapshot
% cd
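Filled out a little, with made-up snapshot names, the same recovery looks
like this:
% rm reallyimportantfile
% cd .zfs/snapshot
% ls
daily-2010-06-06  daily-2010-06-07
% cp daily-2010-06-07/reallyimportantfile ~/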
Thank you David,
Thank you Cindy,
Certainly I feel it is difficult, but is it logically impossible to write a
filter program to do that, with reasonable memory use?
To answer the question you asked here...the answer is no. There have been
MANY discussions of this in the past. Here's the long thread I started back
in May about backup strategies for ZFS pools and file systems:
http://mail.opensolaris.org/pipermail/zfs-discuss/2010-March/038678.html
But
I am trying to duplicate a filesystem from one zpool to another zpool. I
don't care so much about snapshots on the destination side...I am more
trying to duplicate how RSYNC would copy a filesystem, and then only copy
incrementals from the source side to the destination side in subsequent runs
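A sketch of that workflow, with made-up pool names: the first run sends
everything, later runs only the changes since the previous common snapshot.
# first run: full copy
zfs snapshot srcpool/data@run1
zfs send srcpool/data@run1 | zfs receive dstpool/data
# later runs: only the delta since the last run
zfs snapshot srcpool/data@run2
zfs send -i srcpool/data@run1 srcpool/data@run2 | zfs receive dstpool/data
# keep the newest snapshot on both sides so the next increment has a base
zfs destroy srcpool/data@run1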
If you see the workload on the wire go through regular patterns of fast/slow
response, then there are some additional tricks that can be applied to increase
the overall throughput and smooth the jaggies. But that is fodder for another post...
Can you please elaborate on what can be done here as I
On Apr 1, 2010, at 12:43 AM, tomwaters wrote:
If you see the workload on the wire go through regular patterns of fast/slow
response, then there are some additional tricks that can be applied to increase
the overall throughput and smooth the jaggies. But that is fodder for another post...
On 25 March 2010, at 22:00, Bruno Sousa bso...@epinfante.com wrote:
Hi,
Indeed the 3 disks per vdev (raidz2) seems like a bad idea...but it's the
system I have now.
Regarding the performance...let's assume that a bonnie++ benchmark
could go to 200 MB/s in. The possibility of getting the same
Hi,
I think that in this case the CPU is not the bottleneck, since I'm not
using ssh.
However my 1 Gb network link probably is the bottleneck.
Bruno
On 26-3-2010 9:25, Erik Ableson wrote:
On 25 March 2010, at 22:00, Bruno Sousa bso...@epinfante.com wrote:
Hi,
Indeed the 3 disks per vdev
Hi,
The jumbo-frames in my case give me a boost of around 2 MB/s, so it's
not that much.
Now I will play with link aggregation and see how it goes, and of course
I'm expecting that incremental replication will be slower...but since the
amount of data would be much less, probably it will still
On Mar 26, 2010, at 2:34 AM, Bruno Sousa wrote:
Hi,
The jumbo-frames in my case give me a boost of around 2 MB/s, so it's not
that much.
That is about right. IIRC, the theoretical max is about 4% improvement, for
MTU of 8KB.
Now i will play with link aggregation and see how it goes,
Hi all,
The more reading and experimenting I do with ZFS, the more I like this
stack of technologies.
Since we all like to see real figures from real environments, I might as
well share some of my numbers.
The replication has been achieved with zfs send / zfs receive, piped
with mbuffer
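Roughly, that arrangement looks like this (buffer sizes, port, host and
dataset names are illustrative):
# receiving side: listen, buffer, then feed zfs receive
mbuffer -s 128k -m 1G -I 9090 | zfs receive -F backuppool/data
# sending side: buffer the stream and push it over the network
zfs send tank/data@snap1 | mbuffer -s 128k -m 1G -O recvhost:9090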
Thanks for the tip.. btw, is there any advantage to JBOD vs simple volumes?
Bruno
On 25-3-2010 21:08, Richard Jahnel wrote:
BTW, if you download the Solaris drivers for the 52445 from Adaptec, you can
use JBOD instead of simple volumes.
On 03/26/10 08:47 AM, Bruno Sousa wrote:
Hi all,
The more reading and experimenting I do with ZFS, the more I like this
stack of technologies.
Since we all like to see real figures from real environments, I might
as well share some of my numbers.
The replication has been achieved with the
Hi,
Indeed the 3 disks per vdev (raidz2) seems like a bad idea...but it's the
system I have now.
Regarding the performance...let's assume that a bonnie++ benchmark could
go to 200 MB/s in. The possibility of getting the same values (or near)
in a zfs send / zfs receive is just a matter of putting ,
On 03/26/10 10:00 AM, Bruno Sousa wrote:
[Boy top-posting sure mucks up threads!]
Hi,
Indeed the 3 disks per vdev (raidz2) seems like a bad idea...but it's the
system I have now.
Regarding the performance...let's assume that a bonnie++ benchmark
could go to 200 MB/s in. The possibility of
I am trying to coordinate properties and data between 2 file servers.
On file server 1 I have:
zfs get all zfs52/export/os/sles10sp2
NAME                       PROPERTY  VALUE       SOURCE
zfs52/export/os/sles10sp2  type      filesystem
Hi Bruno,
I've tried to reproduce this panic you are seeing. However, I had
difficulty following your procedure. See below:
On 02/08/10 15:37, Bruno Damour wrote:
On 02/ 8/10 06:38 PM, Lori Alt wrote:
Can you please send a complete list of the actions taken: The
commands you used to
On 02/ 8/10 06:38 PM, Lori Alt wrote:
Can you please send a complete list of the actions taken: The
commands you used to create the send stream, the commands used to
receive the stream. Also the output of `zfs list -t all` on both the
sending and receiving sides. If you were able to
Just an observation: panic occurs in avl_add when called from
find_ds_by_guid, which tries to add an existing snapshot id to the AVL tree
(http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/fs/zfs/dmu_send.c#find_ds_by_guid).
HTH,
Andrey
On Tue, Feb 9, 2010 at 1:37 AM, Bruno
Copied from opensolaris-discuss as this probably belongs here.
I kept on trying to migrate my pool with children (see previous threads) and
had the (bad) idea to try the -d option on the receive part.
The system reboots immediately.
Here is the log in /var/adm/messages
Feb 8 16:07:09 amber
Can you please send a complete list of the actions taken: The commands
you used to create the send stream, the commands used to receive the
stream. Also the output of `zfs list -t all` on both the sending and
receiving sides. If you were able to collect a core dump (it should be
in
Lori Alt wrote:
Can you please send a complete list of the actions taken: The commands
you used to create the send stream, the commands used to receive the
stream. Also the output of `zfs list -t all` on both the sending and
receiving sides. If you were able to collect a core dump (it
On 21/01/2010 11:55, Julian Regel wrote:
Until you try to pick one up and put it in a fire safe!
Then you backup to tape from x4540 whatever data you need.
In case of enterprise products you save on licensing here as you need
a one client license per x4540 but in fact can backup data from
uep,
This solution seems like the best and most efficient way of handling large
filesystems. My biggest question however is, when backing this up to tape, can
it be split across several tapes? I will be using Bacula to back this up. Will
I need to tar or star this filesystem before writing it
On Thu, Jan 21, 2010 at 11:28 AM, Richard Elling
richard.ell...@gmail.com wrote:
On Jan 21, 2010, at 3:55 AM, Julian Regel wrote:
Until you try to pick one up and put it in a fire safe!
Then you backup to tape from x4540 whatever data you need.
In case of enterprise products you save on
On Wed, Jan 20, 2010 at 08:11:27AM +1300, Ian Collins wrote:
True, but I wonder how viable its future is. One of my clients
requires 17 LTO4 tapes for a full backup, which cost more and take
up more space than the equivalent in removable hard drives.
What kind of removable hard drives are
On Thu, Jan 21, 2010 at 12:38:56AM +0100, Ragnar Sundblad wrote:
On 21 jan 2010, at 00.20, Al Hopper wrote:
I remember from about 5 years ago (before LTO-4 days) that streaming
tape drives would go to great lengths to ensure that the drive kept
streaming - because it took so much time to
A Darren Dunham wrote:
On Wed, Jan 20, 2010 at 08:11:27AM +1300, Ian Collins wrote:
True, but I wonder how viable its future is. One of my clients
requires 17 LTO4 tapes for a full backup, which cost more and take
up more space than the equivalent in removable hard drives.
What kind
On 20/01/2010 15:45, David Dyer-Bennet wrote:
On Wed, January 20, 2010 09:23, Robert Milkowski wrote:
Now you rsync all the data from your clients to a dedicated filesystem
per client, then create a snapshot.
Is there an rsync out there that can reliably replicate all file
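For concreteness, the per-client scheme being quoted might look roughly like
this (host, paths and pool names invented; note that plain rsync will not carry
ZFS/NFSv4 ACLs across, which is exactly the concern raised here):
# one filesystem per client on the backup server
zfs create -p backuppool/clients/web01
rsync -a --delete web01:/export/ /backuppool/clients/web01/
# freeze today's state as a snapshot
zfs snapshot backuppool/clients/web01@$(date +%Y-%m-%d)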
On 20/01/2010 19:20, Ian Collins wrote:
Julian Regel wrote:
It is actually not that easy.
Compare the cost of 2x x4540 with 1TB disks to an equivalent solution on
LTO.
Each x4540 could be configured as: 4x 11 disks in raidz-2 + 2x hot spare
+ 2x OS disks.
The four raidz2 groups form a single
Robert Milkowski wrote:
On 20/01/2010 19:20, Ian Collins wrote:
Julian Regel wrote:
It is actually not that easy.
Compare the cost of 2x x4540 with 1TB disks to an equivalent solution on
LTO.
Each x4540 could be configured as: 4x 11 disks in raidz-2 + 2x hot spare
+ 2x OS disks.
The four
Robert Milkowski wrote:
I think one should actually compare whole solutions - including servers,
FC infrastructure, tape drives, robots, software costs, rack space, ...
Servers like the x4540 are ideal for a zfs+rsync backup solution - very
compact, good $/GB ratio, enough CPU power for its
On 21/01/2010 09:07, Ian Collins wrote:
Robert Milkowski wrote:
On 20/01/2010 19:20, Ian Collins wrote:
Julian Regel wrote:
It is actually not that easy.
Compare the cost of 2x x4540 with 1TB disks to an equivalent solution
on LTO.
Each x4540 could be configured as: 4x 11 disks in raidz-2 + 2x
Until you try to pick one up and put it in a fire safe!
Then you backup to tape from x4540 whatever data you need.
In case of enterprise products you save on licensing here as you need a one
client license per x4540 but in fact can backup data from many clients which
are there.
Which brings
On Jan 21, 2010, at 3:55 AM, Julian Regel wrote:
Until you try to pick one up and put it in a fire safe!
Then you backup to tape from x4540 whatever data you need.
In case of enterprise products you save on licensing here as you need a one
client license per x4540 but in fact can backup
Julian Regel wrote:
Until you try to pick one up and put it in a fire safe!
Then you backup to tape from x4540 whatever data you need.
In case of enterprise products you save on licensing here as you need
a one client license per x4540 but in fact can backup data from many
clients which are
Allen Eastwood wrote:
On Jan 19, 2010, at 22:54 , Ian Collins wrote:
Allen Eastwood wrote:
On Jan 19, 2010, at 18:48 , Richard Elling wrote:
Many people use send/recv or AVS for disaster recovery on the inexpensive
side. Obviously, enterprise backup systems also provide DR
On 19 jan 2010, at 20.11, Ian Collins wrote:
Julian Regel wrote:
Based on what I've seen in other comments, you might be right.
Unfortunately, I don't feel comfortable backing up ZFS filesystems because
the tools aren't there to do it (built into the operating system or using
Richard Elling richard.ell...@gmail.com wrote:
ufsdump/restore was perfect in that regard. The lack of equivalent
functionality is a big problem for the situations where this functionality
is a business requirement.
How quickly we forget ufsdump's limitations :-). For example, it
Ian Collins i...@ianshome.com wrote:
The correct way to archive ACLs would be to put them into extended POSIX tar
attributes as star does.
See http://cdrecord.berlios.de/private/man/star/star.4.html for a format
documentation or have a look at ftp://ftp.berlios.de/pub/star/alpha,
Edward Ned Harvey sola...@nedharvey.com wrote:
Star implements this in a very effective way (by using libfind) that is even
faster than the find(1) implementation from Sun.
Even if I just find my filesystem, it will run for 7 hours. But zfs can
create my whole incremental snapshot in a
While I can appreciate that ZFS snapshots are very useful in being able to
recover files that users might have deleted, they do not do much to help when
the entire disk array experiences a crash/corruption or catches fire. Backing
up to a second array helps if a) the array is off-site and for
If you like to have a backup that allows you to access files, you need a
file-based backup, and I am sure that even a filesystem-level scan for recently
changed files will not be much faster than what you may achieve with e.g. star.
Note that ufsdump directly accesses the raw disk device and
Julian Regel jrmailgate-zfsdisc...@yahoo.co.uk wrote:
If you like to have a backup that allows you to access files, you need a
file-based backup, and I am sure that even a filesystem-level scan for recently
changed files will not be much faster than what you may achieve with e.g. star.
While I am sure that star is technically a fine utility, the problem is that
it is effectively an unsupported product.
From this viewpoint, you may call most of Solaris unsupported.
From the perspective of the business, the contract with Sun provides that
support.
If our customers find a