On Fri, Mar 19, 2010 at 12:38 PM, Rob slewb...@yahoo.com wrote:
Can a ZFS send stream become corrupt when piped between two hosts across a
WAN link using 'ssh'?
Unless the end computers are bad (memory problems, etc.), the
answer should be no. ssh has its own error detection method, and
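Beyond ssh's built-in integrity checking, the stream itself can be verified end to end. A minimal sketch, assuming hypothetical pool and host names (tank/fs, host.uk) and bash on both ends:

```shell
# Checksum the stream on both sides of the WAN link; ssh's MAC already
# detects in-flight tampering, but this also catches corruption
# introduced by a bad end host before or after the transfer.
# Requires bash on both hosts for the >(...) process substitution.
zfs send tank/fs@now \
  | tee >(sha256sum > /tmp/send.sha256) \
  | ssh host.uk "bash -c 'tee >(sha256sum > /tmp/recv.sha256) | zfs receive tank/bar'"
# Compare the local /tmp/send.sha256 with the remote /tmp/recv.sha256;
# a mismatch means the stream bytes changed on one of the end hosts.
```

Note also that the send stream carries its own embedded checksums, so a corrupted stream should normally fail on the receiving side rather than silently import bad data.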
Greetings
I would like to get your recommendation on how to set up a new pool.
I have 4 new 1.5TB disks reserved for a new zpool.
I planned to grow/replace the existing small 4-disk (raidz) setup with a new
bigger one.
As the new pool will be bigger and will have more personally important data to be
stored long
Ahhh, this has been...interesting...some real personalities involved in this
discussion. :p The following is long-ish but I thought a re-cap was in order.
I'm sure we'll never finish this discussion, but I want to at least have a new
plateau or base from which to consider these questions.
I've
A pool with a 4-wide raidz2 is a completely nonsensical idea. It has the
same amount of accessible storage as two striped mirrors. And would be
slower in terms of IOPS, and be harder to upgrade in the future (you'd need
to keep adding four drives for every expansion with raidz2 - with mirrors
you
On Fri, Mar 19, 2010 at 2:34 PM, taemun tae...@gmail.com wrote:
A pool with a 4-wide raidz2 is a completely nonsensical idea. It has the
same amount of accessible storage as two striped mirrors. And would be
slower in terms of IOPS, and be harder to upgrade in the future (you'd need
to keep
On Fri, Mar 19, 2010 at 06:34:50PM +1100, taemun wrote:
A pool with a 4-wide raidz2 is a completely nonsensical idea.
No, it's not - not completely.
It has the same amount of accessible storage as two striped mirrors. And
would be slower in terms of IOPS, and be harder to upgrade in the
Thanks for the comments.
So the possible choices are:
1) 2 2-way mirrors
2) 4-disk raidz2
BTW, can raidz have a spare? If so, is there one more possible choice:
3-disk raidz with 1 spare?
Here I prefer data availability, not performance.
And if I ever need to expand/change the setup, that is the time
I'm also a Mac user. I use Mozy instead of DropBox, but it sounds like
DropBox should get a place at the table. I'm about to download it in a few
minutes.
I'm right now re-cloning my internal HD due to some HFS+ weirdness. I
have to completely agree that ZFS would be a great addition to MacOS
On Fri, Mar 19, 2010 at 12:59:39AM -0700, homerun wrote:
Thanks for the comments.
So the possible choices are:
1) 2 2-way mirrors
2) 4-disk raidz2
BTW, can raidz have a spare? If so, is there one more possible choice:
3-disk raidz with 1 spare?
raidz2 is basically this, with a pre-silvered
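For concreteness, the three layouts under discussion could be created roughly as follows (device names c0t0d0..c0t3d0 are hypothetical placeholders):

```shell
# Option 1: two striped 2-way mirrors (~2 disks of usable space)
zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0

# Option 2: 4-disk raidz2 (also ~2 disks usable; any 2 disks may fail)
zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0

# Option 3: 3-disk raidz plus a hot spare (~2 disks usable;
# 1-disk fault tolerance until the spare finishes resilvering)
zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 spare c0t3d0
```

The point above is that option 2 behaves much like option 3 with the spare already silvered in, which is why some consider 4-disk raidz2 preferable to raidz plus a spare.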
Funny, I thought the same thing up until a couple of years ago when I
thought Apple should have bought Sun :-)
Regards,
Erik Ableson
+33.6.80.83.58.28
Sent from my iPhone
On 19 March 2010, at 09:41, Khyron khyron4...@gmail.com wrote:
Of course, I'm the only person I know who said
My rollback finished yesterday after about 7.5 days. It still wasn't ready
to receive the last snapshot, so I rm'ed all the files (took 14 hours) and then
issued the rollback command again, 2 minutes this time.
Ok, I now have many questions, some due to a couple of responses (which don't
SATA disks don't understand the prioritisation, so
Er, the point was exactly that there is no
discrimination, once the
request is handed to the disk.
So, do you say that SCSI drives do understand prioritisation (i.e. TCQ supports
the schedule from ZFS), while SATA/NCQ drives
Now, NDMP doesn't do you much good for a locally attached tape drive,
as Darren and Svein pointed out. However, provided the software which is
installed on this fictional server can talk to the tape in an
appropriate way,
then all you have to do is pipe zfs send into it. Right? What did I
Hello,
After being immersed in this list and other ZFS sites for the past few weeks I
am having some doubts about the zpool layout on my new server. It's not too
late to make a change so I thought I would ask for comments. My current plan is
to have 12 x 1.5 TB disks in what I would normally
On Fri, March 19, 2010 00:38, Rob wrote:
Can a ZFS send stream become corrupt when piped between two hosts across a
WAN link using 'ssh'?
For example a host in Australia sends a stream to a host in the UK as
follows:
# zfs send tank/f...@now | ssh host.uk zfs receive tank/bar
In general,
On Fri, March 19, 2010 02:28, homerun wrote:
Greetings
I would like to get your recommendation on how to set up a new pool.
I have 4 new 1.5TB disks reserved for a new zpool.
I planned to grow/replace the existing small 4-disk (raidz) setup with a new
bigger one.
As the new pool will be bigger and will
On Fri, 19 Mar 2010, David Dyer-Bennet wrote:
However, these legacy mechanisms aren't guaranteed to give you the
less-than-one-wrong-bit-in-10^15 level of accuracy people tend to want for
enterprise backups today (or am I off a couple of orders of magnitude
there?). They were defined when
Darren J Moffat darren.mof...@oracle.com wrote:
That assumes you are writing the 'zfs send' stream to a file or file
like media. In many cases people using 'zfs send' for their backup
strategy are writing it back out using 'zfs recv' into another
pool. In those cases the files
Mike Gerdts mger...@gmail.com wrote:
another server, where the data is immediately fed through zfs receive then
it's an entirely viable backup technique.
Richard Elling made an interesting observation that suggests that
storing a zfs send data stream on tape is a quite reasonable thing to
On 19/03/2010 14:57, joerg.schill...@fokus.fraunhofer.de wrote:
Darren J Moffatdarren.mof...@oracle.com wrote:
That assumes you are writing the 'zfs send' stream to a file or file
like media. In many cases people using 'zfs send' for their backup
strategy are writing it back out
You will get much better random IO with mirrors, and better reliability when a
disk fails with raidz2. Six sets of mirrors are fine for a pool. From what I
have read, a hot spare can be shared across pools. I think the correct term
would be load balanced mirrors, vs RAID 10.
What kind of
On Fri, 19 Mar 2010, Khyron wrote:
Getting better FireWire performance on OpenSolaris would be nice though.
Darwin drivers are open...hmmm.
OS-X is only (legally) used on Apple hardware. Has anyone considered
that since Firewire is important to Apple, they may have selected a
particular
One of the reasons I am investigating solaris for
this is sparse volumes and dedupe could really help
here. Currently we use direct attached storage on
the dom0s and allocate an LVM to the domU on
creation. Just like your example above, we have lots
of those 80G to start with please
Damon (and others)
For those wanting the ability to perform file backups/restores along with all
metadata, without resorting to third party applications, if you have a Sun
support contract, log a call asking that your organisation be added to the list
of users who want to see RFE #5004379
On Fri, March 19, 2010 09:49, Bob Friesenhahn wrote:
On Fri, 19 Mar 2010, David Dyer-Bennet wrote:
However, these legacy mechanisms aren't guaranteed to give you the
less-than-one-wrong-bit-in-10^15 level of accuracy people tend to want
for
enterprise backups today (or am I off a couple of
Darren J Moffat darr...@opensolaris.org wrote:
I'm curious, why isn't a 'zfs send' stream that is stored on a tape yet
the implication is that a tar archive stored on a tape is considered a
backup ?
You cannot get a single file out of the zfs send datastream.
ZFS system attributes (as
Hi all,
I'm trying to delete a zpool and when I do, I get this error:
# zpool destroy oradata_fs1
cannot open 'oradata_fs1': I/O error
#
The pools I have on this box look like this:
#zpool list
NAME         SIZE  USED  AVAIL  CAP  HEALTH  ALTROOT
oradata_fs1  532G  119K   532G   0%
On Fri, 19 Mar 2010, David Dyer-Bennet wrote:
I don't think of stream crypto as inherently including validity checking,
though in practice I suppose it would always be a good idea.
This is obviously a vital and necessary function of ssh in order to
defend against man in the middle attacks.
On 19/03/2010 16:11, joerg.schill...@fokus.fraunhofer.de wrote:
Darren J Moffatdarr...@opensolaris.org wrote:
I'm curious, why isn't a 'zfs send' stream that is stored on a tape yet
the implication is that a tar archive stored on a tape is considered a
backup ?
You cannot get a single file
On Fri, March 19, 2010 11:33, Darren J Moffat wrote:
On 19/03/2010 16:11, joerg.schill...@fokus.fraunhofer.de wrote:
Darren J Moffatdarr...@opensolaris.org wrote:
I'm curious, why isn't a 'zfs send' stream that is stored on a tape yet
the implication is that a tar archive stored on a tape
On 19/03/2010 17:19, David Dyer-Bennet wrote:
On Fri, March 19, 2010 11:33, Darren J Moffat wrote:
On 19/03/2010 16:11, joerg.schill...@fokus.fraunhofer.de wrote:
Darren J Moffatdarr...@opensolaris.org wrote:
I'm curious, why isn't a 'zfs send' stream that is stored on a tape yet
the
On Fri, March 19, 2010 12:25, Darren J Moffat wrote:
On 19/03/2010 17:19, David Dyer-Bennet wrote:
On Fri, March 19, 2010 11:33, Darren J Moffat wrote:
On 19/03/2010 16:11, joerg.schill...@fokus.fraunhofer.de wrote:
Darren J Moffatdarr...@opensolaris.org wrote:
I'm curious, why isn't a
I think I'm seeing an error in the output from zfs list with regards
to snapshot space utilization.
In the first list, there are 818M used by snapshots, but the snaps
listed aren't using anything close to that amount. If I destroy the
first snapshot, then the second one suddenly jumps in space
The way we do this here is:
zfs snapshot voln...@snapnow
# code to break on error and email not shown
zfs send -i voln...@snapbefore voln...@snapnow | pigz -p4 -1 > file
# code to break on error and email not shown
scp /dir/file u...@remote:/dir/file
# code to break on error and
On 03/20/10 09:28 AM, Richard Jahnel wrote:
The way we do this here is:
zfs snapshot voln...@snapnow
# code to break on error and email not shown
zfs send -i voln...@snapbefore voln...@snapnow | pigz -p4 -1 > file
# code to break on error and email not shown
scp /dir/file
no, but I'm slightly paranoid that way. ;)
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
bh == Brandon High bh...@freaks.com writes:
bh I think I'm seeing an error in the output from zfs list with
bh regards to snapshot space utilization.
no bug. You just need to think harder about it: the space used cannot
be neatly put into buckets next to each snapshot that add to the
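This accounting is easier to see with `zfs list -o space`, which breaks a dataset's usage into components. A short illustration (dataset name hypothetical):

```shell
# USEDSNAP is the total space that would be freed by destroying all
# snapshots. Individual snapshots' USED values need not sum to it: a
# snapshot's USED counts only blocks unique to that snapshot, so data
# referenced by two or more snapshots is charged to none of them
# individually, and destroying one snapshot can make the next one's
# USED jump as shared blocks become unique to it.
zfs list -o space tank/fs
# Columns: NAME  AVAIL  USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
```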
On Fri, Mar 19, 2010 at 5:32 AM, Chris Dunbar - Earthside, LLC
cdun...@earthside.net wrote:
if I went with two? Finally, would I be better off with raidz2 or something
else instead of the striped mirrored sets? Performance and fault tolerance
are my highest priorities.
Performance and fault
On 19 Mar 2010, at 15:30, Bob Friesenhahn wrote:
On Fri, 19 Mar 2010, Khyron wrote:
Getting better FireWire performance on OpenSolaris would be nice though.
Darwin drivers are open...hmmm.
OS-X is only (legally) used on Apple hardware. Has anyone considered that
since Firewire is
Chris Dunbar - Earthside, LLC wrote:
Hello,
After being immersed in this list and other ZFS sites for the past few weeks I am having
some doubts about the zpool layout on my new server. It's not too late to make a change
so I thought I would ask for comments. My current plan is to have 12 x
The point I think Bob was making is that FireWire is an Apple technology, so
they have a vested interest in making sure it works well on their systems and
with their OS. They could even have a specific chipset that they exclusively
use in their systems, although I don't see why others couldn't
Responses inline...
On Tue, Mar 16, 2010 at 07:35, Robin Axelsson
gu99r...@student.chalmers.se wrote:
I've been informed that newer versions of ZFS support the usage of hot
spares, which denotes drives that are not in use but available for
resynchronization/resilvering should one of the
Most discussions I have seen about RAID 5/6 and why it stops working seem to
base their conclusions solely on single drive characteristics and statistics.
It seems to me there is a missing component in the discussion of drive
failures in the real world context of a system that lives in an
12 disks in mirrored pairs is a small configuration. The smaller sets
you refer to might be the number of disks in a raidz/raidz2/raidz3
top level vdev.
You say performance is one of your top priorities but what is the
workload ? Mostly read ? Mostly write ? Random ? Sequential ?
See
On 19 March 2010, at 17:11, Joerg Schilling wrote:
I'm curious, why isn't a 'zfs send' stream that is stored on a tape yet
the implication is that a tar archive stored on a tape is considered a
backup ?
You cannot get a single file out of the zfs send datastream.
zfs send is a block-level
Erik,
I don't think there was any confusion about the block nature of zfs send
vs. the file nature of star. I think what this discussion is coming down to is
the best ways to utilize zfs send as a backup, since (as Darren Moffat has
noted) it supports all the ZFS objects and metadata.
I see 2
ZFS+CIFS even provides
Windows Volume Shadow Services so that Windows users can do this on
their own.
I'll need to look into that, when I get a moment. Not familiar with
Windows Volume Shadow Services, but having people at home able to do this
directly seems useful.
I'd like to spin
k == Khyron khyron4...@gmail.com writes:
k FireWire is an Apple technology, so they have a vested
k interest in making sure it works well [...] They could even
k have a specific chipset that they exclusively use in their
k systems,
yes, you keep repeating yourselves, but
ZFS+CIFS even provides
Windows Volume Shadow Services so that Windows users can do this on
their own.
I'll need to look into that, when I get a moment. Not familiar with
Windows Volume Shadow Services, but having people at home able to do this
directly seems useful.
Even in
I'll say it again: neither 'zfs send' nor (s)tar is an enterprise (or
even home) backup system on its own; one or both can be components of
the full solution.
I would be pretty comfortable with a solution thusly designed:
#1 A small number of external disks, zfs send onto the disks and
1. NDMP for putting zfs send streams on tape over the network. So
Tell me if I missed something here. I don't think I did. I think this
sounds like crazy talk.
I used NDMP up till November, when we replaced our NetApp with a Solaris Sun
box. In NDMP, to choose the source files, we had the
It would appear that the bus bandwidth is limited to about 10MB/sec
(~80Mbps) which is well below the theoretical 400Mbps that 1394 is
supposed to be able to handle. I know that these two disks can go
significantly higher since I was seeing 30MB/sec when they were used on
Macs previously in