Hi All,
Do the zpool/zfs commands write an EFI label on a device when we create a
zpool/ZFS filesystem on it? Is that true?
I formatted a device with a VTOC label and then created a ZFS file system on it.
Which label does the device have now, the old VTOC or the EFI one?
After creating the ZFS file syste
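For what it's worth, this is easy to check directly; a quick sketch (the disk
names c1t1d0/c1t2d0s0 and pool names below are placeholders):

    # Whole disk (no slice suffix): zpool relabels the disk with an EFI label.
    zpool create tank c1t1d0

    # Only a slice of a VTOC-labelled disk: the existing SMI/VTOC label is kept.
    zpool create tank2 c1t2d0s0

    # Either way, prtvtoc (or format -e, then 'verify') shows which label the
    # disk ended up with:
    prtvtoc /dev/rdsk/c1t1d0s0
    prtvtoc /dev/rdsk/c1t2d0s0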
> In general, your backup software should handle making
> incremental dumps, even from a split mirror. What are
> you using to write data to tape? Are you simply
> dumping the whole file system, rather than using
> standard backup software?
>
We are using Veritas Netbackup 5 MP4. It is performing
My two (everyman's) cents - could something like this be modeled after
MySQL replication or even something like DRBD (drbd.org) ? Seems like
possibly the same idea.
On 1/26/07, Jim Dunham <[EMAIL PROTECTED]> wrote:
Project Overview:
...
[EMAIL PROTECTED] said:
> The reality is that
> ZFS turns on the write cache when it owns the
> whole disk.
> _Independently_ of that,
> ZFS flushes the write cache when ZFS needs to ensure
> that data reaches stable storage.
>
> The point is that the flushes occur whether
In general, your backup software should handle making incremental dumps, even
from a split mirror. What are you using to write data to tape? Are you simply
dumping the whole file system, rather than using standard backup software?
ZFS snapshots use a pure copy-on-write model. If you have a block
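To make the copy-on-write point concrete (the dataset name is made up): a
snapshot is created instantly, and only begins to consume space as blocks in
the live file system are changed:

    zfs snapshot tank/data@before    # instantaneous, no blocks are copied
    zfs list -t snapshot             # the snapshot's USED column starts near zero
    # ...overwrite or delete files in tank/data...
    zfs list -t snapshot             # USED grows only by the blocks that changed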
The affected DIMM? Did you have memory errors before this?
The message you posted looked like ZFS encountered an error writing to the
drive (which could, admittedly, have been caused by bad memory).
> Often, the spare is up and running but for whatever reason you'll have a
> bad block on it and you'll die during the reconstruct.
Shouldn't SCSI/ATA block sparing handle this? Reconstruction should be purely
a matter of writing, so "bit rot" shouldn't be an issue; or are there cases I'm
not
Brilliant video, guys.
Totally agreed, great work.
Boy, would I like to see Peter Stormare in that video %)
-Artem.
For your reading pleasure:
http://blogs.sun.com/erickustarz/entry/damaged_files_and_zpool_status
eric
Constantin Gonzalez wrote:
> Hi Richard,
> Richard Elling wrote:
>> FYI,
>> here is an interesting blog on using ZFS with a dozen USB drives from
>> Constantin.
>> http://blogs.sun.com/solarium/entry/solaris_zfs_auf_12_usb
> thank you for spotting it :).
> We're working on translating the video (hope we get
Jonathan Edwards wrote:
> On Feb 2, 2007, at 15:35, Nicolas Williams wrote:
>> Unlike traditional journalling replication, a continuous ZFS send/recv
>> scheme could deal with resource constraints by taking a snapshot and
>> throttling replication until resources become available again.
> Replication throt
On Feb 2, 2007, at 15:35, Nicolas Williams wrote:
Unlike traditional journalling replication, a continuous ZFS send/recv
scheme could deal with resource constraints by taking a snapshot and
throttling replication until resources become available again.
Replication throttling would mean losing s
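Short of real integration, the closest thing today is a loop of incremental
sends; a minimal sketch (host, pool and snapshot names are invented, and there
is no error handling):

    # Ship only the delta since the previous replicated snapshot, then roll forward.
    zfs snapshot tank/data@repl-new
    zfs send -i tank/data@repl-prev tank/data@repl-new | \
        ssh remotehost zfs receive -F backup/data
    zfs destroy tank/data@repl-prev
    zfs rename tank/data@repl-new tank/data@repl-prev
    # "Throttling" here just means running this loop less often and falling
    # further behind, which is the trade-off being discussed above.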
Nicolas Williams wrote:
On Fri, Feb 02, 2007 at 03:17:17PM -0500, Torrey McMahon wrote:
Nicolas Williams wrote:
But a continuous zfs send/recv would be cool too. In fact, I think ZFS
tightly integrated with SNDR wouldn't be that much different from a
continuous zfs send/recv.
Ev
On Fri, Feb 02, 2007 at 03:17:17PM -0500, Torrey McMahon wrote:
> Nicolas Williams wrote:
> >But a continuous zfs send/recv would be cool too. In fact, I think ZFS
> >tightly integrated with SNDR wouldn't be that much different from a
> >continuous zfs send/recv.
>
> Even better with snapshots, a
Nicolas Williams wrote:
> On Fri, Jan 26, 2007 at 05:15:28PM -0700, Jason J. W. Williams wrote:
>> Could the replication engine eventually be integrated more tightly
>> with ZFS? That would be a slick alternative to send/recv.
> But a continuous zfs send/recv would be cool too. In fact, I think Z
Marion Hakanson wrote:
However, given the default behavior of ZFS (as of Solaris-10U3) is to
panic/halt when it encounters a corrupted block that it can't repair,
I'm re-thinking our options, weighing against the possibility of a
significant downtime caused by a single-block corruption.
Guess w
On Fri, Jan 26, 2007 at 05:15:28PM -0700, Jason J. W. Williams wrote:
> Could the replication engine eventually be integrated more tightly
> with ZFS? That would be a slick alternative to send/recv.
But a continuous zfs send/recv would be cool too. In fact, I think ZFS
tightly integrated with SNDR
Richard Elling wrote:
Good question. If you consider that mechanical wear out is what ultimately
causes many failure modes, then the argument can be made that a spun down
disk should last longer. The problem is that there are failure modes which
are triggered by a spin up. I've never seen fi
John Weekley wrote:
>Looks like bad memory. I removed the affected DIMM and haven't had any
>reboots in about 24hrs.
>
>
>
Give memtest86 a whirl on that system.
Ian
Dale Ghent wrote:
Yeah sure it "might" eat into STK profits, but one will still have to
go there for redundant controllers.
Repeat after me: There is no STK. There is only Sun. 8-)
Richard Elling wrote:
One of the benefits of ZFS is that not only is head synchronization not
needed, but also block offsets do not have to be the same. For example,
in a traditional mirror, block 1 on device 1 is paired with block 1 on
device 2. In ZFS, this 1:1 mapping is not required. I be
Jason J. W. Williams wrote:
Hi Jim,
Thank you very much for the heads up. Unfortunately, we need the
write-cache enabled for the application I was thinking of combining
this with. Sounds like SNDR and ZFS need some more soak time together
before you can use both to their full potential?
Looks like bad memory. I removed the affected DIMM and haven't had any reboots
in about 24hrs.
[EMAIL PROTECTED] wrote on 02/02/2007 11:16:32 AM:
> Hi all,
>
> Longtime reader, first time poster. Sorry for the lengthy intro
> and not really sure the title matches what I'm trying to get at... I
> am trying to find a solution where making use of a zfs filesystem
> can shorten our back
Hi all,
Longtime reader, first time poster. Sorry for the lengthy intro and not
really sure the title matches what I'm trying to get at... I am trying to find
a solution where making use of a zfs filesystem can shorten our backup window.
Currently, our backup solution takes data from ufs or
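One common way ZFS helps here (a sketch only; dataset names are invented, and
your backup software still has to pick up the stream or the snapshot
directory): snapshot the data instantly, back the snapshot up at leisure while
production keeps running, and keep the previous snapshot around so later runs
can be incremental:

    zfs snapshot tank/data@today
    # First run: a full, point-in-time-consistent dump of the snapshot.
    zfs send tank/data@today > /backup/data-full.zfs
    # Later runs: ship only what changed since the previous snapshot.
    zfs send -i tank/data@yesterday tank/data@today > /backup/data-incr.zfs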
> > Is there a performance hit for having what seems to be a zfs on top
> > a zpool on top a zpool?
>
> I would think so. Also, a good test would be to write to the final fs a few
> blocks more data than the backing sparse volume actually has available.
> Gut feeling is that will cause a panic on
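For reference, the layering being discussed looks roughly like this (names
invented); with -s the zvol is sparse, so the inner pool thinks it has more
space than the outer pool can really provide:

    zpool create tank c1t1d0                     # outer pool on real storage
    zfs create -s -V 100g tank/vol               # sparse (thin) 100 GB zvol
    zpool create vtank /dev/zvol/dsk/tank/vol    # inner pool built on the zvol
    zfs create vtank/fs                          # the fs on a zpool on a zpool
    # Writing more into vtank than tank can actually back is exactly the
    # overcommit case being worried about above.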
On Fri, Feb 02, 2007 at 12:25:04AM +0100, Pawel Jakub Dawidek wrote:
> On Thu, Feb 01, 2007 at 11:00:07AM +, Darren J Moffat wrote:
> > Neil Perrin wrote:
> > >No it's not the final version or even the latest!
> > >The current on disk format version is 3. However, it hasn't
> > >diverged much a
> thanks Darren! I got led down the wrong path by following newfs.
>
> Now my other question is. How would you add raw storage to the vtank (virtual
> filesystem) as the usage approached the current underlying raw storage?
You just increase the storage in the underlying pool. In my case, I'd
ju
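i.e., something along these lines (the device name is a placeholder); the
zvol-backed pool on top simply ends up with more real space behind it:

    zpool add tank c2t0d0    # grow the underlying pool with another device
    zpool list tank          # confirm the extra capacity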
On Fri, Feb 02, 2007 at 08:46:34AM +, Darren J Moffat wrote:
> My current plan is that, once set, the encryption property that describes
> which algorithm (mechanism actually: algorithm, key length and mode, e.g.
> aes-128-ccm) cannot be changed; it would be inherited by any clones.
> Creating
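In other words, usage would presumably look something like the following;
zfs-crypto was not integrated at the time of this thread, so this is only a
hypothetical sketch of the plan described above, not shipping syntax:

    zfs create -o encryption=aes-128-ccm tank/secure   # set once, at creation time
    zfs snapshot tank/secure@now
    zfs clone tank/secure@now tank/clone                # clone inherits the same mechanism
    zfs set encryption=off tank/secure                  # would be rejected: immutable once set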
[EMAIL PROTECTED] wrote on 02/02/2007 10:34:22 AM:
> thanks Darren! I got led down the wrong path by following newfs.
>
> Now my other question is. How would you add raw storage to the vtank
> (virtual filesystem) as the usage approached the current underlying
> raw storage?
>
> Would you go
thanks Darren! I got led down the wrong path by following newfs.
Now my other question is: how would you add raw storage to the vtank (virtual
filesystem) as the usage approaches the current underlying raw storage?
Going forward, would you just do it in the normal fashion? (I will try this
when
On Fri, Feb 02, 2007 at 08:46:34AM +, Darren J Moffat wrote:
> Pawel Jakub Dawidek wrote:
> >On Thu, Feb 01, 2007 at 11:00:07AM +, Darren J Moffat wrote:
> >>Neil Perrin wrote:
> >>>No it's not the final version or even the latest!
> >>>The current on disk format version is 3. However, it h
Hi Richard,
Richard Elling wrote:
> FYI,
> here is an interesting blog on using ZFS with a dozen USB drives from
> Constantin.
> http://blogs.sun.com/solarium/entry/solaris_zfs_auf_12_usb
thank you for spotting it :).
We're working on translating the video (hope we get the lip-syncing right).
Hi All,
In my test setup, I have one zpool of size 1000 MB.
On this zpool, my application writes 100 files, each of size 10 MB.
The first 96 files were written successfully without any problem.
But the 97th file was not written successfully; only 5 MB of it was written (the
retu
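Back-of-the-envelope, assuming the pool has simply run out of space: 100 files
x 10 MB = 1000 MB, which is the entire raw size of the pool, and ZFS needs
some of that for labels, metadata and copy-on-write, so roughly 96 x 10 MB =
960 MB of user data is about all that will fit. Worth checking (the pool name
is assumed):

    zpool list tank    # USED / AVAIL for the pool
    zfs list tank      # space as the file system sees it
    df -h /tank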
Marion, this is a common misinterpretation:
"Anyway, I've also read that if ZFS notices it's using "slices" instead of
whole disks, it will not enable/use the write cache."
The reality is that
ZFS turns on the write cache when it owns the
whole disk.
_Independ
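If you want to see for yourself what ZFS did with the cache after being handed
the whole disk, the drive's write cache is visible from format's expert mode
(menu names from memory, so treat as approximate):

    format -e
    # select the disk, then:
    #   cache -> write_cache -> display    (show the current setting)
    #   cache -> write_cache -> enable     (turn it on by hand, e.g. for slice setups)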
Pawel Jakub Dawidek wrote:
On Thu, Feb 01, 2007 at 11:00:07AM +, Darren J Moffat wrote:
Neil Perrin wrote:
No it's not the final version or even the latest!
The current on disk format version is 3. However, it hasn't
diverged much and the znode/acl stuff hasn't changed.
and it will get upd
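For reference, the on-disk/pool version machinery is visible from the command
line (output naturally varies by release):

    zpool upgrade -v    # list the pool versions this build supports and what each added
    zpool upgrade       # show pools that are below the current version
    zpool upgrade -a    # upgrade all pools to the newest supported version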