>
> TW> Am I using send/recv incorrectly or is there something else
> TW> going on here that I am missing?
>
>
> It's a known bug.
>
> umount and roll back the file system on host 2. You should see 0 used space
> on the snapshot, and then it should work.
Bug ID? Is it related to atime changes?
>
>
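For anyone hitting the same thing, the workaround described above looks roughly
like this (pool, filesystem and snapshot names are hypothetical):
host2# zfs umount tank/fs
host2# zfs rollback tank/fs@last
host1# zfs send -i tank/fs@last tank/fs@new | ssh host2 zfs recv tank/fs
After the rollback, zfs list on host 2 should show 0 used for tank/fs@last and
the incremental receive should apply cleanly.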
[EMAIL PROTECTED] wrote on 02/08/2007 10:23:19 AM:
> > I believe there is a write limit (commonly 10
> > writes) on CF and
> > similar storage devices, but I don't know for sure.
> > Apart from that
> > I think it's a good idea.
> >
> >
> > James C. McPherson
>
> As a consequence, the /t
[EMAIL PROTECTED] wrote on 02/02/2007 11:16:32 AM:
> Hi all,
>
> Longtime reader, first time poster. Sorry for the lengthy intro
> and not really sure the title matches what I'm trying to get at... I
> am trying to find a solution where making use of a zfs filesystem
> can shorten our back
[EMAIL PROTECTED] wrote on 02/02/2007 10:34:22 AM:
> Thanks, Darren! I got led down the wrong path by following newfs.
>
> Now my other question is: how would you add raw storage to the vtank
> (virtual filesystem) as the usage approached the current underlying
> raw storage?
>
> Would you go
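(For what it's worth, growing the underlying storage later is just a matter of
adding another vdev to the pool; the pool and device names below are made up:)
# zpool add tank mirror c2t0d0 c2t1d0
# zpool list tank
The new space shows up in every filesystem in the pool right away, with no
newfs/growfs step involved.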
[EMAIL PROTECTED] wrote on 02/01/2007 01:17:15 PM:
> The ZFS On-Disk specification and other ZFS documentation describe
> the labeling scheme used for the vdevs that comprise a ZFS pool. A
> label entry contains, among other things, an array of uberblocks,
> one of which will point to the a
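(If you want to poke at the labels on a live system, zdb can dump them; the
device path here is only an example:)
# zdb -l /dev/dsk/c0t0d0s0
That prints the four copies of the vdev label, two at the front of the device
and two at the back.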
> > > One of the benefits of ZFS is that not only is head synchronization not
> > > needed, but also block offsets do not have to be the same. For example,
> > > in a traditional mirror, block 1 on device 1 is paired with block 1 on
> > > device 2. In ZFS, this 1:1 mapping is not required. I
> One of the benefits of ZFS is that not only is head synchronization not
> needed, but also block offsets do not have to be the same. For example,
> in a traditional mirror, block 1 on device 1 is paired with block 1 on
> device 2. In ZFS, this 1:1 mapping is not required. I believe this w
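(One practical consequence, if I understand it right, is that you can attach a
second side to form a mirror without matching geometry, only capacity; device
names hypothetical:)
# zpool attach tank c1t0d0 c2t0d0
The resilver then copies only the allocated blocks rather than the device
block for block.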
[EMAIL PROTECTED] wrote on 01/29/2007 03:45:58 PM:
> I attempted to increase my zraid from 2 disks to 3, but it looks
> like I added the drive outside of the raid:
>
> # zpool list
>
> NAME    SIZE   USED   AVAIL   CAP   HEALTH   ALTROOT
> amber   1.36T  8
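(zpool status, rather than zpool list, shows the vdev tree, which makes it
obvious whether the third disk ended up inside the raidz vdev or was added as
a separate top-level vdev:)
# zpool status amber
If the new disk shows up at the same level as the raidz group instead of under
it, it was added as a plain unprotected stripe; as far as I know an existing
raidz vdev cannot currently be widened.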
[EMAIL PROTECTED] wrote on 01/27/2007 06:48:17 AM:
> is it planned to add some other compression algorithm to zfs ?
>
> lzjb is quite good and performs especially well, but I'd like
> to have better compression (bzip2?), no matter how much
> performance drops with this.
>
> regards
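(For reference, compression is a per-dataset property, and the achieved ratio
can be checked afterwards; dataset name made up:)
# zfs set compression=on tank/data
# zfs get compressratio tank/data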
[EMAIL PROTECTED] wrote on 01/26/2007 01:43:35 PM:
> On Fri, Jan 26, 2007 at 11:05:17AM -0800, Ed Gould wrote:
> > On Jan 26, 2007, at 9:42, Gary Mills wrote:
> > >How does this work in an environment with storage that's centrally-
> > >managed and shared between many servers?
> >
> > It wil
...between these two unrelated
filesystems by this user)...
I have faith that user quotas are going to come sometime; these "how"
questions are interesting to me...
-Wade Stuart
to replace vxvm/vxfs or other solutions. Sure, you
will find people who view this new pooled filesystem with old eyes,
but there are admins on this list who actually understand what they are
missing and the other options for working around these issues. We don't
look at this like a feat
As an example, we regularly (6+ times per day) snap an older Sun
fileserver that has about 7 TB of disk on it (80% used) to a Thumper with
rsync and --inplace. Our daily (6x snaps per day) growth from delta on the
Thumper is usually ~2.5 GB.
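(Roughly, each pass on the thumper looks like the following; paths and dataset
names are illustrative:)
# rsync -a --inplace --delete oldserver:/export/data/ /tank/data/
# zfs snapshot tank/data@`date +%Y%m%d-%H%M`
--inplace matters here: it keeps rsync from rewriting whole files, so only the
changed blocks differ from the previous snapshot.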
Wade Stuart
Fallon Worldwide
P: 612.758.266
So,
I am starting to code this CAS block squashing test, and I am
wondering whether this is something that would more likely extend OpenSolaris
(it does require additional info in zfs metadata that is currently reserved
for future use, i.e. zfs revision++) or whether I should code against the FUSE
[EMAIL PROTECTED] wrote on 01/19/2007 10:24:47 AM:
> Hi,
>
> With the Flemish government we have more than 100 sites. Most of the
> time the backup is done on a DLT7000 or LTO tape device. A few sites
> are bigger and have a Legato backup solution.
>
> With UFS, the restore is easy on those s
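(The rough zfs equivalent of ufsdump/ufsrestore to and from tape would be
something like the following, with names illustrative, and with the caveat
that zfs receive restores the whole stream rather than individual files:)
# zfs snapshot pool/fs@backup1
# zfs send pool/fs@backup1 > /dev/rmt/0
# zfs receive pool/restore < /dev/rmt/0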
[EMAIL PROTECTED] wrote on 01/18/2007 01:29:23 PM:
> On Thu, 2007-01-18 at 10:51 -0800, Matthew Ahrens wrote:
> > Jeremy Teo wrote:
> > > On the issue of the ability to remove a device from a zpool, how
> > > useful/pressing is this feature? Or is this more along the line of
> > > "nice to h
[EMAIL PROTECTED] wrote on 01/10/2007 05:16:33 PM:
> Hello Jason,
>
> Wednesday, January 10, 2007, 10:54:29 PM, you wrote:
>
> JJWW> Hi Kyle,
>
> JJWW> I think there was a lot of talk about this behavior on the RAIDZ2 vs.
> JJWW> RAID-10 thread. My understanding from that discussion was that
"Dick Davies" <[EMAIL PROTECTED]> wrote on 01/10/2007 05:26:45 AM:
> On 08/01/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
>
> > I think that in addition to lzjb compression, squishing blocks that contain
> > the same data would buy a lot of space for administrators working in many
> > c
[EMAIL PROTECTED] wrote on 01/09/2007 10:59:08 AM:
> I have a zfs filesystem exported via samba. I can connect to the
> filesystem over CIFS from a Windows box, but I get an access denied
> when I try to create a file. I can create the file just fine from
> the Solaris prompt as the same u
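(One thing worth checking from the Solaris side is the full ACL on the share
directory, since CIFS clients are subject to it even when the plain mode bits
look fine; the path is hypothetical:)
# ls -dv /tank/share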
[EMAIL PROTECTED] wrote on 01/08/2007 04:06:46 PM:
>
> James Carlson <[EMAIL PROTECTED]> wrote on 01/08/2007 03:26:14 PM:
>
> > [EMAIL PROTECTED] writes:
> > > > I was noodling around with creating a backup script for my home
> > > > system, and I ran into a problem that I'm having
James Carlson <[EMAIL PROTECTED]> wrote on 01/08/2007 03:26:14 PM:
> [EMAIL PROTECTED] writes:
> > > I was noodling around with creating a backup script for my home
> > > system, and I ran into a problem that I'm having a little trouble
> > > diagnosing. Has anyone seen anything like this o
Bill Sommerfeld <[EMAIL PROTECTED]> wrote on 01/08/2007 03:41:53 PM:
> > Note that you'd actually have to verify that the blocks were the same;
> > you cannot count on the hash function. If you didn't do this, anyone
> > discovering a collision could destroy the colliding blocks/files.
>
> G
> >
> > Does this seem feasible? Are there any blocking points that I am missing
> > or unaware of? I am just posting this for discussion; it seems very
> > interesting to me.
> >
>
> Note that you'd actually have to verify that the blocks were the same;
> you cannot count on the hash funct
> I was noodling around with creating a backup script for my home
> system, and I ran into a problem that I'm having a little trouble
> diagnosing. Has anyone seen anything like this or have any debug
> advice?
>
> I did a "zfs create -r" to set a snapshot on all of the members of a
> given p
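(For the record, the recursive form that snapshots a dataset and all of its
descendants in one shot is something like this; pool and snapshot names are
hypothetical:)
# zfs snapshot -r tank@backup-20070108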
I have been looking at the zfs source trying to get up to speed on the
internals. One thing that interests me about the fs is what appears to be
low-hanging fruit for block squishing CAS (Content Addressable Storage).
I think that in addition to lzjb compression, squishing blocks that contain
th
Matthew,
I really do appreciate this discussion; thank you for taking the time to
go over this with me.
Matthew Ahrens <[EMAIL PROTECTED]> wrote on 01/04/2007 01:49:00 PM:
> [EMAIL PROTECTED] wrote:
> > [9:40am] [/data/test]:test% zfs snapshot data/[EMAIL PROTECTED]
> > [9:41am] [/data/
From what I have read, it looks like there is a known issue with scrubbing
restarting when any of the other usages of the same code path run
(re-silver, snap ...). It looks like there is a plan to put in a marker so
that scrubbing knows where to start again after being preempted. This is
goo
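(In the meantime, whether a scrub has restarted is visible in the status
output; pool name hypothetical:)
# zpool status -v tank
The scrub: line shows progress, and a percentage that keeps falling back to
zero is the restart behavior described above.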
[EMAIL PROTECTED] wrote on 01/03/2007 04:21:00 PM:
> [EMAIL PROTECTED] wrote:
> > which is not the behavior I am seeing..
>
> Show me the output, and I can try to explain what you are seeing.
[9:36am] [~]:test% zfs create data/test
[9:36am] [~]:test% zfs set compression=on data/test
[9:37am]
Sorry, a few corrections and inserts...
>
> which is not the behavior I am seeing... If I have 100 snaps of a
> filesystem that have relatively low delta churn and then delete half of the
> data out there, I would expect to see that space go up in the used column
> for one of the snaps (in my test
I am bringing this up again with the hope that more eyes may be on the list
now than before the holidays...
the zfs man page lists the used column as:
used
The amount of space consumed by this dataset and all its
descendants. This is the value that is checked against
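(In practice the per-snapshot figure can be pulled with something like:)
# zfs list -t snapshot -o name,used,referenced
used on a snapshot counts only the space unique to that snapshot, which is why
deleting data from the live fs may not move any single snapshot's used column
while the blocks are still shared between snapshots.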
[EMAIL PROTECTED] wrote on 12/22/2006 04:50:25 AM:
> Hello Wade,
>
> Thursday, December 21, 2006, 10:15:56 PM, you wrote:
>
>
> WSfc> Hola folks,
>
> WSfc> I am new to the list; please redirect me if I am posting to the wrong
> WSfc> location. I am starting to use ZFS in produc
(~20 MB) on a/[EMAIL PROTECTED] Is this correct
behavior?
How do you track the total delta blocks the snap is using vs. other snaps
and the live fs?
Thanks!
Wade Stuart