On Mon, Apr 21, 2008 at 10:41:35AM +1200, Ian Collins wrote:
> Sam wrote:
> > I have a 10x500 disc file server with ZFS+, do I need to perform any sort
> > of periodic maintenance to the filesystem to keep it in tip top shape?
> >
> No, but if there are problems, a periodic scrub will tip you off sooner
> rather than later.
thank you. this is exactly what I was looking for.
This is for remote replication so it looks like I am out of luck.
bummer.
Asa
On Apr 14, 2008, at 4:09 PM, Jeff Bonwick wrote:
> Not at present, but it's a good RFE. Unfortunately it won't be
> quite as simple as just adding an ioctl to report
Sam wrote:
> I have a 10x500 disc file server with ZFS+, do I need to perform any sort of
> periodic maintenance to the filesystem to keep it in tip top shape?
>
>
No, but if there are problems, a periodic scrub will tip you off sooner
rather than later.
Ian
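As a sketch of what that periodic checking can look like in practice (the pool name `tank` is a placeholder, and the cron path is an assumption; adjust both for your system):

```shell
# Start a scrub: it reads every allocated block in the pool, verifies
# checksums, and repairs from redundancy where possible.
# "tank" is a placeholder pool name.
zpool scrub tank

# Inspect scrub progress and any checksum errors found so far:
zpool status tank

# One way to run it periodically, e.g. weekly via cron (a sketch;
# the zpool path may differ on your system):
# 0 3 * * 0 /usr/sbin/zpool scrub tank
```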
__
Peter Tribble wrote:
> On Sun, Apr 20, 2008 at 5:48 AM, Vincent Fox <[EMAIL PROTECTED]> wrote:
>> I would hope at least it has that giant FSYNC patch for ZFS already present?
>>
>> We ran into this issue and it nearly killed Solaris here in our Data Center
>> as a product it was such a bad experience.
I have a 10x500 disc file server with ZFS+, do I need to perform any sort of
periodic maintenance to the filesystem to keep it in tip top shape?
Sam
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
ht
On Sun, 20 Apr 2008, Peter Tribble wrote:
>>
>> What is the cause of the "struggling"? Does the backup host run short of
>> RAM or CPU? If backups are incremental, is a large portion of time spent
>> determining the changes to be backed up? What is the relative cost of many
>> small files vs large files?
On Sun, Apr 20, 2008 at 4:39 PM, Bob Friesenhahn
<[EMAIL PROTECTED]> wrote:
> On Sun, 20 Apr 2008, Peter Tribble wrote:
> >
> > My experience so far is that anything past a terabyte and 10 million files,
> > and any backup software struggles.
> >
>
> What is the cause of the "struggling"? Does the backup host run short of
> RAM or CPU? If backups are incremental, is a large portion of time spent
> determining the changes to be backed up? What is the relative cost of many
> small files vs large files?
Hi,
First of all, my apologies that some of my posts appeared two or even three
times here; the forum seems to be acting up. Although I received a Java
exception for those double postings and they never appeared yesterday,
apparently they still made it through eventually.
Back on topic: I fruitl
On Sun, 20 Apr 2008, Peter Tribble wrote:
>> Does anyone here have experience of this with multi-TB filesystems and
>> any of these solutions that they'd be willing to share with me please?
>
> My experience so far is that anything past a terabyte and 10 million files,
> and any backup software struggles.
On Sun, Apr 20, 2008 at 5:48 AM, Vincent Fox <[EMAIL PROTECTED]> wrote:
> I would hope at least it has that giant FSYNC patch for ZFS already present?
>
> We ran into this issue and it nearly killed Solaris here in our Data Center
> as a product it was such a bad experience.
>
> Fix was in 12772
> ZFS can use block sizes up to 128k. If the data is compressed, then
> this size will be larger when decompressed.
ZFS allows you to use variable blocksizes (sized a power of 2 from 512
to 128k), and as far as I know, a compressed block is put into the
smallest fitting one.
-mg
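A small sketch of that "smallest fitting power-of-2 block" rule in plain shell arithmetic. This models the behavior described above, not actual ZFS code; the 512-byte and 128k bounds are taken from the message:

```shell
# Return the smallest power-of-2 block size (in bytes) that holds a
# compressed payload of $1 bytes, clamped to ZFS's 512..131072 range.
smallest_block() {
  local need=$1 size=512
  while [ "$size" -lt "$need" ] && [ "$size" -lt 131072 ]; do
    size=$((size * 2))   # double until the payload fits or we hit 128k
  done
  echo "$size"
}

smallest_block 3000     # -> 4096
smallest_block 200000   # -> 131072 (capped at the 128k maximum)
```

So a block that compresses to 3000 bytes would land in a 4k block on disk, which matches the "smallest fitting one" description above.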
___
On Wed, Apr 16, 2008 at 2:12 PM, Anna Langley <[EMAIL PROTECTED]> wrote:
>
> I've just joined this list, and am trying to understand the state of
> play with using free backup solutions for ZFS, specifically on a Sun
> x4500.
...
> Does anyone here have experience of this with multi-TB filesystems and
> any of these solutions that they'd be willing to share with me please?
Hi,
ZFS can use block sizes up to 128k. If the data is compressed, then
this size will be larger when decompressed.
So, can the decompressed data be larger than 128k? If so, does this
also hold for metadata? In other words,
can I have a 128k block on the disk with, for instance, indirect block