On 4/25/2011 6:23 PM, Ian Collins wrote:
On 04/26/11 01:13 PM, Fred Liu wrote:
Hmm, it seems dedup is pool-based, not filesystem-based.
That's correct, although it can be turned off and on at the filesystem
level (assuming it is enabled for the pool).
Which is effectively the same as choosin
Thanks Brandon,
On 04/25/2011 05:47 PM, Brandon High wrote:
On Mon, Apr 25, 2011 at 4:56 PM, Lamp Zy wrote:
I'd expect the spare drives to auto-replace the failed one but this is not
happening.
What am I missing?
Is the autoreplace property set to 'on'?
# zpool get autoreplace fwgpool0
# zp
On Mon, Apr 25, 2011 at 5:26 PM, Brandon High wrote:
> Setting zfs_resilver_delay seems to have helped some, based on the
> iostat output. Are there other tunables?
I found zfs_resilver_min_time_ms while looking. I've tried bumping it
up considerably, without much change.
'zpool status' is still
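For reference, both knobs are live kernel tunables, so they can be poked with mdb -kw or set in /etc/system; the values below are only illustrative, not recommendations:
# echo zfs_resilver_delay/W0t0 | mdb -kw
# echo zfs_resilver_min_time_ms/W0t5000 | mdb -kw
or, persistently, in /etc/system:
set zfs:zfs_resilver_delay = 0
set zfs:zfs_resilver_min_time_ms = 5000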
On 04/26/11 01:13 PM, Fred Liu wrote:
> Hmm, it seems dedup is pool-based, not filesystem-based.
That's correct, although it can be turned off and on at the filesystem
level (assuming it is enabled for the pool).
> If it could have finer-grained granularity (e.g. per filesystem), that would be great!
>
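For example (the pool and filesystem names here are just placeholders), dedup set on the top-level dataset is inherited, but any child filesystem can override it:
# zfs set dedup=on tank
# zfs set dedup=off tank/no-dedup
# zfs get -r dedup tank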
Hmm, it seems dedup is pool-based, not filesystem-based.
If it could have finer-grained granularity (e.g. per filesystem), that would be great!
It is a pity! NetApp is sweet in this aspect.
Thanks.
Fred
> -----Original Message-----
> From: Brandon High [mailto:bh...@freaks.com]
> Sent: Tuesday, April 26, 2011
On Mon, Apr 25, 2011 at 4:53 PM, Fred Liu wrote:
> So how can I set the quota size on a file system with dedup enabled?
I believe the quota applies to the non-dedup'd data size. If a user
stores 10G of data, it will use 10G of quota, regardless of whether it
dedups at 100:1 or 1:1.
-B
--
Brand
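In other words, with hypothetical names and numbers, the quota is charged against the logical (pre-dedup) size, and the dedup savings only show up at the pool level:
# zfs set dedup=on tank/home
# zfs set quota=10G tank/home/fred
# zfs list -o name,used,quota tank/home/fred
# zpool list tank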
On Mon, Apr 25, 2011 at 4:56 PM, Lamp Zy wrote:
> I'd expect the spare drives to auto-replace the failed one but this is not
> happening.
>
> What am I missing?
Is the autoreplace property set to 'on'?
# zpool get autoreplace fwgpool0
# zpool set autoreplace=on fwgpool0
> I really would like to
On Mon, Apr 25, 2011 at 4:45 PM, Richard Elling wrote:
> If there is other work going on, then you might be hitting the resilver
> throttle. By default, it will delay 2 clock ticks, if needed. It can be turned
There is some other access to the pool from nfs and cifs clients, but
not much, and mos
Hi,
One of my drives failed in Raidz2 with two hot spares:
# zpool status
pool: fwgpool0
state: DEGRADED
status: One or more devices could not be opened. Sufficient replicas exist for
        the pool to continue functioning in a degraded state.
action: Attach the missing device and online
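If autoreplace doesn't kick in, a hot spare can be brought in by hand; the device names below are placeholders for the failed disk and the spare, and detaching the failed disk afterwards makes the spare replacement permanent:
# zpool replace fwgpool0 c1t4d0 c1t10d0
# zpool status fwgpool0
# zpool detach fwgpool0 c1t4d0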
Cindy,
The following is quoted from the ZFS Dedup FAQ:
"Deduplicated space accounting is reported at the pool level. You must use the
zpool list command rather than the zfs list command to identify disk space
consumption when dedup is enabled. If you use the zfs list command to review
deduplicated spa
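That is, something along these lines (the pool name is only an example):
# zpool list fwgpool0
# zpool get dedupratio fwgpool0
# zfs list -r fwgpool0
The first two reflect deduplicated consumption; the zfs list output does not.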
On Apr 25, 2011, at 2:52 PM, Brandon High wrote:
> I'm in the process of replacing drives in a pool, and the resilver
> times seem to have increased with each device. The way that I'm doing
> this is by pulling a drive, physically replacing it, then doing
> 'cfgadm -c configure ; zpool replace
Hi ZFSers,
I've been working on merging the Joyent arcstat enhancements with some of my own
and am now to the point where it is time to broaden the requirements gathering.
The result
is to be merged into the illumos tree.
arcstat is a perl script to show the value of ARC kstats as they change ove
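The underlying numbers are just the arcstats kstat, so while the script is being reworked a quick look is something like the following (the plain interval-only invocation is an assumption about the script's CLI):
# kstat -n arcstats
# arcstat.pl 5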
On Mon, Apr 25, 2011 at 8:20 AM, Edward Ned Harvey wrote:
> and 128k assuming default recordsize. (BTW, recordsize seems to be a zfs
> property, not a zpool property. So how can you know or configure the
> blocksize for something like a zvol iscsi target?)
zvols use the 'volblocksize' property,
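For example, with placeholder names and sizes, the block size of a zvol is fixed when it is created:
# zfs create -V 100G -o volblocksize=8K tank/iscsivol
# zfs get volblocksize tank/iscsivol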
I'm in the process of replacing drives in a pool, and the resilver
times seem to have increased with each device. The way that I'm doing
this is by pulling a drive, physically replacing it, then doing
'cfgadm -c configure ; zpool replace tank '. I don't have any
hot-swap bays available, so
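Spelled out with placeholder attachment-point and device names, the per-disk cycle looks roughly like:
# zpool offline tank c0t3d0
# cfgadm -c unconfigure sata0/3
(swap the drive)
# cfgadm -c configure sata0/3
# zpool replace tank c0t3d0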
On Mon, Apr 25, 2011 at 10:55 AM, Erik Trimble wrote:
> Min block size is 512 bytes.
Technically, isn't the minimum block size 2^(ashift value)? Thus, on
4 KB disks where the vdevs have an ashift=12, the minimum block size
will be 4 KB.
--
Freddie Cash
fjwc...@gmail.com
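To see what a given pool actually uses, the ashift is visible in the cached config, and 2^ashift is the minimum allocation size (pool name is an example):
# zdb -C tank | grep ashift
An ashift of 9 means 512-byte blocks; 12 means 4096.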
On 04/25/11 11:55, Erik Trimble wrote:
On 4/25/2011 8:20 AM, Edward Ned Harvey wrote:
And one more comment: Based on what's below, it seems that the DDT
gets stored on the cache device and also in RAM. Is that correct?
What if you didn't have a cache device? Shouldn't it *always* be in
r
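For what it's worth, the current size of the DDT itself (and hence what would need to fit in ARC or L2ARC) can be read out of zdb; the pool name is a placeholder:
# zdb -D tank
# zdb -DD tank
The first prints entry counts and per-entry in-core/on-disk sizes; the second adds a reference-count histogram.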
On 4/25/2011 8:20 AM, Edward Ned Harvey wrote:
There are a lot of conflicting references on the Internet, so I'd
really like to solicit actual experts (ZFS developers or people who
have physical evidence) to weigh in on this...
After searching around, the reference I found to be the most see
So, I installed FreeBSD 8.2 with the ZFS v28 patch and I am getting this error
message together with a complete ZFS system freeze:
Solaris: Warning: can't open object for zroot/var/crash
log_sysevent: type 19 is not implemented
log_sysevent: type 19 is not implemented
log_sysevent: type 19 is not implemented
log_sysevent: type 19 is not
> After modifications that I hope are corrections, I think the post
> should look like this:
>
> The rule-of-thumb is 270 bytes/DDT entry, and 200 bytes of ARC for
> every L2ARC entry.
>
> DDT doesn't count for this ARC space usage
>
> E.g.: I have 1TB of 4k blocks that are to be deduped, and it
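Carrying that example through with the rule-of-thumb figures above, and assuming the worst case where every 4k block is unique:
1 TiB / 4 KiB = 268,435,456 blocks, i.e. 2^28 DDT entries
268,435,456 entries * 270 bytes ~= 67.5 GiB of DDT
268,435,456 entries * 200 bytes ~= 50 GiB of ARC headers if the whole DDT is held on an L2ARC device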
There are a lot of conflicting references on the Internet, so I'd really
like to solicit actual experts (ZFS developers or people who have physical
evidence) to weigh in on this...
After searching around, the reference I found to be the most seemingly
useful was Erik's post here:
http://openso
Hi Konstantin,
> zpool status:
> Flash# zpool status
> pool: zroot
> state: DEGRADED
> status: One or more devices are faulted in response to IO failures.
> action: Make sure the affected devices are connected, then run 'zpool
> clear'.
> see: http://www.sun.com/msg/ZFS-8000-HC
> scrub: res
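If the devices really are reachable again, the recovery step the status output asks for is simply (pool name taken from the output above):
# zpool clear zroot
# zpool status -v zroot
# zpool scrub zroot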
Good morning, I have a problem with ZFS:
ZFS filesystem version 4
ZFS storage pool version 15
Yesterday my computer running FreeBSD 8.2-RELENG shut down with an 'ad4
detached' error while I was copying a big file...
and after the reboot two WD Green 1TB drives said goodbye. One of them died and the other
has ZFS errors:
Apr 2