Re: r273165. ZFS ARC: possible memory leak to Inact

2014-11-05 Thread Dmitriy Makarov
Steven Hartland wrote
> On 05/11/2014 06:15, Marcus Reid wrote:
>> On Tue, Nov 04, 2014 at 06:13:44PM +, Steven Hartland wrote:
>>> On 04/11/2014 17:22, Allan Jude wrote:
>>>> snip...
>>>> Justin Gibbs and I were helping George from Voxer look at the same issue
>>>> they are having. They had ~169GB in inact, and only ~60GB being used for ARC.
>>>>
>>>> Are there any further debugging steps we can recommend to him to help
>>>> investigate this?
>>> The various scripts attached to the ZFS ARC behavior problem and fix PR
>>> will help provide detail on this:
>>> https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=187594
>>>
>>> I've seen it here where there have been bursts of ZFS I/O, specifically
>>> write bursts.
>>>
>>> What happens is that ZFS will consume large amounts of space in various
>>> UMA zones to accommodate these bursts.
>> If you push the vmstat -z that he provided through the arc summary
>> script, you'll see that this is not what is happening.  His uma stats
>> match up with his arc, and do not account for his inactive memory.
>>
>> uma script summary:
>>
>>  Totals
>>  oused: 5.860GB, ofree: 1.547GB, ototal: 7.407GB
>>  zused: 56.166GB, zfree: 3.918GB, ztotal: 60.084GB
>>  used: 62.026GB, free: 5.465GB, total: 67.491GB
>>
>> His provided top stats:
>>
>>  Mem: 19G Active, 20G Inact, 81G Wired, 59M Cache, 3308M Buf, 4918M Free
>>  ARC: 66G Total, 6926M MFU, 54G MRU, 8069K Anon, 899M Header, 5129M Other
>>
>>
>> The big uma buckets (zio_buf_16384 and zio_data_buf_131072, 18.002GB and
>> 28.802GB respectively) are nearly 0% free.
>>
> Still potentially accounts for 5.4GB of your 20GB inact.
> 
> The rest could be malloc backed allocations?

No.

There are a few reasons for that.
The first one is that Inact constantly grows, and the 20GiB you see now was 50GiB
before we ran the script.
(We have to run it periodically, or else our production server gets slower and
slower.)

The second argument is that our codebase is the same; the only thing that has
changed is the OS version.
In the previous version, Inact was dramatically smaller: ~hundreds of megabytes.
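
For reference, a rough equivalent of the uma script totals quoted above can be
pulled straight from vmstat -z with awk. The field layout is assumed from the
vmstat -z output later in this thread, and only the zio_* zones are summed, so
this is just a sketch of what the scripts attached to the PR do:

# vmstat -z | awk -F'[:,]' '/^zio/ { used += $2 * $4; free += $2 * $5 }
    END { printf "zused: %.3fGB, zfree: %.3fGB\n", used / 2^30, free / 2^30 }'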





Re: r273165. ZFS ARC: possible memory leak to Inact

2014-11-04 Thread Dmitriy Makarov
ripcb:  400, 4192760,   0,  60,   6,   0,   0
unpcb:  240, 4192768,1166,1074,  28,   0,   0
rtentry:200,  0,   8,  92,   8,   0,   0
selfd:   56,  0,2339,3270,6167642044,   0,   0
SWAPMETA:   288, 16336788,   0,   0,   0,   0,   0
FFS inode:  168,  0,1032,1084, 1308978,   0,   0
FFS1 dinode:128,  0,   0,   0,   0,   0,   0
FFS2 dinode:256,  0,1032,1098, 1308978,   0,   0
NCLNODE:528,  0,   0,   0,   0,   0,   0

These are statistics after the script helped to reclaim memory.

Here are the top statistics:

Mem: 19G Active, 20G Inact, 81G Wired, 59M Cache, 3308M Buf, 4918M Free
ARC: 66G Total, 6926M MFU, 54G MRU, 8069K Anon, 899M Header, 5129M Other



Steven Hartland wrote
> This is likely spikes in uma zones used by ARC.
> 
> The VM doesn't ever clean uma zones unless it hits a low memory 
> condition, which explains why your little script helps.
> 
> Check the output of vmstat -z to confirm.
> 
> On 04/11/2014 11:47, Dmitriy Makarov wrote:
>> Hi Current,
>>
>> It seems like there is a constant flow (leak) of memory from ARC to Inact
>> in FreeBSD 11.0-CURRENT #0 r273165.
>>
>> Normally, our system (FreeBSD 11.0-CURRENT #5 r260625) keeps ARC size
>> very close to vfs.zfs.arc_max:
>>
>> Mem: 16G Active, 324M Inact, 105G Wired, 1612M Cache, 3308M Buf, 1094M Free
>> ARC: 88G Total, 2100M MFU, 78G MRU, 39M Anon, 2283M Header, 6162M Other
>>
>>
>> But after an upgrade to (FreeBSD 11.0-CURRENT #0 r273165) we observe
>> enormous numbers of Inact memory in the top:
>>
>> Mem: 21G Active, 45G Inact, 56G Wired, 357M Cache, 3308M Buf, 1654M Free
>> ARC: 42G Total, 6025M MFU, 30G MRU, 30M Anon, 819M Header, 5214M Other
>>
>> The funny thing is that when we manually allocate and release memory using
>> a simple Python script:
>>
>> #!/usr/local/bin/python2.7
>>
>> import sys
>> import time
>>
>> if len(sys.argv) != 2:
>>     print "usage: fillmem <number of megabytes>"
>>     sys.exit()
>>
>> count = int(sys.argv[1])
>>
>> megabyte = (0,) * (1024 * 1024 / 8)
>>
>> data = megabyte * count
>>
>> as:
>>
>> # ./simple_script 1
>>
>> all those allocated megabytes 'migrate' from Inact to Free, and afterwards
>> they are 'eaten' by ARC with no problem.
>> Until Inact slowly grows back to the number it was before we ran the
>> script.
>>
>> The current workaround is to periodically invoke this python script by cron.
>> This is an ugly workaround and we really don't like it on our production systems.
>>
>>
>> To answer possible questions about ARC efficiency:
>> Cache efficiency drops dramatically with every GiB pushed off the ARC.
>>
>> Before upgrade:
>>  Cache Hit Ratio:99.38%
>>
>> After upgrade:
>>  Cache Hit Ratio:81.95%
>>
>> We believe that the ARC misbehaves, and we ask for your assistance.
>>
>>
>> --
>>
>> Some values from configs.
>>
>> HW: 128GB RAM, LSI HBA controller with 36 disks (stripe of mirrors).
>>
>> top output:
>>
>> In /boot/loader.conf :
>> vm.kmem_size="110G"
>> vfs.zfs.arc_max="90G"
>> vfs.zfs.arc_min="42G"
>> vfs.zfs.txg.timeout="10"
>>
>> ---
>>
>> Thanks.
>>
>> Regards,
>> Dmitriy







r273165. ZFS ARC: possible memory leak to Inact

2014-11-04 Thread Dmitriy Makarov
Hi Current,

It seems like there is a constant flow (leak) of memory from ARC to Inact in 
FreeBSD 11.0-CURRENT #0 r273165. 

Normally, our system (FreeBSD 11.0-CURRENT #5 r260625) keeps ARC size very 
close to vfs.zfs.arc_max:

Mem: 16G Active, 324M Inact, 105G Wired, 1612M Cache, 3308M Buf, 1094M Free
ARC: 88G Total, 2100M MFU, 78G MRU, 39M Anon, 2283M Header, 6162M Other


But after an upgrade to (FreeBSD 11.0-CURRENT #0 r273165) we observe enormous 
numbers of Inact memory in the top:

Mem: 21G Active, 45G Inact, 56G Wired, 357M Cache, 3308M Buf, 1654M Free
ARC: 42G Total, 6025M MFU, 30G MRU, 30M Anon, 819M Header, 5214M Other

The funny thing is that when we manually allocate and release memory using a simple 
Python script:

#!/usr/local/bin/python2.7

import sys
import time

if len(sys.argv) != 2:
    print "usage: fillmem <number of megabytes>"
    sys.exit()

count = int(sys.argv[1])

megabyte = (0,) * (1024 * 1024 / 8)

data = megabyte * count

as:

# ./simple_script 1

all those allocated megabytes 'migrate' from Inact to Free, and afterwards they 
are 'eaten' by ARC with no problem.
Until Inact slowly grows back to the number it was before we ran the script.

The current workaround is to periodically invoke this python script by cron.
This is an ugly workaround and we really don't like it on our production systems.
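
A minimal /etc/crontab entry for that looks roughly like the line below; the
interval, script path and size are made up purely for illustration:

*/30    *       *       *       *       root    /usr/local/bin/python2.7 /root/fillmem.py 20480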


To answer possible questions about ARC efficiency:
Cache efficiency drops dramatically with every GiB pushed off the ARC.

Before upgrade:  
Cache Hit Ratio:99.38%

After upgrade:
Cache Hit Ratio:81.95%
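
(The hit ratio can be recomputed at any time from the arcstats counters,
assuming the usual kstat sysctls are present:)

# hits=$(sysctl -n kstat.zfs.misc.arcstats.hits)
# misses=$(sysctl -n kstat.zfs.misc.arcstats.misses)
# echo "scale=2; 100 * $hits / ($hits + $misses)" | bc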

We believe that the ARC misbehaves, and we ask for your assistance.


--

Some values from configs.

HW: 128GB RAM, LSI HBA controller with 36 disks (stripe of mirrors). 

top output:

In /boot/loader.conf :
vm.kmem_size="110G"
vfs.zfs.arc_max="90G"
vfs.zfs.arc_min="42G"
vfs.zfs.txg.timeout="10"
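
(The values actually in effect can be cross-checked at runtime, for example:)

# sysctl vm.kmem_size vfs.zfs.arc_max vfs.zfs.arc_min vfs.zfs.txg.timeout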

---

Thanks.

Regards,
Dmitriy


Re[3]: vmstat -z: zfs related failures on r255173

2013-10-15 Thread Dmitriy Makarov
vfs.zfs.txg.synctime_ms 1000
vfs.zfs.txg.timeout 5
vfs.zfs.vdev.cache.max  16384
vfs.zfs.vdev.cache.size 16777216
vfs.zfs.vdev.cache.bshift   14
vfs.zfs.vdev.trim_on_init   1
vfs.zfs.vdev.max_pending200
vfs.zfs.vdev.min_pending4
vfs.zfs.vdev.time_shift 29
vfs.zfs.vdev.ramp_rate  2
vfs.zfs.vdev.aggregation_limit  268435456
vfs.zfs.vdev.read_gap_limit 32768
vfs.zfs.vdev.write_gap_limit4096
vfs.zfs.vdev.bio_flush_disable  0
vfs.zfs.vdev.bio_delete_disable 0
vfs.zfs.vdev.trim_max_bytes 2147483648
vfs.zfs.vdev.trim_max_pending   64
vfs.zfs.max_auto_ashift 13
vfs.zfs.zil_replay_disable  0
vfs.zfs.cache_flush_disable 0
vfs.zfs.zio.use_uma 0
vfs.zfs.zio.exclude_metadata0
vfs.zfs.sync_pass_deferred_free 2
vfs.zfs.sync_pass_dont_compress 5
vfs.zfs.sync_pass_rewrite   2
vfs.zfs.snapshot_list_prefetch  0
vfs.zfs.super_owner 0
vfs.zfs.debug   0
vfs.zfs.version.ioctl   3
vfs.zfs.version.acl 1
vfs.zfs.version.spa 5000
vfs.zfs.version.zpl 5
vfs.zfs.trim.enabled1
vfs.zfs.trim.txg_delay  32
vfs.zfs.trim.timeout30
vfs.zfs.trim.max_interval   1





 
> On 2013-10-15 07:53, Dmitriy Makarov wrote:
> > Please, any ideas, thoughts, help!
> > What information would be useful for digging in? Anything...
> >
> > The system I'm talking about has a huge problem: performance degradation in a
> > short time period (a day or two). We don't know whether these vmstat failures
> > can somehow be related to the degradation.
> >
> >
> >  
> >> Hi all
> >>
> >> On CURRENT r255173 we have some interesting values from vmstat -z : REQ = 
> >> FAIL
> >>
> >> [server]# vmstat -z
> >> ITEM                   SIZE  LIMIT     USED     FREE      REQ FAIL SLEEP
> >> ... skipped
> >> NCLNODE:                528,      0,       0,       0,       0,   0,   0
> >> space_seg_cache:         64,      0,  289198,  299554,25932081,25932081,   0
> >> zio_cache:              944,      0,   37512,   50124,1638254119,1638254119,   0
> >> zio_link_cache:          48,      0,   50955,   38104,1306418638,1306418638,   0
> >> sa_cache:                80,      0,   63694,      56,  198643,198643,   0
> >> dnode_t:                864,      0,  128813,       3,  184863,184863,   0
> >> dmu_buf_impl_t:         224,      0, 1610024,  314631,157119686,157119686,   0
> >> arc_buf_hdr_t:          216,      0,82949975,   56107,156352659,156352659,   0
> >> arc_buf_t:               72,      0, 1586866,  314374,158076670,158076670,   0
> >> zil_lwb_cache:          192,      0,    6354,    7526, 2486242,2486242,   0
> >> zfs_znode_cache:        368,      0,   63694,      16,  198643,198643,   0
> >> . skipped ..
> >>
> >> Can anybody explain these strange failures in ZFS-related parameters in
> >> vmstat? Can we do something about this, and is it a really bad signal?
> >>
> >> Thanks! 
> >
> I am guessing those 'failures' are failures to allocate memory. I'd
> recommend you install sysutils/zfs-stats and send the list the output of
> 'zfs-stats -a'
> 
> -- 
> Allan Jude 
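
(For example, via the package or the sysutils/zfs-stats port:)

# pkg install zfs-stats
# zfs-stats -a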


Re: vmstat -z: zfs related failures on r255173

2013-10-15 Thread Dmitriy Makarov
Please, any ideas, thoughts, help!
What information would be useful for digging in? Anything...

The system I'm talking about has a huge problem: performance degradation in a short
time period (a day or two). We don't know whether these vmstat failures can somehow
be related to the degradation.


 
> Hi all
> 
> On CURRENT r255173 we have some interesting values from vmstat -z : REQ = FAIL
> 
> [server]# vmstat -z
> ITEM                   SIZE  LIMIT     USED     FREE      REQ FAIL SLEEP
> ... skipped
> NCLNODE:                528,      0,       0,       0,       0,   0,   0
> space_seg_cache:         64,      0,  289198,  299554,25932081,25932081,   0
> zio_cache:              944,      0,   37512,   50124,1638254119,1638254119,   0
> zio_link_cache:          48,      0,   50955,   38104,1306418638,1306418638,   0
> sa_cache:                80,      0,   63694,      56,  198643,198643,   0
> dnode_t:                864,      0,  128813,       3,  184863,184863,   0
> dmu_buf_impl_t:         224,      0, 1610024,  314631,157119686,157119686,   0
> arc_buf_hdr_t:          216,      0,82949975,   56107,156352659,156352659,   0
> arc_buf_t:               72,      0, 1586866,  314374,158076670,158076670,   0
> zil_lwb_cache:          192,      0,    6354,    7526, 2486242,2486242,   0
> zfs_znode_cache:        368,      0,   63694,      16,  198643,198643,   0
> . skipped ..
> 
> Can anybody explain these strange failures in ZFS-related parameters in
> vmstat? Can we do something about this, and is it a really bad signal?
> 
> Thanks! 



vmstat -z: zfs related failures on r255173

2013-10-11 Thread Dmitriy Makarov
Hi all

On CURRENT r255173 we have some interesting values from vmstat -z : REQ = FAIL

[server]# vmstat -z
ITEM                   SIZE  LIMIT     USED     FREE      REQ FAIL SLEEP
... skipped
NCLNODE:                528,      0,       0,       0,       0,   0,   0
space_seg_cache:         64,      0,  289198,  299554,25932081,25932081,   0
zio_cache:              944,      0,   37512,   50124,1638254119,1638254119,   0
zio_link_cache:          48,      0,   50955,   38104,1306418638,1306418638,   0
sa_cache:                80,      0,   63694,      56,  198643,198643,   0
dnode_t:                864,      0,  128813,       3,  184863,184863,   0
dmu_buf_impl_t:         224,      0, 1610024,  314631,157119686,157119686,   0
arc_buf_hdr_t:          216,      0,82949975,   56107,156352659,156352659,   0
arc_buf_t:               72,      0, 1586866,  314374,158076670,158076670,   0
zil_lwb_cache:          192,      0,    6354,    7526, 2486242,2486242,   0
zfs_znode_cache:        368,      0,   63694,      16,  198643,198643,   0
. skipped ..

Can anybody explain these strange failures in ZFS-related parameters in vmstat?
Can we do something about this, and is it a really bad signal?
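
(A quick way to pull out only the zones with a non-zero FAIL count; the field
layout is assumed from the output above:)

# vmstat -z | awk -F'[:,]' '$7 + 0 > 0 { printf "%-24s FAIL=%d\n", $1, $7 }'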

Thanks!

ZFS L2ARC - incorrect size and abnormal system load on r255173

2013-10-07 Thread Dmitriy Makarov
Hi all,

On our production system on r255173 we have a problem with abnormally high system 
load, caused (we are not sure) by the L2ARC placed on a few SSDs, 490 GB total size.

After a fresh boot everything seems to be fine, Load Average less than 5.00.
But after some time (nearly a day or two) the Load Average jumps to 10..20+ and the 
system gets really slow; IO operations on the zfs pool grow from milliseconds to even 
seconds.

And L2ARC sysctls are pretty disturbing:

[frv:~]$ sysctl -a | grep l2

vfs.zfs.l2arc_write_max: 2500
vfs.zfs.l2arc_write_boost: 5000
vfs.zfs.l2arc_headroom: 8
vfs.zfs.l2arc_feed_secs: 1
vfs.zfs.l2arc_feed_min_ms: 30
vfs.zfs.l2arc_noprefetch: 0
vfs.zfs.l2arc_feed_again: 1
vfs.zfs.l2arc_norw: 1
vfs.zfs.l2c_only_size: 1525206040064
vfs.cache.numfullpathfail2: 4
kstat.zfs.misc.arcstats.evict_l2_cached: 6592742547456
kstat.zfs.misc.arcstats.evict_l2_eligible: 734016778752
kstat.zfs.misc.arcstats.evict_l2_ineligible: 29462561417216
kstat.zfs.misc.arcstats.l2_hits: 576550808
kstat.zfs.misc.arcstats.l2_misses: 128158998
kstat.zfs.misc.arcstats.l2_feeds: 1524059
kstat.zfs.misc.arcstats.l2_rw_clash: 1429740
kstat.zfs.misc.arcstats.l2_read_bytes: 2896069043200
kstat.zfs.misc.arcstats.l2_write_bytes: 2405022640128
kstat.zfs.misc.arcstats.l2_writes_sent: 826642
kstat.zfs.misc.arcstats.l2_writes_done: 826642
kstat.zfs.misc.arcstats.l2_writes_error: 0
kstat.zfs.misc.arcstats.l2_writes_hdr_miss: 1059415
kstat.zfs.misc.arcstats.l2_evict_lock_retry: 1640
kstat.zfs.misc.arcstats.l2_evict_reading: 0
kstat.zfs.misc.arcstats.l2_free_on_write: 8580680
kstat.zfs.misc.arcstats.l2_abort_lowmem: 2096
kstat.zfs.misc.arcstats.l2_cksum_bad: 212832715
kstat.zfs.misc.arcstats.l2_io_error: 5501886
kstat.zfs.misc.arcstats.l2_size: 1587962307584
kstat.zfs.misc.arcstats.l2_asize: 1425666718720
kstat.zfs.misc.arcstats.l2_hdr_size: 82346948208
kstat.zfs.misc.arcstats.l2_compress_successes: 41707766
kstat.zfs.misc.arcstats.l2_compress_zeros: 0
kstat.zfs.misc.arcstats.l2_compress_failures: 0
kstat.zfs.misc.arcstats.l2_write_trylock_fail: 8847701930
kstat.zfs.misc.arcstats.l2_write_passed_headroom: 21220076
kstat.zfs.misc.arcstats.l2_write_spa_mismatch: 27619372107
kstat.zfs.misc.arcstats.l2_write_in_l2: 418007172085
kstat.zfs.misc.arcstats.l2_write_io_in_progress: 29279
kstat.zfs.misc.arcstats.l2_write_not_cacheable: 131001473113
kstat.zfs.misc.arcstats.l2_write_full: 63699
kstat.zfs.misc.arcstats.l2_write_buffer_iter: 1524059
kstat.zfs.misc.arcstats.l2_write_pios: 826642
kstat.zfs.misc.arcstats.l2_write_buffer_bytes_scanned: 8433038008130560
kstat.zfs.misc.arcstats.l2_write_buffer_list_iter: 96529899
kstat.zfs.misc.arcstats.l2_write_buffer_list_null_iter: 9228464


Here is output from zfs-stats about L2:

[frv:~]$ zfs-stats -L


ZFS Subsystem Report                            Mon Oct  7 20:50:19 2013


L2 ARC Summary: (DEGRADED)
Passed Headroom:21.22m
Tried Lock Failures:8.85b
IO In Progress: 29.32k
Low Memory Aborts:  2.10k
Free on Write:  8.59m
Writes While Full:  63.71k
R/W Clashes:1.43m
Bad Checksums:  213.07m
IO Errors:  5.51m
SPA Mismatch:   27.62b

L2 ARC Size: (Adaptive) 1.44TiB
Header Size:5.19%   76.70   GiB

L2 ARC Evicts:
Lock Retries:   1.64k
Upon Reading:   0

L2 ARC Breakdown:   705.25m
Hit Ratio:  81.82%  577.01m
Miss Ratio: 18.18%  128.24m
Feeds:  1.52m

L2 ARC Buffer:
Bytes Scanned:  7.49PiB
Buffer Iterations:  1.52m
List Iterations:96.55m
NULL List Iterations:   9.23m

L2 ARC Writes:
Writes Sent:100.00% 826.96k

--


In /boot/loader.conf (128GB RAM in system):

vm.kmem_size="110G"
vfs.zfs.arc_max="100G"
vfs.zfs.arc_min="80G"
vfs.zfs.vdev.cache.size=16M
vfs.zfs.vdev.cache.max="16384"

vfs.zfs.txg.timeout="10"
vfs.zfs.write_limit_min="134217728"

vfs.zfs.vdev.cache.bshift="14"
vfs.zfs.arc_meta_limit=53687091200

vfs.zfs.l2arc_write_max=25165824
vfs.zfs.l2arc_write_boost=50331648
vfs.zfs.l2arc_noprefetch=0

In /etc/sysctl.conf:

vfs.zfs.l2arc_write_max=2500
vfs.zfs.l2arc_write_boost=5000
vfs.zfs.l2arc_noprefetch=0
vfs.zfs.l2arc_headroom=8
vfs.zfs.l2arc_feed_min_ms=30
vfs.zfs.arc_meta_limit=53687091200




How can L2 AR

Re[2]: ZFS secondarycache on SSD problem on r255173

2013-09-18 Thread Dmitriy Makarov
The attached patch by Steven Hartland fixes the issue for me too. Thank you! 


--- Original message --- 
From: "Steven Hartland" < kill...@multiplay.co.uk > 
Date: 18 September 2013, 01:53:10 

- Original Message - 
From: "Justin T. Gibbs" < 

--- 
Dmitriy Makarov 

Re[3]: ZFS secondarycache on SSD problem on r255173

2013-09-16 Thread Dmitriy Makarov
And I have to say that the ashift of the main pool doesn't matter. 
I've tried creating the pool with ashift 9 (the default value) and with ashift 12 by 
creating gnops over the gpart devices, exporting the pool, destroying the gnops, and 
importing the pool. 
The problem with the cache device is the same. 
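
(The gnop sequence for that looks roughly like the following; the pool and device
names here are only placeholders:)

# gnop create -S 4096 /dev/gpt/disk0 /dev/gpt/disk1
# zpool create tank mirror /dev/gpt/disk0.nop /dev/gpt/disk1.nop
# zpool export tank
# gnop destroy /dev/gpt/disk0.nop /dev/gpt/disk1.nop
# zpool import tank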

There is no problem with the ZIL devices; they report ashift: 12 

   children[1]: 
    type: 'disk' 
    id: 1 
    guid: 6986664094649753344 
    path: '/dev/gpt/zil1' 
    phys_path: '/dev/gpt/zil1' 
    whole_disk: 1 
    metaslab_array: 67 
    metaslab_shift: 25 
    ashift: 12 
    asize: 4290248704 
    is_log: 1 
    create_txg: 22517 

The problem is with the cache devices only, but in the zdb output there is nothing at 
all about them. 

--- Original message --- 
From: "Steven Hartland" < kill...@multiplay.co.uk > 
Date: 16 September 2013, 14:18:31 

Can't say I've ever had an issue with gnop, but I haven't used it for
some time.

I did have a quick look over the weekend at your issue, and it looks
to me like the warning for the cache is a false positive, as the vdev
for the cache doesn't report an ashift in zdb, so it could well be falling
back to a default value.

I couldn't reproduce the issue for the log here; it just seems to work
for me. Can you confirm what ashift is reported for your devices
using: zdb 

Regards
Steve
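
(For example, assuming a pool named "tank" and a gpt-labelled cache device, the
per-vdev ashift can be checked with:)

# zdb -C tank | grep ashift
# zdb -l /dev/gpt/cache0 | grep ashift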
- Original Message - 
From: "Borja Marcos" < 

--- 
Dmitriy Makarov 


--- 
Dmitriy Makarov 

Re[2]: ZFS secondarycache on SSD problem on r255173

2013-09-16 Thread Dmitriy Makarov
There is no problem with the ZIL devices; they report ashift: 12 

   children[1]: 
    type: 'disk' 
    id: 1 
    guid: 6986664094649753344 
    path: '/dev/gpt/zil1' 
    phys_path: '/dev/gpt/zil1' 
    whole_disk: 1 
    metaslab_array: 67 
    metaslab_shift: 25 
    ashift: 12 
    asize: 4290248704 
    is_log: 1 
    create_txg: 22517 

The problem is with the cache devices only, but in the zdb output there is nothing at 
all about them. 

--- Original message --- 
From: "Steven Hartland" < kill...@multiplay.co.uk > 
Date: 16 September 2013, 14:18:31 

Can't say I've ever had an issue with gnop, but I haven't used it for
some time.

I did have a quick look over the weekend at your issue, and it looks
to me like the warning for the cache is a false positive, as the vdev
for the cache doesn't report an ashift in zdb, so it could well be falling
back to a default value.

I couldn't reproduce the issue for the log here; it just seems to work
for me. Can you confirm what ashift is reported for your devices
using: zdb 

Regards
Steve
- Original Message - 
From: "Borja Marcos" < 

--- 
Dmitriy Makarov 