On 11/10/15 22:11, Richard PALO wrote:
Not to get paranoid, but very recently I've been noticing occasional issues
with the file system, mostly with Thunderbird giving:
The "Inbox" folder cannot be compacted because of a write error. Check that
there is enough disk space and that you have write permission on this file
system before trying again.
basically saying that it can't compact because of a write error...
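To rule out Thunderbird itself, a direct write into the same dataset should
reproduce it (the file name below is just an example; clean it up afterwards):

# hypothetical quick test: write a small file to the affected dataset
richard@omnis:/home/richard$ dd if=/dev/zero of=/home/richard/enospc-test bs=1024k count=16
# if the dataset is really out of space, dd should fail with
# "No space left on device"
richard@omnis:/home/richard$ rm -f /home/richard/enospc-test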
Working on a build where I log the output via tee, I got:
richard@omnis:/home/richard/src/pkgsrc/mail/thunderbird31$ bmake |& tee bar.log
=> Bootstrap dependency digest>=20010302: found digest-20121220
=> Bootstrap dependency digest>=20010302: found digest-20121220
tee: read error [No space left on device]
=> Bootstrap dependency digest>=20010302: found digest-20121220
=> Bootstrap dependency digest>=20010302: found digest-20121220
tee: write error [No space left on device]
=> Bootstrap dependency digest>=20010302: found digest-20121220
=> Bootstrap dependency digest>=20010302: found digest-20121220
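If it keeps happening, a DTrace one-liner along these lines should show which
processes are actually getting the error back from write(2) (a sketch; errno
28 is ENOSPC on illumos):

# count write(2) calls failing with ENOSPC (errno 28), per process
richard@omnis:/home/richard$ pfexec dtrace -n \
    'syscall::write:return /errno == 28/ { @[execname] = count(); }'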
richard@omnis:/home/richard/src/pkgsrc/mail/thunderbird31$ zpool list
NAME    SIZE  ALLOC   FREE  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
dpool   238G   222G  16,2G         -   72%  93%  1.00x  ONLINE  -
rpool  36,8G  21,9G  14,9G         -   63%  59%  1.00x  ONLINE  -
as you can see, I have plenty of space, and no logged errors either:
richard@omnis:/home/richard$ zpool status
pool: dpool
state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
the pool may no longer be accessible by software that does not support
the features. See zpool-features(5) for details.
scan: scrub repaired 0 in 0h17m with 0 errors on Thu Oct 1 13:50:34 2015
config:
        NAME      STATE     READ WRITE CKSUM
        dpool     ONLINE       0     0     0
          c1t4d0  ONLINE       0     0     0
errors: No known data errors
pool: rpool
state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
the pool may no longer be accessible by software that does not support
the features. See zpool-features(5) for details.
scan: scrub repaired 0 in 0h2m with 0 errors on Thu Oct 1 13:35:21 2015
config:
        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c1t5d0s0  ONLINE       0     0     0
errors: No known data errors
I was worried that it may be an SSD going to pot...
It seems to be cyclic, every x minutes or so...
Anybody else seeing funniness? I'm running the gate from 7 October.
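Before blaming the SSD, the device error counters and the FMA error log are
probably worth a look (device names taken from the zpool status above):

# soft/hard/transport error counters per device
richard@omnis:/home/richard$ iostat -En c1t4d0 c1t5d0
# any error reports (ereports) logged by FMA
richard@omnis:/home/richard$ pfexec fmdump -e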
Actually, things have considerably degraded. Just tried (via ssh) zfs list:
richard@omnis:/home/richard$ zfs list -r dpool
NAME                                  USED  AVAIL  REFER  MOUNTPOINT
dpool                                 231G      0   248K  /dpool
dpool/ROOT                            288K      0   288K  legacy
dpool/dump                           12,0G      0  12,0G  -
dpool/export                          177G      0   288K  /dpool/export
dpool/export/home                     156G      0   256K  /dpool/export/home
dpool/export/home/krista             3,94G      0  3,94G  /dpool/export/home/krista
dpool/export/home/richard             152G      0   109G  /dpool/export/home/richard
dpool/export/home/richard/src        43,3G      0  43,3G  /dpool/export/home/richard/src
dpool/export/psql_data_64             659M      0   659M  /dpool/export/psql_data_64
dpool/export/pub                     19,1G      0   272K  /export/pub
dpool/export/pub/pkgsrc              19,1G      0  17,6G  /export/pub/pkgsrc
dpool/export/pub/pkgsrc/src          1,54G      0  1,54G  /export/pub/pkgsrc/src
dpool/export/zone                    1,03G      0   232K  /export/zone
dpool/export/zone/dev64              1,03G      0   192K  /export/zone/dev64
dpool/export/zone/dev64/ROOT         1,03G      0   192K  legacy
dpool/export/zone/dev64/ROOT/zbe-55  7,75M      0  1,03G  legacy
dpool/export/zone/dev64/ROOT/zbe-56  1,03G      0  1,03G  legacy
dpool/swap                           17,0G  8,67G  8,34G  -
dpool/var                            24,7G      0   192K  /var
dpool/var/crash                      24,5G      0   192K  /var/crash
dpool/var/crash/omnis                24,5G      0  24,5G  /var/crash/omnis
dpool/var/postgres                    258M      0   336K  /var/postgres
dpool/var/postgres/data               257M      0   257M  /var/postgres/data
AVAIL = 0 for all but the zvols...?!!
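One thing worth ruling out: a quota or (ref)reservation committing the
remaining free space (dpool/swap in particular is a zvol and normally carries
a refreservation). Something like this should show where the space is going:

# break down used space per dataset, including refreservation
richard@omnis:/home/richard$ zfs list -ro space dpool
# list explicit quotas and reservations
richard@omnis:/home/richard$ zfs get -r -t filesystem,volume \
    quota,reservation,refreservation dpool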
richard@omnis:/home/richard$ pfexec mdb -k
Loading modules: [ unix genunix specfs dtrace mac cpu.generic uppc pcplusmp
scsi_vhci zfs sata sd ip hook neti sockfs arp usba stmf stmf_sbd lofs idm cpc
nfs ufs logindmux ptm ]
> ::memstat
Page Summary                 Pages             MB   %Tot
------------      ----------------  -------------  -----
Kernel                      787007           3074     9%
ZFS File Data                42705            166     1%
Anon                         30417            118     0%
Exec and libs                 1158              4     0%
Page cache                    5676             22     0%
Free (cachelist)              9515             37     0%
Free (freelist)            7508596          29330    90%

Total                      8385074          32754
Physical                   8385072          32754
> ::arc
hits = 176259
misses = 21876
demand_data_hits = 53077
demand_data_misses = 8074
demand_metadata_hits = 115225
demand_metadata_misses = 10590
prefetch_data_hits = 18
prefetch_data_misses = 63
prefetch_metadata_hits = 7939
prefetch_metadata_misses = 3149
mru_hits = 50193
mru_ghost_hits = 0
mfu_hits = 118112
mfu_ghost_hits = 0
deleted = 19
mutex_miss = 0
evict_skip = 939
evict_not_enough = 0
evict_l2_cached = 0 MB
evict_l2_eligible = 0 MB
evict_l2_ineligible = 0 MB
evict_l2_skip = 0
hash_elements = 8263
hash_elements_max = 8264
hash_collisions = 45
hash_chains = 15
hash_chain_max = 1
p = 13817 MB
c = 13817 MB
c_min = 3454 MB
c_max = 27635 MB
size = 253 MB
hdr_size = 2 MB
data_size = 161 MB
metadata_size = 73 MB
other_size = 16 MB
anon_size = 0 MB
anon_evictable_data = 0 MB
anon_evictable_metadata = 0 MB
mru_size = 182 MB
mru_evictable_data = 118 MB
mru_evictable_metadata = 41 MB
mru_ghost_size = 0 MB
mru_ghost_evictable_data = 0 MB
mru_ghost_evictable_metadata = 0 MB
mfu_size = 51 MB
mfu_evictable_data = 42 MB
mfu_evictable_metadata = 4 MB
mfu_ghost_size = 0 MB
mfu_ghost_evictable_data = 0 MB
mfu_ghost_evictable_metadata = 0 MB
l2_hits = 0
l2_misses = 0
l2_feeds = 0
l2_rw_clash = 0
l2_read_bytes = 0 MB
l2_write_bytes = 0 MB
l2_writes_sent = 0
l2_writes_done = 0
l2_writes_error = 0
l2_writes_lock_retry = 0
l2_evict_lock_retry = 0
l2_evict_reading = 0
l2_evict_l1cached = 0
l2_free_on_write = 0
l2_cdata_free_on_write = 0
l2_abort_lowmem = 0
l2_cksum_bad = 0
l2_io_error = 0
l2_size = 0 MB
l2_asize = 0 MB
l2_hdr_size = 0 MB
l2_compress_successes = 0
l2_compress_zeros = 0
l2_compress_failures = 0
memory_throttle_count = 0
duplicate_buffers = 0
duplicate_buffers_size = 0 MB
duplicate_reads = 91
arc_meta_used = 92 MB
arc_meta_limit = 6908 MB
arc_meta_max = 92 MB
arc_meta_min = 1727 MB
sync_wait_for_async = 8
demand_hit_predictive_prefetch = 88
arc_no_grow = 0
arc_tempreserve = 0 MB
gulp... help?
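While waiting for ideas I'll sample the pool and the ARC once a minute to try
to catch the cycle (the interval and log path are just guesses):

# log free space and ARC size once a minute
richard@omnis:/home/richard$ while true; do
    date
    zpool list dpool
    kstat -p zfs:0:arcstats:size
    sleep 60
done >> /var/tmp/space-trace.log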