fwiw, nfssvrstat breaks down the NFS writes by sync, async, and commits,
specifically so you can see how the workload will impact the ZIL. For
writing many files, the (compound) operations can also include creates and
sync-on-close, which also impact performance.
-- richard
> On Aug 23, 2018, at
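If nfssvrstat isn't to hand, a rough first look at how write- and
commit-heavy the workload is can be had from the stock server-side
counters; it won't give the sync/async split, but it is a start:

  # nfsstat -s     (per-operation counts; watch how write and commit grow during a copy)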
(This doesn't appear to have gone out so I'm re-sending. Apologies if
it's a duplicate.)
On 8/23/18 16:43, Lee Damon wrote:
(I've just changed from digest to regular subscription as I see there
are messages relevant to this that I haven't received yet...)
Doug, I'm not familiar with the evil
Evil tuning here:
https://www.solaris-cookbook.eu/solaris/solaris-10-zfs-evil-tuning-guide/
It's at the bottom, where it says "Disabling the ZIL (Don't)".
I could see a lack of background TRIM/erase support as a strong
possibility, caused by continuous use of blocks from the L2ARC over time.
(I've just changed from digest to regular subscription as I see there
are messages relevant to this that I haven't received yet...)
Doug, I'm not familiar with the evil zfs tuning wiki mechanism. I'll
have to see if Google can help me find it.
As for the ZIL+ L2ARC on the same SSD potentially bei
Out of curiosity, if you disable the zil through the evil zfs tuning
wiki mechanisms (diagnostic purposes only), does it dramatically help?
If not, there's something else going on; if yes, it could be that the
l2arc and zil are interfering with each other (I could imagine that the
l2arc is cau
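For reference, the per-dataset equivalent of that old zil_disable tunable
on a current system is the sync property. A minimal sketch, assuming the
exported filesystem is tank/export (names are illustrative; diagnostic use
only, and only if losing in-flight writes on a crash is acceptable):

  # zfs set sync=disabled tank/export    (stop honouring synchronous write semantics, bypassing the ZIL)
    ... re-run the NFS write test ...
  # zfs set sync=standard tank/export    (restore the default when finished)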
NFS writes (especially for lots of small files) to omniOS *really*
benefit from having ZIL on those SSD.
You could remove the cache from the pool, carve off an 8GB chunk for ZIL
on each and the rest for L2arc if you want that.
Then add a mirrored zil using the 8GB chunks and the other partitio
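A rough sketch of what that could look like, assuming the pool is named
tank and each SSD ends up with an ~8GB slice 0 and the remainder as slice 1
after repartitioning (pool name is made up; device names are borrowed from
the zpool status output in this thread, purely for illustration):

  # zpool remove tank c0t55CD2E414EC0FF43d0s1     (drop the existing cache device)
    ... repartition both SSDs: ~8GB as s0 for the log, the rest as s1 ...
  # zpool add tank log mirror c0t55CD2E414EC0FF43d0s0 c3t0d0s0
  # zpool add tank cache c0t55CD2E414EC0FF43d0s1 c3t0d0s1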
Hello all!
After some hours of frustration, I wrote:
> I have a very strange problem doing a pkg update on a r151026 system.
> This machine has 11 NGZs, all are lipkg brand.
[...]
> Effectively I cannot pkg update my system including the zones any more.
> I have previously updated this system
Do you mean c0t55CD2E414EC0FF43d0?
It's an SSD. It just has a long name because it's in a hotswap sled instead
of being inside the chassis.
Hardware properties:
    name='devid' type=string items=1
        value='id1,sd@n55cd2e414ec0ff43'
    name='class' type=
On Thu, 23 Aug 2018, Lee Damon wrote:
These are 12TB SAS drives (Seagate ST12000NM0027) for data & hot spare. ZIL
& L2ARC are 480GB INTEL SSDSC2KG48 SSDs. Everything is left at default for
sector size, etc. They were basically prepared for the pool with a
simple fdisk -B /dev/rdsk/drive.
On Thu, 23 Aug 2018, Bob Friesenhahn wrote:
Just in case you did not see my follow-up post, it looks like there is an
error in your pool configuration in that a large spinning disk was added as a
log device rather than an SSD as was intended. Luckily it should be possible
to fix this without rest
On Thu, 23 Aug 2018, Doug Hughes wrote:
NFS writes (especially for lots of small files) to omniOS *really* benefit
from having ZIL on those SSD.
You could remove the cache from the pool, carve off an 8GB chunk for ZIL on
each and the rest for L2arc if you want that.
Then add a mirrored zil u
Lee,
Just in case you did not see my follow-up post, it looks like there is
an error in your pool configuration in that a large spinning disk was
added as a log device rather than an SSD as was intended. Luckily it
should be possible to fix this without restarting the pool from
scratch.
Bob
--
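If that does turn out to be the case, one way to do it without rebuilding,
assuming the pool is named tank and c3t0d0s0 is the offending device (names
taken from the zpool status output quoted in this thread; this is a sketch,
not a tested recipe):

  # zpool detach tank c3t0d0s0          (drop it from the mirrored log vdev)
  # zpool attach tank c0t55CD2E414EC0FF43d0s0 <slice on the intended SSD>
                                        (re-mirror the log onto the right device)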
On 8/23/18 10:22, Bob Friesenhahn wrote:
logs
  mirror-1                   ONLINE       0     0     0
    c0t55CD2E414EC0FF43d0s0  ONLINE       0     0     0
    c3t0d0s0                 ONLINE       0     0     0
cache
  c0t55CD2E414EC0FF43d0s1    ONLIN
> When I omit the "-r", things change:
>
> # zonename
> omnib0
Wrong cut&paste; the problem is in the GZ.
Thanks -- Volker
--
Volker A. Brandt                  Consulting and Support for Solaris-based Systems
Brandt & Brandt Co
shows some packages it wants to update, then does its thing, and
... stops. No new BE is created:
# pkg update -v -rC0 --be-name=ooce-026-20180823
[...]
Planning linked: 9/11 done; 2 working: zone:kayak zone:omnit3
Linked image 'zone:omnit3' output:
| Pack
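In case it helps narrow it down, a couple of harmless checks that change
nothing on the system:

  # pkg update -nv -rC0     (dry run: compute the plan only, apply nothing)
  # beadm list              (confirm whether a new BE actually got created)
  # zoneadm list -cv        (current state of the lipkg zones)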
These are 12TB SAS drives (Seagate ST12000NM0027) for data & hot spare. ZIL
& L2ARC are 480GB INTEL SSDSC2KG48 SSDs. Everything is left at default for
sector size, etc. They were basically prepared for the pool with a
simple fdisk -B /dev/rdsk/drive.
Ping never shows loss of connectivity. I r
What does 'zpool status poolname' (replace poolname with the name of
the pool which is NFS exported) say?
What is the output of 'iostat -xnE' on your new server?
What is the native block size for the disks you used and what is the
nature of the disks (SATA, SAS, near-term storage, exceptionall
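On the block-size question, one way to see what the pool actually ended up
with versus what the drives report, assuming the pool is named tank (pool
and device names are illustrative):

  # zdb -C tank | grep ashift     (ashift: 9 = 512-byte allocations, 12 = 4K)
  # prtvtoc /dev/rdsk/c0t55CD2E414EC0FF43d0s0 | head
                                  (the header comments include bytes/sector)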
I recently installed a new host. So new I couldn't install LTS on it, so
I've installed 151026.
This host is strictly for serving ZFS-based NFS & CIFS. Everything else is
just default.
Over time it has become fairly obvious to me that NFS writes are ... well,
abysmal.
This example is copying a 36