I have four file servers, all of which have at least two (some three)
aggregates, each made of two 10Gb links. Three of the four servers are
getting pretty good throughput as reported by iperf:
(hvfs1 has iperf -s, other hosts iperf -c)
[ ID] Interval Transfer Bandwidth
[ 5] 0.00-10.00
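Worth noting as an aside: LACP typically hashes each TCP flow onto a single
member link, so a one-stream iperf tops out at one 10Gb leg of the aggregate.
A sketch of the same test with parallel streams, using the hosts named above:

  hvfs1$ iperf -s
  otherhost$ iperf -c hvfs1 -t 10 -P 4    # -P 4 runs four parallel streams

The aggregate figure at the bottom of the client output is the one to compare
across hosts.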
I'm doing a fairly standard zfs send | ssh host zfs receive. At least, I
think it's fairly standard:
zfs send -R stor1/admin_homes@tobesent | ssh otherhost zfs receive -dv pool0
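One way to narrow down where the time goes, as a rough sketch using the same
snapshot and host names as above, is to time the send stream and the ssh
transport separately:

  # raw send speed, no network or receive side involved
  time zfs send -R stor1/admin_homes@tobesent | dd of=/dev/null bs=1M

  # ssh/cipher throughput on its own, 4 GiB of zeros
  time dd if=/dev/zero bs=1M count=4096 | ssh otherhost 'dd of=/dev/null bs=1M'

Dividing the bytes moved by the elapsed time shows whether the send itself,
the ssh hop, or the receive side is the slow piece.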
I believe permissions on the receiving host are correct:
Permissions on pool0 -
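For reference, if the receive runs as a non-root user, the delegation on the
target pool would look roughly like this ('backup' is a made-up user name;
none of this is needed when receiving as root):

  zfs allow backup create,mount,receive pool0
  zfs allow pool0        # shows what is currently delegated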
I have an /etc/inet/ndpd.conf file that has exactly two lines:
ifdefault StatelessAddrConf false
ifdefault StatefulAddrConf false
On my test host running 151022 when I
sudo ipadm create-addr -T addrconf aggr0/v6
ipadm show-addr shows the interface with an fe80:: address and nothing else.
How
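With both autoconf lines set to false, in.ndpd won't configure anything beyond
the link-local address, so an addrconf address object only ever gets the
fe80:: entry. If the goal is a routable address on that aggregate, one option
is a static address; the address object name and the prefix below (from the
documentation range) are purely placeholders:

  sudo ipadm create-addr -T static -a 2001:db8::10/64 aggr0/v6static
  ipadm show-addr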
Adam,
I'm having no problems at all with my 151022 hosts. They're all doing
well for NFS reads & writes. I only see the degradation in write speed
on the 151026 host I recently installed.
Have you looked at your scrub performance?
I had bad scrub performance on a host that had a bad drive
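For what it's worth, the quick checks for a slow or failing drive on a pool
like this would be along these lines (pool name assumed to be pool0):

  zpool status -v pool0   # scrub rate/progress and per-vdev error counts
  iostat -xn 5            # a disk with much higher asvc_t/%b than its peers stands out
  iostat -En              # per-device soft/hard/transport error counters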
(This doesn't appear to have gone out so I'm re-sending. Apologies if
it's a duplicate.)
On 8/23/18 16:43, Lee Damon wrote:
(I've just changed from digest to regular subscription as I see there
are messages relevant to this that I haven't received yet...)
Doug, I'm not familiar with the evil zfs tuning wiki mechanism. I'll
have to see if Google can help me find it.
As for the ZIL + L2ARC on the same SSD potentially bei
Do you mean c0t55CD2E414EC0FF43d0?
It's an SSD. It just has a long name because it's in a hotswap sled instead of
being inside the chassis.
Hardware properties:
    name='devid' type=string items=1
        value='id1,sd@n55cd2e414ec0ff43'
    name='class' type=
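A couple of ways to tie that long WWN-based name back to a physical disk, as
a sketch (the diskinfo tool only exists on newer releases):

  iostat -En c0t55CD2E414EC0FF43d0   # vendor, product, and serial number
  format < /dev/null                 # lists every disk with its ctd name
  diskinfo                           # chassis/bay view, where available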
On 8/23/18 10:22, Bob Friesenhahn wrote:
  logs
    mirror-1                   ONLINE       0     0     0
      c0t55CD2E414EC0FF43d0s0  ONLINE       0     0     0
      c3t0d0s0                 ONLINE       0     0     0
  cache
    c0t55CD2E414EC0FF43d0s1    ONLIN
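For anyone reproducing that layout: slice 0 of the SSD is mirrored with a
second device as the log, and slice 1 of the same SSD is the cache, so the
pool would have been built up roughly like this (pool name assumed to be
pool0):

  zpool add pool0 log mirror c0t55CD2E414EC0FF43d0s0 c3t0d0s0
  zpool add pool0 cache c0t55CD2E414EC0FF43d0s1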
These are 12TB SAS drives (Seagate ST12000NM0027) for data & hot spare. ZIL
& L2ARC are 480GB INTEL SSDSC2KG48 SSDs. Everything is left at default for
sector size, etc. They were basically prepared for the pool with a simple
fdisk -B /dev/rdsk/drive.
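Since 12TB drives of that class are typically 4K-native or 512e, it's worth
confirming what the pool actually recorded; a rough check (pool name assumed,
and "drive" is the same placeholder as above):

  zdb -C pool0 | grep ashift     # 12 = 4K-aligned, 9 = 512-byte alignment
  prtvtoc /dev/rdsk/drive | head # reports bytes/sector for the device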
Ping never shows loss of connectivity. I r
I recently installed a new host. It's so new that I couldn't install LTS on
it, so I've installed 151026.
This host is strictly for serving ZFS-based NFS & CIFS. Everything else is
just default.
Over time it has become fairly obvious to me that NFS writes are ... well,
abysmal.
This example is copying a 36
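For comparison, a plain write test run from an NFS client against the new
host would look something like this (the mount point is hypothetical):

  time dd if=/dev/zero of=/mnt/hvfs/testfile bs=1M count=4096

NFS write loads lean heavily on synchronous semantics, so if that crawls while
local writes on the server are fine, the log devices are the first place to
look.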