On Thu, Feb 18, 2016 at 3:56 PM, Richard Elling <richard.ell...@richardelling.com> wrote:
> Related to lock manager is name lookup. If you use name services, you add a latency
> dependency to failover for name lookups, which is why we often disable DNS or other
> network name services on
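The point above about taking name services out of the failover path is typically done in the name-service switch. A minimal sketch, assuming an illumos/OmniOS-style `/etc/nsswitch.conf` (the exact file and syntax vary by OS, and whether this is appropriate depends on the deployment):

```
# /etc/nsswitch.conf on the cluster heads -- resolve hosts from local
# files only, so failover never blocks on a DNS timeout:
hosts:    files
ipnodes:  files
# The cluster peers and the VIP then need static entries in /etc/hosts.
```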
On 18.02.16 at 21:57, Schweiss, Chip wrote:
On Thu, Feb 18, 2016 at 5:14 AM, Michael Rasmussen wrote:
On Thu, 18 Feb 2016 07:13:36 +0100
Stephan Budach wrote:
>
> So, when I
On 18.02.16 at 22:56, Richard Elling wrote:
comments below...
On Feb 18, 2016, at 12:57 PM, Schweiss, Chip wrote:
On Thu, Feb 18, 2016 at 5:14 AM, Michael Rasmussen wrote:
On Thu, 18 Feb 2016
On 18.02.16 at 12:14, Michael Rasmussen wrote:
On Thu, 18 Feb 2016 07:13:36 +0100
Stephan Budach wrote:
So, when I issue a simple ls -l on the folder of the vdisks, while the
switchover is happening, the command sometimes completes in 18 to 20 seconds,
but sometimes ls
On Thu, 18 Feb 2016 07:13:36 +0100
Stephan Budach wrote:
>
> So, when I issue a simple ls -l on the folder of the vdisks, while the
> switchover is happening, the command sometimes completes in 18 to 20 seconds,
> but sometimes ls will just sit there for minutes.
>
This
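A quick way to quantify the stall described above is to time the listing in a loop while the switchover runs. A sketch (the mount point `/mnt/vdisks` is a placeholder):

```shell
#!/bin/sh
# Time a single directory listing; prints elapsed wall-clock seconds.
time_ls() {
    start=$(date +%s)
    ls -l "$1" > /dev/null 2>&1
    echo $(( $(date +%s) - start ))
}

# During a failover test, sample repeatedly against the NFS mount and
# watch for listings that jump from sub-second to tens of seconds:
for i in $(seq 1 5); do
    echo "ls took $(time_ls /mnt/vdisks)s"
    sleep 1
done
```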
On 18.02.16 at 09:29, Andrew Gabriel wrote:
On 18/02/2016 06:13, Stephan Budach wrote:
Hi,
I have been test driving RSF-1 for the last week to accomplish the
following:
- cluster a zpool, that is made up from 8 mirrored vdevs, which are
based on 8 x 2 SSD mirrors via iSCSI from another
On 18.02.16 at 08:59, Dale Ghent wrote:
Are you using NFS over TCP or UDP?
If using it over TCP, I would expect the TCP connection to get momentarily
unhappy when its connection stalls and packets might need to be retransmitted
after the floating IP's new MAC address is asserted. Have you
On 18/02/2016 06:13, Stephan Budach wrote:
Hi,
I have been test driving RSF-1 for the last week to accomplish the
following:
- cluster a zpool, that is made up from 8 mirrored vdevs, which are
based on 8 x 2 SSD mirrors via iSCSI from another OmniOS box
- export an NFS share from above
If that's the case, perhaps you should check to see if the NFS ports are open
upon failover. If they open just as quickly as the pings respond, then I would
blame the NFS lock manager, or NFS in general. The action to remedy that is
beyond my scope, other than to try forcing a remount
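Checking whether the NFS ports open promptly after failover, as suggested above, can be sketched like this (the VIP address is a placeholder; RSF-1 itself is not assumed to provide any of this):

```shell
#!/bin/bash
# Probe the VIP after a failover to see whether nfsd answers before
# suspecting the lock manager.

wait_for_port() {
    # wait_for_port HOST PORT TIMEOUT_SECS -> 0 once the port accepts
    # a TCP connection, 1 if it never opens within the timeout.
    local host=$1 port=$2 deadline=$3 t=0
    while [ "$t" -lt "$deadline" ]; do
        if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
            return 0
        fi
        sleep 1
        t=$((t + 1))
    done
    return 1
}

# nfsd itself listens on 2049; mountd/lockd/statd registrations can be
# listed with: rpcinfo -p <VIP>
# Usage in a failover test (VIP is a placeholder):
#   wait_for_port 192.168.0.10 2049 30 && echo "nfsd reachable"
```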
Are you using NFS over TCP or UDP?
If using it over TCP, I would expect the TCP connection to get momentarily
unhappy when its connection stalls and packets might need to be retransmitted
after the floating IP's new MAC address is asserted. Have you tried UDP instead?
/dale
> On Feb 18,
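Dale's TCP-versus-UDP question maps to the client-side mount options. A hedged fstab sketch (server name, export path, and tunable values are placeholders; NFS over UDP is generally only reasonable for NFSv3 on a clean LAN):

```
# /etc/fstab on an NFS client -- TCP, the usual default:
nfs-vip:/tank/nfsshare  /mnt/nfsshare  nfs  vers=3,proto=tcp,timeo=600,retrans=2  0 0
# The same mount over UDP, to test whether the stall is the TCP
# connection recovering after the VIP moves to a new MAC:
# nfs-vip:/tank/nfsshare  /mnt/nfsshare  nfs  vers=3,proto=udp,timeo=11,retrans=5  0 0
```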
Hi Michael,
On 18.02.16 at 08:17, Michael Talbott wrote:
While I don't have a setup like you've described, I'm going to take a wild
guess and say check your switches' (and servers') ARP tables. Perhaps the switch
isn't updating your VIP address with the other server's MAC address fast enough.
While I don't have a setup like you've described, I'm going to take a wild
guess and say check your switches' (and servers') ARP tables. Perhaps the switch
isn't updating your VIP address with the other server's MAC address fast enough.
Maybe as part of the failover script, throw a command to your
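The failover-script idea above is usually implemented as a gratuitous ARP announcement for the VIP, so switches update their tables immediately instead of waiting for a cache timeout. A sketch assuming the Linux-style `arping` utility (the interface name and address are placeholders, and RSF-1 may already do this on its own):

```shell
#!/bin/sh
# Build the gratuitous-ARP command for a post-failover hook.
# -A: send unsolicited ARP replies (gratuitous announce)
# -c 3: send three packets
# -I: the interface that now holds the VIP
gratuitous_arp_cmd() {
    printf 'arping -A -c 3 -I %s %s' "$1" "$2"
}

# In the service-start hook one would run (root required), e.g.:
#   $(gratuitous_arp_cmd e1000g0 192.168.0.10)
gratuitous_arp_cmd e1000g0 192.168.0.10
echo
```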
Hi,
I have been test driving RSF-1 for the last week to accomplish the
following:
- cluster a zpool, that is made up from 8 mirrored vdevs, which are
based on 8 x 2 SSD mirrors via iSCSI from another OmniOS box
- export an NFS share from above zpool via a VIP
- have RSF-1 provide the