Hi,
what are the current best practices to set up an HA NFS server? I see
that EPEL no longer contains the drbd packages for CentOS 8 for example.
Also a lot of documents on the internet still refer to either Pacemaker
1.x or meddle with the fsid, which apparently is no longer recommended.
What is
Hi,
I'm currently trying to set up a drbd 8.4 resource in a 3-node pacemaker
cluster. The idea is to have nodes storage1 and storage2 running with
the drbd clones and only use the third node storage3 for quorum.
The way I'm trying to do this:
pcs cluster cib cib.xml
pcs -f cib.xml resource create
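The truncated commands above stage changes in an offline copy of the CIB
before pushing them in one transaction. A minimal sketch of how such a staged
setup might continue, assuming a Pacemaker 1.x-era pcs; the resource names
(drbd, r0, drbd-clone) are illustrative, only storage3 comes from the message:

```shell
# Sketch only: resource/agent names (drbd, r0, drbd-clone) are assumptions.
# Stage all changes in an offline copy of the CIB, then push once.
pcs cluster cib cib.xml
pcs -f cib.xml resource create drbd ocf:linbit:drbd drbd_resource=r0 \
    op monitor interval=29s role=Master \
    op monitor interval=31s role=Slave
pcs -f cib.xml resource master drbd-clone drbd \
    master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
# Keep the clone instances off the quorum-only node:
pcs -f cib.xml constraint location drbd-clone avoids storage3
pcs cluster cib-push cib.xml
```

Staging in a file and pushing once avoids the cluster reacting to a
half-configured resource between commands.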
On 11.09.19 16:51, Ken Gaillot wrote:
> On Tue, 2019-09-10 at 09:54 +0200, Dennis Jacobfeuerborn wrote:
>> Hi,
>> I just updated the timeout for the stop operation on an nfs cluster
>> and
>> while the timeout was updated the status suddenly showed
Hi,
I just updated the timeout for the stop operation on an nfs cluster and
while the timeout was updated the status suddenly showed this:
Failed Actions:
* nfsserver_monitor_1 on nfs1aqs1 'unknown error' (1): call=41,
status=Timed Out, exitreason='none',
last-rc-change='Tue Aug 13
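For what it's worth, a failure record like this lingers in the status section
until it is explicitly cleaned up, so it can surface right after an unrelated
configuration change. A hedged sketch, assuming the resource is named
nfsserver as in the output above (the timeout value is an assumption):

```shell
# Sketch only: the 120s value is an assumption, pick one that fits your setup.
pcs resource update nfsserver op stop timeout=120s
# The old monitor failure stays under "Failed Actions" until cleaned up:
pcs resource cleanup nfsserver
```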
On 03.11.2017 15:49, Ken Gaillot wrote:
> On Thu, 2017-11-02 at 23:18 +0100, Dennis Jacobfeuerborn wrote:
>> On 02.11.2017 23:08, Dennis Jacobfeuerborn wrote:
>>> Hi,
>>> I'm setting up a redundant NFS server for some experiments but
>>> almost
>>> i
On 02.11.2017 23:08, Dennis Jacobfeuerborn wrote:
> Hi,
> I'm setting up a redundant NFS server for some experiments but almost
> immediately ran into a strange issue. The drbd clone resource never
> promotes either of the two clones to the Master state.
>
> The state says this:
Hi,
I'm setting up a redundant NFS server for some experiments but almost
immediately ran into a strange issue. The drbd clone resource never
promotes either of the two clones to the Master state.
The state says this:
Master/Slave Set: drbd-clone [drbd]
Slaves: [ nfsserver1 nfsserver2 ]
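One common cause of a DRBD master/slave resource sitting in Slave on both
nodes is a master resource created without notify=true; the ocf:linbit:drbd
agent relies on clone notifications to compute promotion. A hedged sketch,
assuming the resource name from the status output above:

```shell
# Sketch only: assumes the master/slave resource is named drbd-clone.
pcs resource meta drbd-clone notify=true
# Independently verify DRBD is Connected/UpToDate outside Pacemaker's view:
cat /proc/drbd
```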
On 31.10.2017 12:58, Ferenc Wágner wrote:
> Dennis Jacobfeuerborn <denni...@conversis.de> writes:
>
>> if I create a new unit file for the new file the services would not
>> depend on it so it wouldn't get automatically mounted when they start.
>
> Put the new unit
Hi,
I'm trying to create a redundant NFS system but hit a problem with the
way the nfs packages on RHEL/CentOS 7 handle the sunrpc mount point.
I put /var/lib/nfs on its own redundant drbd device but on a failover the
nfsserver resource agent complains that something is already mounted
below
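One way around the rpc_pipefs collision is to avoid mounting the DRBD device
directly over /var/lib/nfs and instead point the resource agent's
nfs_shared_infodir parameter at a directory on the replicated filesystem. A
hedged sketch; the device, mountpoint and group names are assumptions:

```shell
# Sketch only: device, mountpoint and group names are assumptions.
pcs resource create nfs-fs ocf:heartbeat:Filesystem \
    device=/dev/drbd0 directory=/srv/nfs fstype=ext4
pcs resource create nfsserver ocf:heartbeat:nfsserver \
    nfs_shared_infodir=/srv/nfs/nfsinfo op monitor interval=30s
pcs resource group add nfs-group nfs-fs nfsserver
```

The agent then relocates the NFS state directory itself, so nothing has to be
mounted over /var/lib/nfs while rpc_pipefs is live underneath it.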
On 11.10.2016 12:42, Christine Caulfield wrote:
> I've just committed a big patch to the master branch of corosync - it is
> now all very experimental, and existing pull requests against master
> might need to be checked. This starts the work on what will hopefully
> become corosync 3.0
>
> The
On 02.06.2016 09:18, Ferenc Wágner wrote:
> "Stephano-Shachter, Dylan" writes:
>
>> I can not figure out why version 4 is not supported.
>
> Have you got fsid=root (or fsid=0) on your root export?
> See man exports.
>
This is apparently no longer recommended:
On 01.06.2016 20:25, Stephano-Shachter, Dylan wrote:
> Hello all,
>
> I have just finished setting up my HA nfs cluster and I am having a small
> problem. I would like to have nfs4 working but whenever I try to mount I
> get the following message,
>
> mount: no type was given - I'll assume nfs
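For context, the classic fsid=0 pseudo-root setup Ferenc is asking about
looked roughly like the sketch below; per the remark earlier in the thread it
is apparently no longer recommended on recent nfs-utils, where exports are
reachable over v4 without an explicit root export. Paths, network and
hostname are assumptions:

```shell
# Sketch only: paths, network and hostname are assumptions.
# /etc/exports lines shown as comments:
#   /export       192.168.1.0/24(rw,sync,fsid=0,crossmnt,no_subtree_check)
#   /export/data  192.168.1.0/24(rw,sync,no_subtree_check)
exportfs -ra
# A v4 client mounts relative to the fsid=0 pseudo-root, not the real path:
mount -t nfs4 nfsserver:/data /mnt
```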
On 18.03.2016 00:50, Digimer wrote:
> On 17/03/16 07:30 PM, Christopher Harvey wrote:
>> On Thu, Mar 17, 2016, at 06:24 PM, Ken Gaillot wrote:
>>> On 03/17/2016 05:10 PM, Christopher Harvey wrote:
If I ignore pacemaker's existence, and just run corosync, corosync
disagrees about node
On 17.03.2016 08:45, Andrei Borzenkov wrote:
> On Wed, Mar 16, 2016 at 9:35 PM, Mike Bernhardt wrote:
>> I guess I have to say "never mind!" I don't know what the problem was
>> yesterday, but it loads just fine today, even when the named config and the
>> virtual ip don't