Hi,
just an additional tip:
The inode-Number, the IP, the minor and the major number of the device
that serves NFS have to be the same to keep the same NFS-file-handle.
So either setup the drbd-devices accordingly on both sides or use lvm on
top of drbd and keep minor/major of lvm in sync.
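A quick sketch of how to check and pin this (the export path, client
network and fsid value are made-up examples):

  # on both nodes -- major/minor of the serving device must match
  ls -l /dev/drbd0              # DRBD devices use major 147
  stat -c '%t:%T' /dev/drbd0    # prints major:minor in hex

  # /etc/exports -- fsid= decouples the file handle from the device
  # numbers entirely (see exports(5))
  /export  192.168.0.0/24(rw,sync,fsid=745)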
Kind regards,
Hi Stefan,
Stefan Lasiewski wrote:
BakBone support tells me that Replicator will not keep the inodes in
sync, unfortunately.
Does DRBD keep inodes in sync? I didn't know it was possible to
transfer inodes from one filesystem to another filesystem. I thought
the inodes were explicitly set on each filesystem.
Stefan,
Stefan Lasiewski wrote:
I have the following haresources (it is copied to cib.xml) on each
host. Shouldn't this start up and shut down the services in the correct
order?
The correct start order is:
1. rpc.lockd
2. rpc.statd
3. export filesystems with exportfs
4. rpc.nfsd
5. rpc.mountd
6. bring up the service IP address
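In haresources terms that order falls out naturally if the resources
are listed like this (names and addresses are only an example;
heartbeat starts resources left to right and stops them right to
left, and on a Red Hat-style system the nfslock and nfs init scripts
roughly cover steps 1-5):

  fs1 drbddisk::r0 Filesystem::/dev/drbd0::/export::ext3 \
      nfslock nfs IPaddr::192.168.0.100/24

Putting IPaddr last means the service address only comes up once nfsd
and mountd are answering, and goes down first on shutdown.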
I don't know this Replicator, does it keep the inodes in sync?
If it does not, then it's not the right product for you.
Why are you not using DRBD?
DRBD operates below the filesystem (Distributed Replicated Block
Device), effectively replicating each "disk" block between two hosts.
So, yes, AFAIK, the filesystems will be truly identical (assuming
up-to-date sync, of course...)
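For reference, a minimal drbd.conf sketch for such a two-node pair
(the backing disk, addresses and resource name are placeholders):

  resource r0 {
    protocol C;              # synchronous: a write completes on both nodes
    on fs1 {
      device    /dev/drbd0;
      disk      /dev/sda7;   # local backing device (example only)
      address   192.168.1.1:7788;
      meta-disk internal;
    }
    on fs2 {
      device    /dev/drbd0;
      disk      /dev/sda7;
      address   192.168.1.2:7788;
      meta-disk internal;
    }
  }

With protocol C a write only returns once it has reached both nodes,
which is what keeps the filesystems (and thus the inodes) identical.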
Yan
Stefan Lasiewski wrote:
>> I don't know this Replicator, does it keep the inodes in sync?
In addition, I see these NFS options in the 'rpc.statd' man page.
These sound relevant, but I cannot find additional documentation on
these options:
--
SIGNALS
SIGUSR1 causes rpc.statd to re-read the notify list from disk and send
notifications to clients. This can be used in High Availability NFS
(HA-NFS) environments to notify clients to reacquire file locks upon
takeover.
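In a takeover script that boils down to something like this (a sketch;
it assumes /var/lib/nfs lives on the replicated device, so the notify
list moves over with the data):

  # after mounting the shared NFS state on the new primary:
  killall -USR1 rpc.statd    # re-read notify list, renotify clients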
When I shut down fs1 (primary NFS server), the following happens:
- fs1 removes bond0
- fs1 shuts down nfslock
- fs1 shuts down nfs
- fs2 brings up bond0
- fs2 starts up nfslock
- fs2 starts up nfs
This doesn't look right to me. Shouldn't fs2 first bring up nfs, and
only then start bond0? Otherwise clients could reach the service
address before the NFS daemons are running.
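For what it's worth, the takeover order I would expect on fs2 is
(address is an example):

  /etc/init.d/nfslock start
  /etc/init.d/nfs start
  ip addr add 192.168.0.100/24 dev bond0   # service address comes up last

so a client can never reach the address while nfsd is still down; the
shutdown on fs1 should then be the exact reverse.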