Thanks for answering!
I have another question.
Is the write speed on /dev/drbd0 equal to the transmission speed of
the data through the network?
Thanks.
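(With protocol C, a write on /dev/drbd0 only completes once the peer has
acknowledged it, so sustained write throughput is roughly bounded by the
replication link. A rough way to compare the two yourself - a sketch only;
the device and interface names are placeholders for your setup:)

# WARNING: writing to the raw DRBD device destroys whatever is on it!
# Measure sustained write throughput, bypassing the page cache:
dd if=/dev/zero of=/dev/drbd0 bs=1M count=1024 oflag=direct

# While dd runs, watch the byte counters on the replication interface:
watch -n1 'grep eth1 /proc/net/dev'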
On Tue, Sep 21, 2010 at 11:40 PM, Lars Ellenberg wrote:
> On Tue, Sep 21, 2010 at 09:30:06AM -0600, Mike Lovell wrote:
>> Tomki wrote:
>>
On Fri, 24 Sep 2010, Matt Ball, IT Hardware Manager wrote:
Is there any way to connect a bonded NIC (trunk) and have drbd communicate
through it?
Yes, I use dual point-to-point links (with regular cables, not crossover)
with balance-rr and MTU=9000, dedicated to DRBD. The NICs are Intel
825
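(The post is cut off here, but for reference, a minimal sketch of a
dedicated balance-rr bond like the one described - the interface names
eth2/eth3 and the address are assumptions, not from the original post:)

# load the bonding driver in round-robin mode
modprobe bonding mode=balance-rr miimon=100

# bring the bond up with jumbo frames on a private point-to-point subnet
ifconfig bond0 10.0.0.1 netmask 255.255.255.252 mtu 9000 up

# enslave the two NICs dedicated to DRBD replication
ifenslave bond0 eth2 eth3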
I have set up NIC bonding with DRBD successfully, but we need to try to
reduce the number of switch ports required, as we need to remotely deploy
several clusters. We have 2 bonded NICs, a total of 4 NICs/cables, plus one
for iLO. So we have 6 that must go into the switch (3 from each cluster)
but w
On Fri, Sep 24, 2010 at 11:12:35AM +0200, Nicolae Mihalache wrote:
> Hello,
>
> I've been reading about the barriers (no-disk-barrier option) in drbd.
> I understand that when the primary gets an I/O completion notification,
> it will issue a barrier request (actually start a new epoch) to the
> sec
On Fri, Sep 24, 2010 at 02:35:04PM +0200, Pavlos Parissis wrote:
> Hi,
>
> Here is a situation from which I want either automatic (by the cluster) or
> manual (by the admin) recovery.
>
> DRBD resource runs on node 1
> shut down all nodes in an order that will not cause a failover of
Andreas,
By saying "If I make a benchmark by writing with dd to the local (virtual
disk, lying on storage cluster) of the test-VM, I get only about 20-25 MB/s."
what is your block size? I guess IOPS is really what you should be concerned
about. The other thing is, try to assign the Ethernet HBA as "dir
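(Block size changes dd numbers dramatically: small blocks effectively
measure IOPS, large blocks measure streaming bandwidth. A sketch - the
target path is a placeholder:)

# ~4k write performance, bypassing the guest page cache
dd if=/dev/zero of=/mnt/test/ddfile bs=4k count=100000 oflag=direct

# streaming write performance with large sequential blocks
dd if=/dev/zero of=/mnt/test/ddfile bs=1M count=1000 oflag=direct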
Hi,
Here is a situation from which I want either automatic (by the cluster) or
manual (by the admin) recovery.
DRBD resource runs on node 1
shut down all nodes in an order that will not cause a failover of the
resources
start node 2, which was secondary prior to the shutdown.
As we
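(The manual variant might look roughly like this - a sketch only, assuming
DRBD 8.3 and a resource named r0, and only after you are sure node 2's
data is acceptable:)

# on node 2, check connection and disk state first
drbdadm cstate r0
drbdadm dstate r0

# promote it by hand; this accepts whatever data node 2 has
drbdadm primary r0

# on 8.3, if DRBD refuses because the data is marked outdated:
drbdadm -- --overwrite-data-of-peer primary r0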
Hi all,
I've got the following Openfiler HA-Cluster configuration here:
Primary: virtual machine (on ESXi 4):
2x 2GHz Intel Xeon
1GB RAM
Secondary: physical machine:
Intel Atom 330, 1.6GHz
1GB RAM
Both have two GBit interfaces, one for replication and the other for
direct access.
Now I created
Hello,
I've been reading about the barriers (no-disk-barrier option) in drbd.
I understand that when the primary gets an I/O completion notification,
it will issue a barrier request (actually start a new epoch) to the
secondary.
However, if the disk of the primary has a write cache, it will
immediat
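(For reference, the relevant knobs in DRBD 8.3 syntax - a fragment, not a
complete config, and only safe if the controller has a battery-backed
write cache:)

resource r0 {
  disk {
    no-disk-barrier;   # don't use barrier requests to separate epochs
    no-disk-flushes;   # don't flush the drive's volatile write cache
  }
}

(Alternatively, disabling the drive's volatile write cache, e.g. with
"hdparm -W0" on the backing device, sidesteps the problem at some
performance cost.)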
On 24/09/10 11:56, Michael wrote:
try different kernel
I tried:
2.6.26 stable
2.6.32-bpo5 from backports
2.6.32 testing
Same thing.
I tried OCFS2 1.4.1 & 1.4.4-3 - same thing.
So I think it is DRBD.
--
Best regards,
Proskurin Kirill
try different kernel
On Fri, Sep 24, 2010 at 7:54 PM, Proskurin Kirill wrote:
> On 24/09/10 07:11, Michael wrote:
>
>> Hi,
>> from your log it looks like it is an OCFS2 problem:
>> >/sys/fs/o2cb/interface_revision
>>
>
> I wrote to the OCFS2 list and sent them a stack trace - they say OCFS2 does not
On 24/09/10 07:11, Michael wrote:
Hi,
from your log it looks like it is an OCFS2 problem:
>/sys/fs/o2cb/interface_revision
I wrote to the OCFS2 list and sent them a stack trace - they say OCFS2
does not use netlink, but it is in the stack trace - so they think it is
not an OCFS2 problem.
or could
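(For whoever hits the same thing, the two checks mentioned in this thread -
a sketch; sysrq must be enabled for the second one:)

# confirm the o2cb interface revision the cluster stack exports
cat /sys/fs/o2cb/interface_revision

# dump kernel stack traces of all tasks into dmesg for the next report
echo t > /proc/sysrq-trigger
dmesg | tail -n 100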