Sounds great! Hopefully it will handle my 7 nodes correctly
Best regards, Frank
> On 16.08.2018, at 12:07, Roland Kammerer wrote:
>
> On Wed, Aug 15, 2018 at 10:22:03AM +0200, Frank Rust wrote:
>> Hi all,
>> since drbdmanage will reach its end-of-life at the end of thi
Hi all,
since drbdmanage will reach its end-of-life at the end of this year, it is time
to think about migration to linstor.
I have a small cluster with 7 nodes, three of them with disks of about 30TB
each.
The storage is about 70% filled.
Since starting linstor from scratch is not an option,
> On 02.11.2017, at 15:00, Yannis Milios wrote:
>
>>> (Now I am solving this by migrating the VM to node1, unassigning the vm-image
>>> from node3, assigning the vm-image to node3, migrating the VM to node3, and
>>> unassigning the vm-image from node1, which is awful, error-prone and somewhat
>>> a waste of time)
>
> Why you are doi
Hi all,
I have a question concerning deployment.
Let’s assume I have a cluster of three drbd9 nodes, all running virtualisation
software (e.g. Proxmox):
node1 has 30TB of free storage
node2 has 20TB of free storage
node3 has 10TB of free storage.
When I create a VM residing on node3 with 80GB
Thanks, that’s probably the cleanest solution!
Regards, Frank
> On 28.09.2017, at 12:11, Roland Kammerer wrote:
>
> On Thu, Sep 28, 2017 at 11:51:28AM +0200, Robert Altnoeder wrote:
>> On 09/28/2017 11:10 AM, Frank Rust wrote:
>>> Hi Roland,
>>> I am not a perl
;/usr/bin/drbdmanage', 'net-options', '--resource', $name,
'--allow-two-primaries=yes'], "Could not set 'allow-two-primaries'");
BTW: I am sure that I never added a VM image that was not a multiple of 1GB in
size, but other storage i
he following command to be executed:
Exec cmd( $VAR1 = [
'/usr/bin/drbdmanage',
'new-volume',
'vm-1018-disk-1',
'10.015625'
];
)
I think you should rethink the size calculation to make it more precise than the
current
> $size = ($size/1024/1024);
a
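For what it’s worth, a minimal sketch of one way to make that calculation more
precise, assuming $size arrives in KiB (as the /1024/1024 division suggests);
rounding up to whole MiB is an assumed target, not what the plugin currently does:

use POSIX qw(ceil);

# assumed: $size is the requested volume size in KiB
# round up to whole MiB so the volume is never created smaller than requested,
# instead of passing a fractional GiB value such as '10.015625'
my $size_mib = ceil($size / 1024);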
Hi all,
I’m not sure if this is the correct mailing list to make a suggestion for a
new storage plugin (or an extension to the existing LVM plugin).
I have nodes with 12 disks each, which make up the drbdpool volume group. This
could give me the opportunity to create striped LVs to get a higher o
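For illustration, striped allocation done by hand on such a volume group would
look roughly like this; the stripe count, stripe size, LV size and name are
made-up example values, not part of the suggestion:

# stripe a logical volume across the 12 PVs of the drbdpool VG
# (64 KiB stripe size, 100 GiB, example name vm-100-disk-1)
lvcreate --stripes 12 --stripesize 64 --size 100G --name vm-100-disk-1 drbdpool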
er( Secondary -> Unknown )
> On 14.08.2017, at 16:12, Roland Kammerer wrote:
>
> On Mon, Aug 14, 2017 at 02:41:56PM +0200, Frank Rust wrote:
>> Hi all,
>> after upgrading DRBD-Dkms to 9.0.8+linbit-1 (on kernel 4.4.67-1-pve) I get
Hi all,
after upgrading DRBD-Dkms to 9.0.8+linbit-1 (on kernel 4.4.67-1-pve) I get
these errors:
Software is: root@virt5:~# dpkg -l | grep drbd
ii  drbd-dkms   9.0.8+linbit-1  all  RAID 1 over TCP/IP for Linux module source
ii  drbd-utils
Hi folks,
I’m trying to get drbd9 working on my servers. My configuration is as follows:
I have 5 nodes, 2 of them are primary fileservers (fs1 and fs2),
3 are virtualisation hosts running Proxmox (virt1…virt3).
All of them have network cards for the drbd connections:
10.10.10.33/26
The virtualisation hos
I think I found my problem:
The primary interfaces are firewalled against each other and sometimes not even
in the same networks. So probably the low-level syncing on the storage network
works but the D-Bus and management functions fail.
I added some more hosts and this seems to be the reason. So
> On 04.04.2017, at 14:12, Igor Cicimov wrote:
>
> Or simply use different host names for the other network like:
>
> 192.168.1.1 fs1
> 192.168.1.2 fs2
> 192.168.1.3 fs3
> 10.10.10.1 sfs1
> 10.10.10.2 sfs2
> 10.10.10.3 sfs3
>
> and set the cluster using those names:
>
> drbdmanage ini
Hi folks,
I am wondering if it would be possible to create a drbdmanage cluster where the
hostnames don’t match the IP address of the network interface to use.
In detail:
I have a three-node configuration with the IPs visible to the outside:
node1 IP: 192.168.1.1 hostname fs1
node2 IP: 192.168.
I tried to grow a resource on a drbd9-managed cluster.
I have a system with 2 control nodes and 2 satellite nodes.
On the primary control node I ran
drbdmanage resize backups 0 1024G
and after a while I saw it had synced completely. Now I have the problem:
lvs shows me on both control nodes
# lvs
Hi,
I’m testing the new drbdmanage system. I started with the first node and added a
diskless satellite. Everything worked fine.
Then I tried to add another control node (with disks) which is behind a firewall,
so the connection is tunneled via OpenVPN. It first seemed to work:
the first n