Re: [DRBD-user] linstor-proxmox-2.8

2018-07-27 Thread Yannis Milios
> Satellite and Controller are quite obvious, Combined is a node that runs
> a Satellite and may sometimes run a Controller, Auxiliary is a node that
> runs neither but is registered for other reasons, this is mostly
> reserved for future features.
>

Can these 'roles' be modified after they have been set, or are they static, i.e.
do we have to remove the node and re-add it with another role specification?
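For example, if a node was originally added with something like the following
(node name and IP are placeholders, and I'm just guessing the client syntax for
setting the role):

  linstor node create pve1 10.0.0.1 --node-type Combined

can its node type later be changed to, say, Satellite without deleting it first?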


> There is a NodeLost API and a corresponding command for it.


Is there a way for an admin to access that API and run the command, or is it
for dev-only use?
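I would expect something client-side along the lines of (just guessing the
command syntax here; 'pve1' is a placeholder node name):

  linstor node lost pve1

Is that roughly it?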

> it is expected that a system
> administrator will clean up a resource manually if automatic cleanup
> does not work,


I presume here you mean cleaning up the resource's LVs, ZVOLs, etc. Sure, that
is true as long as the node is accessible.


> ...and as soon as LINSTOR detects that the resource has been
> cleaned up properly, it will disappear from LINSTOR's database if the
> resource was marked for deletion.
>

Nice.

> There are however no plans to add any force flags like in drbdmanage to
> resource management (or similar) commands, because that frequently
> caused massive desyncs of drbdmanage's state and the real state of the
> backend storage resources, as it was frequently misused by
> administrators, who also often expected the various "force" options to
> do something completely different from what they actually did.
>

True ...


> Deleting the database will cause LINSTOR to initialize a new database.
> The database could be anywhere, depending on how LINSTOR was installed;
> where it currently is can be found by looking at the connection-url
> setting in the controller's database.cfg file.
>

In my case it's in /opt/linstor-server/database.cfg and the entry is:
jdbc:h2:/opt/linstor-server/linstordb

Are you saying that deleting /opt/linstor-server/linstordb will reset all
settings and cause LINSTOR to create a new database file?
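If so, would something like the following be enough to start clean (I'm
guessing the exact on-disk file name H2 uses, and the service name depends on
how LINSTOR was installed)?

  systemctl stop linstor-controller     # or linstor-server, depending on packaging
  mv /opt/linstor-server/linstordb.mv.db /opt/linstor-server/linstordb.mv.db.bak
  systemctl start linstor-controller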


> This is supposed to be managed by a cluster resource manager like
> pacemaker.
> Obviously, in a multi-controller HA environment, the controller database
> must be available on all nodes, and there are various possibilities to
> ensure it is
>

Thanks. I think it has already been mentioned that for Proxmox this will be in
the form of an HA VM appliance, which will be provided by LINBIT.


>
> I'll leave answering the package-related questions to our packaging
> experts.
>

Thanks again..

BR
Yannis
___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user


Re: [DRBD-user] linstor-proxmox-2.8

2018-07-27 Thread Robert Altnoeder
On 07/27/2018 02:52 PM, Yannis Milios wrote:
> Did some investigation on the linstor side and identified the following
> as a possible cause:
>
> root@pve3:~# linstor r c pve2 vm-101-disk-1
> ERROR:
> Description:
>     The default storage pool 'DfltStorPool' for resource
> 'vm-101-disk-1' for volume number '0' is not deployed on node 'pve2'.
[...]

Unless the storage pool is specified as a parameter to the create
resource command, LINSTOR will select the storage pool named
"DfltStorPool". If that storage pool does not exist on the target node,
then the resource creation fails.

I don't know whether or not the Proxmox plugin allows specifying the
name of the storage pool that should be used for creating resources on
each node. If it doesn't, then whatever storage pool should be used by
Proxmox must be named "DfltStorPool", so it will be selected automatically.
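For example, either of the following should get past that error (the first
form, with --storage-pool, is the one you already used; for the second,
<backing-vg> is a placeholder for the backing LVM volume group, and the exact
argument order of storage-pool create may differ between client versions):

  linstor r c pve2 vm-101-disk-1 --storage-pool drbdpool
  linstor storage-pool create pve2 DfltStorPool lvm <backing-vg>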

> What is 'DfltStorPool' used for? Is it OK to delete it and leave only
> 'drbdpool' as the SPD?

It is automatically selected if no other storage pool is specified.
Apart from that, it works like user-defined storage pools.

br,
Robert

___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user


Re: [DRBD-user] linstor-proxmox-2.8

2018-07-27 Thread Robert Altnoeder
On 07/27/2018 02:41 PM, Yannis Milios wrote:
> - 3 nodes in the cluster (A, B, C), all configured as 'Combined' nodes,
> nodeC acts as the controller.

Satellite and Controller are quite obvious, Combined is a node that runs
a Satellite and may sometimes run a Controller, Auxiliary is a node that
runs neither but is registered for other reasons, this is mostly
reserved for future features.

>  Let's assume that nodeA fails and will not come up any time soon, so I
> want to remove it from the cluster. To accomplish that I use
> "linstor node delete". The problem is that the node (which appears as
> OFFLINE) never gets deleted from the cluster. Obviously the controller
> is awaiting the dead node's confirmation and refuses to remove its
> entry without it. Is there any way to force-remove the dead node from
> the database?
>  The same applies when deleting a RD, R or VD from the same node. In DM
> there was a (-f) force option, which was useful in such situations.

There is a NodeLost API and a corresponding command for it. There are no
force options otherwise; instead, it is expected that a system
administrator will clean up a resource manually if automatic cleanup
does not work, and as soon as LINSTOR detects that the resource has been
cleaned up properly, it will disappear from LINSTOR's database if the
resource was marked for deletion.

In the current version, there are still a few situations where this does
not work, e.g. if an entire storage pool is lost (because if the entire
storage pool does not work, LINSTOR cannot process resource deletion on
it). Commands for declaring storage as lost will be added, as well as
correct handling of certain situations like a non-existent volume group.
There are, however, no plans to add any force flags like in drbdmanage to
resource management (or similar) commands, because that frequently
caused massive desyncs of drbdmanage's state and the real state of the
backend storage resources, as it was frequently misused by
administrators, who also often expected the various "force" options to
do something completely different from what they actually did.

> - Is there any option to wipe all cluster information, similar to
> "drbdmanage uninit", in order to start from scratch? Purging all
> linstor packages does not seem to reset this information.

Deleting the database will cause LINSTOR to initialize a new database.
The database could be anywhere, depending on how LINSTOR was installed;
where it currently is can be found by looking at the connection-url
setting in the controller's database.cfg file.

> - If nodeC (the controller) dies, then logically one must decide which of
> the surviving nodes will replace it; let's say nodeB is selected as the
> controller node. After starting the linstor-controller service on nodeB
> and running "linstor n l", there are no cluster nodes in the list. Does
> this mean we have to re-create the cluster from scratch (I guess not),
> or is there a way to import the config from the dead nodeC?

This is supposed to be managed by a cluster resource manager like pacemaker.
Obviously, in a multi-controller HA environment, the controller database
must be available on all nodes, and there are various possibilities to
ensure it is:
- Connect the LINSTOR controller to a centralized database cluster
reachable by all potential controllers
- Put the LINSTOR integrated database on a replicated storage volume,
such as a DRBD volume
- Connect the LINSTOR controller to a local external database and use
database replication to keep the other potential controllers up to date
- Put the LINSTOR integrated database on an NFS server
- etc.

Automatic failover requires the usual cluster magic to make sure node
failures are detected and split brains are avoided (e.g., independent
cluster links, resource- and node-level fencing).
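For the replicated-DRBD-volume option above, a minimal Pacemaker sketch could
look like the following (crm shell syntax; resource names, device path and
mount point are placeholders, and the DRBD master/slave resource plus the
usual colocation/ordering constraints are omitted):

  primitive p_fs_linstordb ocf:heartbeat:Filesystem \
      params device=/dev/drbd/by-res/linstordb/0 directory=/var/lib/linstor fstype=ext4
  primitive p_linstor-controller systemd:linstor-controller \
      op monitor interval=30s timeout=30s
  group g_linstor p_fs_linstordb p_linstor-controller

The group ensures that the controller service only ever runs on the node that
currently has the database volume mounted.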

I'll leave answering the package-related questions to our packaging experts.

br,
Robert

___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user


Re: [DRBD-user] linstor-proxmox-2.8

2018-07-27 Thread Yannis Milios
One last thing I forgot to mention in the last post is ...

When creating a VM or CT via the PVE web GUI, it fails with the error below:

https://privatebin.net/?dd4373728501c9eb#FsTXbEfRh43WIV4q7tO5wnm0HdW0O/gJbwavrYCgkeE=

Did some investigation on the linstor side and identified the following as a
possible cause:

root@pve3:~# linstor r c pve2 vm-101-disk-1
ERROR:
Description:
The default storage pool 'DfltStorPool' for resource 'vm-101-disk-1'
for volume number '0' is not deployed on node 'pve2'.
Details:
The resource which should be deployed had at least one volume
definition (volume number '0') which LinStor tried to automatically create.
The default storage pool's name for this new volume was looked for in its
volume definition's properties, its resource's properties, its node's
properties and finally in a system wide default storage pool name defined
by the LinStor controller.
Node: pve2, Resource: vm-101-disk-1

If I specify the '--storage-pool drbdpool' option on 'linstor r c pve2
vm-101-disk-1', then the resource is assigned properly to the cluster node.

Could this be the reason PVE fails as well?

What is 'DfltStorPool' used for? Is it OK to delete it and leave only
'drbdpool' as the SPD?

Thanks
Y
___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user


Re: [DRBD-user] linstor-proxmox-2.8

2018-07-27 Thread Yannis Milios
Thanks for the explanation, this was helpful. Currently testing in a 'lab'
environment.

I've got some questions; most are related to linstor itself and not
linstor-proxmox specifically. Hopefully this is the correct thread to expand on
these questions...

- What's the difference between installing the linstor-server package only
(which includes linstor-controller and linstor-satellite) and installing
linstor-controller and linstor-satellite separately?
In the Linstor documentation, it is mentioned that the linstor-server package
should be installed on all nodes. However, in your blog post you mention
linstor-controller, linstor-satellite and linstor-client.
Then later, you mention 'systemctl start linstor-server', which does not
exist if you don't install the linstor-server package. If you try to install
controller, satellite and server at the same time, the installation fails
with an error creating the controller and satellite systemd units. Which of
the above is the correct approach?
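In other words, is the intended setup on a Combined node something like this
(just my guess at the package/unit split; the unit names are assumptions)?

  apt install linstor-controller linstor-satellite linstor-client
  systemctl enable --now linstor-satellite
  systemctl enable --now linstor-controller   # only on the node acting as controller

Or should linstor-server be installed and started instead?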

- 3 nodes in the cluster (A, B, C), all configured as 'Combined' nodes, nodeC
acts as the controller.
 Let's assume that nodeA fails and will not come up any time soon, so I want
to remove it from the cluster. To accomplish that I use "linstor node
delete". The problem is that the node (which appears as OFFLINE) never gets
deleted from the cluster. Obviously the controller is awaiting the dead
node's confirmation and refuses to remove its entry without it. Is there any
way to force-remove the dead node from the database?
 Same applies when deleting a RD, R or VD from the same node. In DM there was
a (-f) force option, which was useful in such situations.

- Is there any option to wipe all cluster information, similar to
"drbdmanage uninit", in order to start from scratch? Purging all
linstor packages does not seem to reset this information.

- If nodeC (the controller) dies, then logically one must decide which of the
surviving nodes will replace it; let's say nodeB is selected as the controller
node. After starting the linstor-controller service on nodeB and running
"linstor n l", there are no cluster nodes in the list. Does this mean we have
to re-create the cluster from scratch (I guess not), or is there a way to
import the config from the dead nodeC?

thanks in advance,
Yannis

> Short answer: somehow, if you really know what you are doing. No, don't
> do that.
>
> because:
> - you cannot use both plugins at the same time. Both claim the "drbd"
>   name. Long story, it has to be like this. Hardcoded "drbd" in
>   Plugin.pm, which is out of our control.
> - DM/LS would not overwrite each other's res files, but depending on your
>   configuration/default ports/minors, the results (one res file from DM,
>   one unrelated one from LINSTOR) might conflict because of port/minor
>   collisions.
>
> So if you want to test the LINSTOR stuff/plugin, do it in a "lab".
>
> Migration will be possible, also "soon" (testing the plugin and linstor
> makes this soon sooner ;-) ). Roughly it will be a DM export of the DB +
> a linstor (client) command that reads that json dump and generates
> linstor commands to add these resources to the LINSTOR DB (with the
> existing ports/minors, ...). LINSTOR is then clever enough not to create
> new meta-data; it will see that these resources are up and fine. This
> will be a documented procedure describing which steps you do in what order.
>
> Regards, rck
> ___
> drbd-user mailing list
> drbd-user@lists.linbit.com
> http://lists.linbit.com/mailman/listinfo/drbd-user
>
___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user


Re: [DRBD-user] drbd+lvm no bueno

2018-07-27 Thread Eric Robinson
> > Lars,
> >
> > I put MySQL databases on the drbd volume. To back them up, I pause
> > them and do LVM snapshots (then rsync the snapshots to an archive
> > server). How could I do that with LVM below drbd, since what I want is
> > a snapshot of the filesystem where MySQL lives?
> 
> You just snapshot below DRBD, after "quiescing" the mysql db.
> 
> DRBD is transparent; the "garbage" (to the filesystem) of the "trailing drbd
> meta data" is of no concern.
> You may have to "mount -t ext4" (or xfs or whatever) if your mount and
> libblkid decide that this is a "drbd" type device and cannot be mounted. They
> are just trying to help, really, which is good, but in that case they get it wrong.

Okay, just so I understand:

Suppose I turn md4 into a PV and create one volume group, 'vg_under_drbd0', and
a logical volume, 'lv_under_drbd0', that takes 95% of the space, leaving 5% for
snapshots.

Then I create my ext4 filesystem directly on drbd0.

At backup time, I quiesce the MySQL instances and create a snapshot of the LV
under drbd0.

I can then mount that snapshot as a filesystem?
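In other words, would a backup pass roughly like this be correct (using the
hypothetical names above; host names, mount points and the locking method are
placeholders)?

  # in an open mysql session: FLUSH TABLES WITH READ LOCK;  (held while snapshotting)
  lvcreate -s -n snap_under_drbd0 -L 5G /dev/vg_under_drbd0/lv_under_drbd0
  # back in the mysql session: UNLOCK TABLES;
  # blkid may identify the snapshot as 'drbd', so force the fs type:
  mount -t ext4 /dev/vg_under_drbd0/snap_under_drbd0 /mnt/snap
  rsync -a /mnt/snap/ archiveserver:/backups/db/
  umount /mnt/snap && lvremove -y /dev/vg_under_drbd0/snap_under_drbd0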
 
> 
> > How severely does putting LVM on top of drbd affect performance?
> 
> It's not the "putting LVM on top of drbd" part;
> it's what most people think of when doing that:
> use a huge single DRBD as PV, and put loads of unrelated LVs inside of that.
> 
> Which then all share the single DRBD "activity log" of the single DRBD volume,
> which then becomes a bottleneck for IOPS.
> 

I currently have one big drbd disk with one volume group over it and one 
logical volume that takes up 95% of the space, leaving 5% of the volume group 
for snapshots. I run multiple instances of MySQL out of different directories. 
I don't see a way to avoid the activity log bottleneck problem.


___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user


Re: [DRBD-user] linstor-proxmox-2.8

2018-07-27 Thread Roland Kammerer
On Fri, Jul 27, 2018 at 09:20:29AM +0100, Yannis Milios wrote:
> Quick question: can we use Linstor side by side with DM, without affecting
> one another?
> This may be good for testing or perhaps for migrating DM resources to
> Linstor in the future ?

Short answer: somehow, if you really know what you are doing. No, don't
do that.

because:
- you cannot use both plugins at the same time. Both claim the "drbd"
  name. Long story, it has to be like this. Hardcoded "drbd" in
  Plugin.pm, which is out of our control.
- DM/LS would not overwrite each other's res files, but depending on your
  configuration/default ports/minors, the results (one res file from DM,
  one unrelated one from LINSTOR) might conflict because of port/minor
  collisions.

So if you want to test the LINSTOR stuff/plugin, do it in a "lab".

Migration will be possible, also "soon" (testing the plugin and linstor
makes this soon sooner ;-) ). Roughly it will be a DM export of the DB +
a linstor (client) command that reads that json dump and generates
linstor commands to add these resources to the LINSTOR DB (with the
existing ports/minors, ...). LINSTOR is then clever enough not to create
new meta-data; it will see that these resources are up and fine. This
will be a documented procedure describing which steps you do in what order.

Regards, rck
___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user


Re: [DRBD-user] linstor-proxmox-2.8

2018-07-27 Thread Yannis Milios
Quick question: can we use Linstor side by side with DM, without affecting
one another?
This may be good for testing, or perhaps for migrating DM resources to
Linstor in the future?
___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user


Re: [DRBD-user] linstor-proxmox-2.8

2018-07-27 Thread Roland Kammerer
On Thu, Jul 26, 2018 at 05:12:25PM +0200, Julien Escario wrote:
> Le 26/07/2018 à 15:28, Roland Kammerer a écrit :
> > The missing feature is proper size reporting. Reporting meaningful values
> > for thinly allocated storage is a TODO in LINSTOR itself, so currently the
> > plugin reports 8 out of 10TB as free storage. Always. Besides that it
> > should be complete.
> 
> Huh? Not certain I understand: 8 out of 10 (aka 80%?) or ... why 10TB?

These are really "random numbers". Just something that is "large enough"
so that people can test the rest without getting "no space left"
messages from Proxmox. (Even though I did not test it, I assume Proxmox
would not let you do that; otherwise this whole size-reporting thing
would not make much sense.) And then it was like, "hm, which fantasy
numbers do I choose? Okay, report 8 out of 10TB free; that should be large
enough for people testing on really big storage." So obviously don't put
more data on your pool than you actually have; currently there is no safety
belt. AFAIK, size reporting will be fixed in LINSTOR pretty soon, and I wanted
to get the plugin out so that people can test the rest, which is IMO
more important.

Regards, rck
___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user


Re: [DRBD-user] Content of DRBD volume is invalid during sync after disk replace

2018-07-27 Thread Roland Kammerer
On Fri, Jul 27, 2018 at 10:54:51AM +1000, Igor Cicimov wrote:
> Hi,
> 
> Is this going to get backported to 8.4 as well?

Why would you assume it affects 8.4? Spoiler: it does not. And yes, if
something is a "common problem", things get merged between 8.4 and 9.

Regards, rck
___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user