-------- Original message --------
From: Yannis Milios <yannis.mil...@gmail.com>
Date: 7/27/18 7:42 AM (GMT-06:00)
To: Roland Kammerer <roland.kamme...@linbit.com>
Cc: drbd-user <drbd-user@lists.linbit.com>
Subject: Re: [DRBD-user] linstor-proxmox-2.8

Thanks for the explanation, this was helpful. Currently testing in a 'lab'
environment.
I've got some questions; most are related to linstor itself and not to
linstor-proxmox specifically, so hopefully this is the correct thread to
raise them...
- What's the difference between installing the linstor-server package only
(which includes linstor-controller and linstor-satellite) and installing
linstor-controller and linstor-satellite separately? In the Linstor
documentation it is mentioned that the linstor-server package should be
installed on all nodes. However, in your blog post you mention
linstor-controller, linstor-satellite and linstor-client. Then later, you
mention 'systemctl start linstor-server', which does not exist if you don't
install the linstor-server package. If you try to install controller,
satellite and server at the same time, the installation fails with an error
while creating the controller and satellite systemd units. Which of the
above is the correct approach?
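For context, the per-package approach could look roughly like this — a
minimal sketch, assuming Debian-style packages and the unit names mentioned
above:

    # on every node ('Combined' nodes run both services)
    apt install linstor-satellite linstor-client
    systemctl enable --now linstor-satellite

    # additionally, on the one node acting as controller
    apt install linstor-controller
    systemctl enable --now linstor-controller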
- 3 nodes in the cluster (A, B, C), all configured as 'Combined' nodes; nodeC
acts as the controller. Let's assume that nodeA fails and will not come up
any time soon, so I want to remove it from the cluster. To accomplish that I
use "linstor node delete <NodeA>". The problem is that the node (which
appears as OFFLINE) never gets deleted from the cluster. Obviously the
controller is awaiting the dead node's confirmation and refuses to remove
its entry without it. Is there any way to force-remove the dead node from
the database? The same applies when deleting a RD, R or VD from the same
node. In DM there was a force option (-f), which was useful in such
situations.
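For what it's worth, a sketch of the difference between a graceful delete
and a forced one — the 'node lost' subcommand is an assumption here, as it
only appeared in later linstor-client releases:

    linstor node delete NodeA   # graceful: waits for the satellite to confirm
    linstor node lost NodeA     # force-drop a dead node from the database
                                # (assumption: newer linstor-client only)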

- Is there any option to wipe all cluster information, similar to
"drbdmanage uninit", in order to start from scratch? Purging all linstor
packages does not seem to reset this information.
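One conceivable manual reset — a sketch that assumes the controller keeps
all cluster state in an embedded H2 database under /var/lib/linstor (the
path is an assumption; the controller's database configuration is
authoritative):

    systemctl stop linstor-controller
    rm /var/lib/linstor/*.mv.db          # assumption: default H2 DB location
    systemctl start linstor-controller   # comes back up with an empty cluster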
- If nodeC (the controller) dies, then logically one must decide which of
the surviving nodes will replace it; let's say nodeB is selected as the
controller node. After starting the linstor-controller service on nodeB and
running "linstor n l", there are no cluster nodes in the list. Does this
mean we have to re-create the cluster from scratch (guess not), or is there
a way to import the config from the dead nodeC?
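Under the same assumption that the cluster state is simply the controller's
database, promoting nodeB might look like this sketch (it presumes a recent
copy of nodeC's database still exists somewhere; the backup location is
hypothetical):

    # on nodeB, with the linstor-controller service stopped
    rsync -a <backup-of-nodeC>:/var/lib/linstor/ /var/lib/linstor/
    systemctl start linstor-controller
    linstor n l   # the old node list should reappear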
Thanks in advance,
Yannis

Short answer: somehow, if you really know what you are doing. No, don't
do that, because:

- you cannot use both plugins at the same time. Both claim the "drbd"
  name. Long story, it has to be like this: "drbd" is hardcoded in
  Plugin.pm, which is out of our control (see the storage.cfg sketch
  after this list).
- DM/LS would not overwrite each other's res files, but depending on your
  configuration/default ports/minors, the results (one res file from DM,
  one unrelated one from LINSTOR) might conflict because of port/minor
  collisions.
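To illustrate the name clash: both plugins register the same storage type
token in Proxmox, so a storage entry like the following hypothetical
/etc/pve/storage.cfg snippet (option names vary between plugin versions and
are assumptions here) can only ever be served by one plugin at a time:

    drbd: drbdstorage
        content images,rootdir
        controller 192.168.0.10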

So if you want to test the LINSTOR stuff/plugin, do it in a "lab".

Migration will be possible, also "soon" (testing the plugin and linstor
makes this soon sooner ;-) ). Roughly, it will be a DM export of the DB plus
a linstor (client) command that reads that JSON dump and generates linstor
commands to add these resources to the LINSTOR DB (with the existing
ports/minors, ...). LINSTOR is then clever enough not to create new
meta-data; it will see that these resources are already up and fine. This
will be a documented procedure specifying which steps you do in what order.
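A sketch of how that could look — the command names here (export-ctrlvol,
dm-migrate) are taken from later drbdmanage/linstor-client releases and
should be treated as assumptions in the context of this mail:

    drbdmanage export-ctrlvol > ctrlvol.json    # dump the DM database as JSON
    linstor dm-migrate ctrlvol.json migrate.sh  # turn the dump into linstor commands
    sh migrate.sh                               # replay them against the LINSTOR DB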

Regards, rck



_______________________________________________
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user
