Re: [PVE-User] Cephfs starting 2nd MDS

2018-08-08 Thread Ronny Aasen

Your ceph.conf references mds.1 (id = 1), but your command starts the MDS
with id = scvirt03, so that block in ceph.conf is not used.

Replace [mds.1] with [mds.scvirt03].

BTW: IIRC you cannot use purely numeric IDs for MDS daemons in recent
versions, so mds.1 would not be valid either.
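
For example, using the keyring path already shown in Vadim's mail, the
section would then look roughly like this (a sketch; adjust the path if
your keyring lives elsewhere):

[mds.scvirt03]
     host = scvirt03
     keyring = /var/lib/ceph/mds/ceph-scvirt03/keyring

After changing ceph.conf, something along the lines of
"systemctl restart ceph-mds@scvirt03.service" should make the daemon pick
up that section.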



kind regards
Ronny Aasen


On 08. aug. 2018 07:54, Vadim Bulst wrote:

Hi Alwin,

thanks for your advice, but no success. Still the same error.

mds-section:

[mds.1]
     host = scvirt03
     keyring = /var/lib/ceph/mds/ceph-scvirt03/keyring

Vadim


On 07.08.2018 15:30, Alwin Antreich wrote:

Hello Vadim,

On Tue, Aug 7, 2018, 12:13 Vadim Bulst wrote:


Dear list,

I'm trying to bring up a second mds with no luck.

This is what my ceph.conf looks like:

[global]

    auth client required = cephx
    auth cluster required = cephx
    auth service required = cephx
    cluster network = 10.10.144.0/24
    filestore xattr use omap = true
    fsid = 5349724e-fa96-4fd6-8e44-8da2a39253f7
    keyring = /etc/pve/priv/$cluster.$name.keyring
    osd journal size = 5120
    osd pool default min size = 1
    public network = 172.18.144.0/24
    mon allow pool delete = true

[osd]
    keyring = /var/lib/ceph/osd/ceph-$id/keyring

[mon.2]
    host = scvirt03
    mon addr = 172.18.144.243:6789

[mon.0]
    host = scvirt01
    mon addr = 172.18.144.241:6789
[mon.1]
    host = scvirt02
    mon addr = 172.18.144.242:6789

[mds.0]
   host = scvirt02
[mds.1]
   host = scvirt03


I did the following to set up the service:

apt install ceph-mds

mkdir /var/lib/ceph/mds

mkdir /var/lib/ceph/mds/ceph-$(hostname -s)

chown -R ceph:ceph /var/lib/ceph/mds

chmod -R 0750 /var/lib/ceph/mds

ceph auth get-or-create mds.$(hostname -s) mon 'allow profile mds' mgr
'allow profile mds' osd 'allow rwx' mds 'allow' >
/var/lib/ceph/mds/ceph-$(hostname -s)/keyring

chmod -R 0600 /var/lib/ceph/mds/ceph-$(hostname -s)/keyring

systemctl enable ceph-mds@$(hostname -s).service

systemctl start ceph-mds@$(hostname -s).service


The service will not start. I also did the same procedure for the first
MDS, which is running with no problems.

1st mds:

root@scvirt02:/home/urzadmin# systemctl status -l ceph-mds@$(hostname
-s).service
● ceph-mds@scvirt02.service - Ceph metadata server daemon
  Loaded: loaded (/lib/systemd/system/ceph-mds@.service; enabled;
vendor preset: enabled)
 Drop-In: /lib/systemd/system/ceph-mds@.service.d
  └─ceph-after-pve-cluster.conf
  Active: active (running) since Thu 2018-06-07 13:08:58 CEST; 2
months 0 days ago
    Main PID: 612704 (ceph-mds)
  CGroup:
/system.slice/system-ceph\x2dmds.slice/ceph-mds@scvirt02.service
  └─612704 /usr/bin/ceph-mds -f --cluster ceph --id scvirt02
--setuser ceph --setgroup ceph

Jul 29 06:25:01 scvirt02 ceph-mds[612704]: 2018-07-29 06:25:01.792601
7f6e4bae0700 -1 received  signal: Hangup from  PID: 3831071 task name:
killall -q -1 ceph-mon ceph-mgr ceph-mds ceph-osd ceph-fuse radosgw  
UID: 0

Jul 30 06:25:02 scvirt02 ceph-mds[612704]: 2018-07-30 06:25:02.081591
7f6e4bae0700 -1 received  signal: Hangup from  PID: 184355 task name:
killall -q -1 ceph-mon ceph-mgr ceph-mds ceph-osd ceph-fuse radosgw  
UID: 0

Jul 31 06:25:01 scvirt02 ceph-mds[612704]: 2018-07-31 06:25:01.448571
7f6e4bae0700 -1 received  signal: Hangup from  PID: 731440 task name:
killall -q -1 ceph-mon ceph-mgr ceph-mds ceph-osd ceph-fuse radosgw  
UID: 0

Aug 01 06:25:01 scvirt02 ceph-mds[612704]: 2018-08-01 06:25:01.274541
7f6e4bae0700 -1 received  signal: Hangup from  PID: 1278492 task name:
killall -q -1 ceph-mon ceph-mgr ceph-mds ceph-osd ceph-fuse radosgw  
UID: 0

Aug 02 06:25:02 scvirt02 ceph-mds[612704]: 2018-08-02 06:25:02.009054
7f6e4bae0700 -1 received  signal: Hangup from  PID: 1825500 task name:
killall -q -1 ceph-mon ceph-mgr ceph-mds ceph-osd ceph-fuse radosgw  
UID: 0

Aug 03 06:25:02 scvirt02 ceph-mds[612704]: 2018-08-03 06:25:02.042845
7f6e4bae0700 -1 received  signal: Hangup from  PID: 2372815 task name:
killall -q -1 ceph-mon ceph-mgr ceph-mds ceph-osd ceph-fuse radosgw  
UID: 0

Aug 04 06:25:01 scvirt02 ceph-mds[612704]: 2018-08-04 06:25:01.404619
7f6e4bae0700 -1 received  signal: Hangup from  PID: 2919837 task name:
killall -q -1 ceph-mon ceph-mgr ceph-mds ceph-osd ceph-fuse radosgw  
UID: 0

Aug 05 06:25:01 scvirt02 ceph-mds[612704]: 2018-08-05 06:25:01.214749
7f6e4bae0700 -1 received  signal: Hangup from  PID: 3467000 task name:
killall -q -1 ceph-mon ceph-mgr ceph-mds ceph-osd ceph-fuse radosgw  
UID: 0

Aug 06 06:25:01 scvirt02 ceph-mds[612704]: 2018-08-06 06:25:01.149512
7f6e4bae0700 -1 received  signal: Hangup from  PID: 4014197 task name:
killall -q -1 ceph-mon ceph-mgr ceph-mds ceph-osd ceph-fuse radosgw  
UID: 0

Aug 07 06:25:01 scvirt02 ceph-mds[612704]: 2018-08-07 06:25:01.863104
7f6e4bae0700 -1 received  signal: Ha

Re: [PVE-User] Cephfs starting 2nd MDS

2018-08-08 Thread Alwin Antreich
Hi,

On Wed, Aug 08, 2018 at 07:54:45AM +0200, Vadim Bulst wrote:
> Hi Alwin,
> 
> thanks for your advice, but no success. Still the same error.
> 
> mds-section:
> 
> [mds.1]
>     host = scvirt03
>     keyring = /var/lib/ceph/mds/ceph-scvirt03/keyring
[mds]
keyring = /var/lib/ceph/mds/ceph-$id/keyring

That way it will work for every MDS that you set up. Besides that, no extra
options should be needed for the MDS to start.

Nothing more than the lines below should be needed to get the MDS started.

mkdir -p /var/lib/ceph/mds/ceph-$SERVER
chown -R ceph:ceph /var/lib/ceph/mds/ceph-$SERVER
ceph --cluster ceph --name client.bootstrap-mds \
--keyring /var/lib/ceph/bootstrap-mds/ceph.keyring auth \
get-or-create mds.$SERVER osd 'allow rwx' mds 'allow' mon 'allow profile mds' \
-o /var/lib/ceph/mds/ceph-$SERVER/keyring

If it's not working, what's the output of 'systemctl status
ceph-mds@'?
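
If the status output alone does not explain it, the journal and the MDS map
usually narrow it down; for example (standard systemd and Ceph commands,
using the host name from this thread):

journalctl -xeu ceph-mds@scvirt03.service
ceph mds stat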

--
Cheers,
Alwin

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Cephfs starting 2nd MDS

2018-08-08 Thread Vadim Bulst

Thanks guys - great help! All up and running :-)


On 08.08.2018 09:22, Alwin Antreich wrote:

Hi,

On Wed, Aug 08, 2018 at 07:54:45AM +0200, Vadim Bulst wrote:

Hi Alwin,

thanks for your advice, but no success. Still the same error.

mds-section:

[mds.1]
     host = scvirt03
     keyring = /var/lib/ceph/mds/ceph-scvirt03/keyring

[mds]
 keyring = /var/lib/ceph/mds/ceph-$id/keyring

That way it will work for every MDS that you set up. Besides that, no extra
options should be needed for the MDS to start.

Nothing more than the lines below should be needed to get the MDS started.

mkdir -p /var/lib/ceph/mds/ceph-$SERVER
chown -R ceph:ceph /var/lib/ceph/mds/ceph-$SERVER
ceph --cluster ceph --name client.bootstrap-mds \
--keyring /var/lib/ceph/bootstrap-mds/ceph.keyring auth \
get-or-create mds.$SERVER osd 'allow rwx' mds 'allow' mon 'allow profile mds' \
-o /var/lib/ceph/mds/ceph-$SERVER/keyring

If it's not working, what's the output of 'systemctl status
ceph-mds@'?

--
Cheers,
Alwin

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


--
Vadim Bulst

Universität Leipzig / URZ
04109  Leipzig, Augustusplatz 10

phone: ++49-341-97-33380
mail:vadim.bu...@uni-leipzig.de


___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] How to use lvm on zfs ?

2018-08-08 Thread Denis Morejon



On 07/08/18 at 17:51, Yannis Milios wrote:

  (zfs create -V 100G rpool/lvm) and make that a PV (pvcreate

/dev/zvol/rpool/lvm) and make a VG (vgcreate pve /dev/zvol/rpool/lvm)
and then a LV (lvcreate -L100% pve/data)



Try the above as it was suggested to you ...



But I suspect I have no space to create an

additional zfs volume since the one mounted on "/" occupied all the space



No, that's a wrong assumption: ZFS does not pre-allocate the whole space of
the pool, even if it looks like it does. In short, there is no need to
"shrink" the pool in order to create a zvol, as was suggested above...
Still, the whole idea of having LVM on top of ZFS/zvol is a mess, but if you
insist, it's up to you ...
A combination of Linux RAID + LVM would look much more elegant in your
case, but for that you have to reinstall PVE using the Debian ISO.
During the installation, create a Linux RAID array with LVM on top, and then
add the PVE repos as described in the wiki:

https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Jessie
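
(Spelled out, the zvol-based sequence quoted at the top would be roughly the
following; a sketch using the names from the quote, with the usual
-l 100%FREE form for lvcreate instead of -L100%:)

zfs create -V 100G rpool/lvm
pvcreate /dev/zvol/rpool/lvm
vgcreate pve /dev/zvol/rpool/lvm
lvcreate -l 100%FREE -n data pve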

That's right. Now I understand that LVM on ZFS would be a mess, mainly
because ZFS doesn't create block devices such as partitions on which I could
do pvcreate ... and make it part of an LVM volume group.

After a (zfs create -V 100G rpool/lvm) I would have to do a losetup to create
a loop device and so on...

Instead, I will keep the ZFS RAID mounted on "/" (local storage) on the last
4 Proxmox nodes, remove the local-lvm storage from all nodes, and resize the
local storage of the first 4 nodes, so that all 8 Proxmox nodes have just
local storage, making migration of VMs between nodes easy.




___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] How to use lvm on zfs ?

2018-08-08 Thread Denis Morejon
Why has the Proxmox team not incorporated software RAID in the install
process? That way we could get the redundancy and LVM advantages when using
local disks.





On 08/08/18 at 09:23, Denis Morejon wrote:



On 07/08/18 at 17:51, Yannis Milios wrote:

  (zfs create -V 100G rpool/lvm) and make that a PV (pvcreate

/dev/zvol/rpool/lvm) and make a VG (vgcreate pve /dev/zvol/rpool/lvm)
and then a LV (lvcreate -L100% pve/data)



Try the above as it was suggested to you ...



But I suspect I have no space to create an
additional zfs volume since the one mounted on "/" occupied all 
the space


No, that's a wrong assumption: ZFS does not pre-allocate the whole space of
the pool, even if it looks like it does. In short, there is no need to
"shrink" the pool in order to create a zvol, as was suggested above...
Still, the whole idea of having LVM on top of ZFS/zvol is a mess, but if you
insist, it's up to you ...
A combination of Linux RAID + LVM would look much more elegant in your
case, but for that you have to reinstall PVE using the Debian ISO.
During the installation, create a Linux RAID array with LVM on top, and then
add the PVE repos as described in the wiki:

https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Jessie
That's right. Now I understand that LVM on ZFS would be a mess, mainly
because ZFS doesn't create block devices such as partitions on which I could
do pvcreate ... and make it part of an LVM volume group.

After a (zfs create -V 100G rpool/lvm) I would have to do a losetup to create
a loop device and so on...


Instead, I will keep the ZFS RAID mounted on "/" (local storage) on the last
4 Proxmox nodes, remove the local-lvm storage from all nodes, and resize the
local storage of the first 4 nodes, so that all 8 Proxmox nodes have just
local storage, making migration of VMs between nodes easy.




___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user




___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] How to use lvm on zfs ?

2018-08-08 Thread Andreas Heinlein
On 08.08.2018 at 15:32, Denis Morejon wrote:
> Why has the Proxmox team not incorporated software RAID in the
> install process? That way we could get the redundancy and LVM
> advantages when using local disks.
Because ZFS offers redundancy and LVM features (and much more) in a more
modern way, e.g. during a rebuild only used blocks need to be
resilvered, resulting in much greater speed. ZFS is intended to entirely
replace MD-RAID and LVM.

The only drawback of ZFS is that it needs bare-metal disk access and must
not (or at least should not) be used with hardware RAID controllers.
This makes it difficult to use with older hardware, e.g. HP ProLiants,
which only have HP SmartArray controllers as disk controllers. It is
possible to put some RAID controllers into HBA mode, though the ZFS docs
advise against it.
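
As a small illustration of the ZFS side of this: redundancy on two dedicated
disks is a one-liner, and zpool status shows resilver progress when a disk
is replaced (the device names below are placeholders):

zpool create tank mirror /dev/sdb /dev/sdc
zpool status tank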

Andreas
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] How to use lvm on zfs ?

2018-08-08 Thread dorsy

I'd say that it is more convenient to support one method.
As also mentioned in this thread, ZFS can be considered a successor of
MD-RAID + LVM.


It is still a Debian system with a custom kernel and some PVE packages on
top, so you can do anything just like on any standard Debian system.


On 8/8/18 3:32 PM, Denis Morejon wrote:
Why has the Proxmox team not incorporated software RAID in the install
process? That way we could get the redundancy and LVM advantages when using
local disks.





On 08/08/18 at 09:23, Denis Morejon wrote:



On 07/08/18 at 17:51, Yannis Milios wrote:

  (zfs create -V 100G rpool/lvm) and make that a PV (pvcreate
/dev/zvol/rpool/lvm) and make a VG (vgcreate pve 
/dev/zvol/rpool/lvm)

and then a LV (lvcreate -L100% pve/data)



Try the above as it was suggested to you ...



But I suspect I have no space to create an
additional zfs volume since the one mounted on "/" occupied all 
the space


No, that's a wrong assumption: ZFS does not pre-allocate the whole space of
the pool, even if it looks like it does. In short, there is no need to
"shrink" the pool in order to create a zvol, as was suggested above...
Still, the whole idea of having LVM on top of ZFS/zvol is a mess, but if you
insist, it's up to you ...
A combination of Linux RAID + LVM would look much more elegant in your
case, but for that you have to reinstall PVE using the Debian ISO.
During the installation, create a Linux RAID array with LVM on top, and then
add the PVE repos as described in the wiki:

https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Jessie
That's right. Now I understand that LVM on ZFS would be a mess, mainly
because ZFS doesn't create block devices such as partitions on which I could
do pvcreate ... and make it part of an LVM volume group.

After a (zfs create -V 100G rpool/lvm) I would have to do a losetup to create
a loop device and so on...


Instead, I will keep the ZFS RAID mounted on "/" (local storage) on the last
4 Proxmox nodes, remove the local-lvm storage from all nodes, and resize the
local storage of the first 4 nodes, so that all 8 Proxmox nodes have just
local storage, making migration of VMs between nodes easy.




___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user




___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] How to use lvm on zfs ?

2018-08-08 Thread Dietmar Maurer
> Why has the Proxmox team not incorporated software RAID in the
> install process?

Because we consider mdraid unreliable and dangerous.

> That way we could get the redundancy and LVM advantages
> when using local disks.

Sorry, but we do have software RAID included - ZFS provides that.
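
For example, the PVE installer can lay out rpool as ZFS RAID1/RAID10/RAIDZ
directly, and a single-disk pool can even be turned into a mirror later with
zpool attach (a sketch; the partition names depend on your installation):

zpool attach rpool /dev/sda3 /dev/sdb3
zpool status rpool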

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user