Re: [ceph-users] CephFS quota

2016-08-14 Thread Willi Fehler

Hello guys,

I found this in the documentation.

1. Quotas are not yet implemented in the kernel client. Quotas are
   supported by the userspace client (libcephfs, ceph-fuse) but are not
   yet implemented in the Linux kernel client.

I missed this. Sorry.

Regards - Willi
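
A quick sketch for anyone hitting the same thing (the byte value and path
are only examples): the quota attribute can be set and read back with
setfattr/getfattr, but as the documentation quoted above says, it is only
enforced by the userspace clients (ceph-fuse/libcephfs), not by the kernel
mount.

setfattr -n ceph.quota.max_bytes -v 10000000000 /mnt/cephfs/quota   # limit the subtree to ~10 GB (example value)
getfattr -n ceph.quota.max_bytes /mnt/cephfs/quota                  # read the attribute back to verify it was stored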


On 14.08.16 at 08:54, Willi Fehler wrote:

Hello guys,

My cluster is running on the latest Ceph version. My cluster and my
client are running on CentOS 7.2.


ceph version 10.2.2 (45107e21c568dd033c2f0a3107dec8f0b0e58374)

My client is using the CephFS kernel client; I'm not using FUSE. My fstab:

linsrv001,linsrv002,linsrv003:/ /mnt/cephfs ceph 
noatime,dirstat,_netdev,name=cephfs,secretfile=/etc/ceph/cephfs.secret 
0 0


Regards - Willi


On 13.08.16 at 13:58, Goncalo Borges wrote:

Hi Willi
If you are using ceph-fuse, to enable quotas you need to pass the
"--client-quota" option in the mount operation.

Cheers
Goncalo


From: ceph-users [ceph-users-boun...@lists.ceph.com] on behalf of 
Willi Fehler [willi.feh...@t-online.de]

Sent: 13 August 2016 17:23
To: ceph-users
Subject: [ceph-users] CephFS quota

Hello,

I'm trying to use CephFS quotas. On my client I've created a
subdirectory in my CephFS mountpoint and used the following command from
the documentation.

setfattr -n ceph.quota.max_bytes -v 1 /mnt/cephfs/quota

But if I create files bigger than my quota, nothing happens. Do I need a
mount option to use quotas?

Regards - Willi

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] CephFS quota

2016-08-14 Thread Willi Fehler

Hello guys,

My cluster is running on the latest Ceph version. My cluster and my
client are running on CentOS 7.2.


ceph version 10.2.2 (45107e21c568dd033c2f0a3107dec8f0b0e58374)

My client is using the CephFS kernel client; I'm not using FUSE. My fstab:

linsrv001,linsrv002,linsrv003:/ /mnt/cephfs ceph 
noatime,dirstat,_netdev,name=cephfs,secretfile=/etc/ceph/cephfs.secret 0 0


Regards - Willi


On 13.08.16 at 13:58, Goncalo Borges wrote:

Hi Willi
If you are using ceph-fuse, to enable quotas you need to pass the "--client-quota"
option in the mount operation.
Cheers
Goncalo


From: ceph-users [ceph-users-boun...@lists.ceph.com] on behalf of Willi Fehler 
[willi.feh...@t-online.de]
Sent: 13 August 2016 17:23
To: ceph-users
Subject: [ceph-users] CephFS quota

Hello,

I'm trying to use CephFS quotas. On my client I've created a
subdirectory in my CephFS mountpoint and used the following command from
the documentation.

setfattr -n ceph.quota.max_bytes -v 1 /mnt/cephfs/quota

But if I create files bigger than my quota, nothing happens. Do I need a
mount option to use quotas?

Regards - Willi

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] CephFS quota

2016-08-13 Thread Willi Fehler

Hello,

I'm trying to use CephFS quotas. On my client I've created a
subdirectory in my CephFS mountpoint and used the following command from
the documentation.


setfattr -n ceph.quota.max_bytes -v 1 /mnt/cephfs/quota

But if I create files bigger than my quota, nothing happens. Do I need a
mount option to use quotas?


Regards - Willi

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] How many nodes/OSD can fail

2016-07-03 Thread Willi Fehler

Hello Sean,

great. Thank you for your feedback.
Have a nice Sunday.

Regards - Willi

On 03.07.16 at 10:00, Sean Redmond wrote:


Hi,

You will need 2 mons to be online.
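
With three monitors, quorum needs a majority, i.e. at least 2 of 3. When
enough of them are reachable you can check this with, for example:

ceph quorum_status --format json-pretty   # shows which monitors currently form quorum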

Thanks

On 3 Jul 2016 8:58 a.m., "Willi Fehler" <willi.feh...@t-online.de> wrote:


Hello Tu,

yes that's correct. The mon nodes run as well on the OSD nodes. So
I have

3 nodes in total. OSD, MDS and Mon on each Node.

Regards - Willi

On 03.07.16 at 09:56, Tu Holmes wrote:


Where are your mon nodes?

Were you mixing mon and OSD together?

Are 2 of the mon nodes down as well?

On Jul 3, 2016 12:53 AM, "Willi Fehler" <willi.feh...@t-online.de> wrote:

Hello Sean,

I've powered down 2 nodes, so 6 of 9 OSDs are down. But my
client can't read or write any more from my Ceph mount. Also
'ceph -s' hangs.

pool 1 'cephfs_data' replicated size 3 min_size 1
crush_ruleset 0 object_hash rjenkins pg_num 300 pgp_num 300
last_change 447 flags hashpspool crash_replay_interval 45
stripe_width 0
pool 2 'cephfs_metadata' replicated size 3 min_size 1
crush_ruleset 0 object_hash rjenkins pg_num 300 pgp_num 300
last_change 445 flags hashpspool stripe_width 0

2016-07-03 09:49:40.695953 7f3da56f9700  0 -- 192.168.0.5:0/2773396901 >> 192.168.0.7:6789/0 pipe(0x7f3da0001f50 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x7f3daf20).fault
2016-07-03 09:49:44.195029 7f3da57fa700  0 -- 192.168.0.5:0/2773396901 >> 192.168.0.6:6789/0 pipe(0x7f3da0005500 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7f3da00067c0).fault
2016-07-03 09:49:50.205788 7f3da55f8700  0 -- 192.168.0.5:0/2773396901 >> 192.168.0.6:6789/0 pipe(0x7f3da0005500 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x7f3da0004c40).fault
2016-07-03 09:49:52.720116 7f3da57fa700  0 -- 192.168.0.5:0/2773396901 >> 192.168.0.7:6789/0 pipe(0x7f3da00023f0 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7f3da00036b0).fault

Regards - Willi

On 03.07.16 at 09:36, Sean Redmond wrote:


It would need to be set to 1

On 3 Jul 2016 8:17 a.m., "Willi Fehler" <willi.feh...@t-online.de> wrote:

Hello David,

so in a 3-node cluster, how should I set min_size if I
want to allow 2 nodes to fail?

Regards - Willi

On 28.06.16 at 13:07, David wrote:

Hi,

This is probably the min_size on your cephfs data
and/or metadata pool. I believe the default is 2; if
you have fewer than 2 replicas available, I/O will stop.
See:

http://docs.ceph.com/docs/master/rados/operations/pools/#set-the-number-of-object-replicas

On Tue, Jun 28, 2016 at 10:23 AM,
willi.feh...@t-online.de <willi.feh...@t-online.de> wrote:

Hello,

I'm still very new to Ceph. I've created a small
test Cluster.

ceph-node1

osd0

osd1

osd2

ceph-node2

osd3

osd4

osd5

ceph-node3

osd6

osd7

osd8

My pool for CephFS has a replication count of 3.
I've powered off 2 nodes (6 OSDs went down) and my
cluster status became critical and my ceph
clients (cephfs) ran into a timeout. My data (I had
only one file on my pool) was still on one of the
active OSDs. Is this the expected behaviour, that
the cluster status becomes critical and my clients
run into a timeout?

Many thanks for your feedback.

Regards - Willi



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com





___
ceph-users mailing list
ceph-users@lists.ceph.com

Re: [ceph-users] How many nodes/OSD can fail

2016-07-03 Thread Willi Fehler

Hello Tu,

yes that's correct. The mon nodes run as well on the OSD nodes. So I have

3 nodes in total. OSD, MDS and Mon on each Node.

Regards - Willi

On 03.07.16 at 09:56, Tu Holmes wrote:


Where are your mon nodes?

Were you mixing mon and OSD together?

Are 2 of the mon nodes down as well?

On Jul 3, 2016 12:53 AM, "Willi Fehler" <willi.feh...@t-online.de> wrote:


Hello Sean,

I've powered down 2 nodes, so 6 of 9 OSDs are down. But my client
can't read or write any more from my Ceph mount. Also 'ceph -s' hangs.

pool 1 'cephfs_data' replicated size 3 min_size 1 crush_ruleset 0
object_hash rjenkins pg_num 300 pgp_num 300 last_change 447 flags
hashpspool crash_replay_interval 45 stripe_width 0
pool 2 'cephfs_metadata' replicated size 3 min_size 1
crush_ruleset 0 object_hash rjenkins pg_num 300 pgp_num 300
last_change 445 flags hashpspool stripe_width 0

2016-07-03 09:49:40.695953 7f3da56f9700  0 -- 192.168.0.5:0/2773396901 >> 192.168.0.7:6789/0 pipe(0x7f3da0001f50 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x7f3daf20).fault
2016-07-03 09:49:44.195029 7f3da57fa700  0 -- 192.168.0.5:0/2773396901 >> 192.168.0.6:6789/0 pipe(0x7f3da0005500 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7f3da00067c0).fault
2016-07-03 09:49:50.205788 7f3da55f8700  0 -- 192.168.0.5:0/2773396901 >> 192.168.0.6:6789/0 pipe(0x7f3da0005500 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x7f3da0004c40).fault
2016-07-03 09:49:52.720116 7f3da57fa700  0 -- 192.168.0.5:0/2773396901 >> 192.168.0.7:6789/0 pipe(0x7f3da00023f0 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7f3da00036b0).fault

Regards - Willi

On 03.07.16 at 09:36, Sean Redmond wrote:


It would need to be set to 1

On 3 Jul 2016 8:17 a.m., "Willi Fehler" <willi.feh...@t-online.de> wrote:

Hello David,

so in a 3-node cluster, how should I set min_size if I want
to allow 2 nodes to fail?

Regards - Willi

On 28.06.16 at 13:07, David wrote:

Hi,

This is probably the min_size on your cephfs data and/or
metadata pool. I believe the default is 2; if you have fewer
than 2 replicas available, I/O will stop. See:

http://docs.ceph.com/docs/master/rados/operations/pools/#set-the-number-of-object-replicas

On Tue, Jun 28, 2016 at 10:23 AM, willi.feh...@t-online.de
<willi.feh...@t-online.de> wrote:

Hello,

I'm still very new to Ceph. I've created a small test
Cluster.

ceph-node1

osd0

osd1

osd2

ceph-node2

osd3

osd4

osd5

ceph-node3

osd6

osd7

osd8

My pool for CephFS has a replication count of 3. I've
powered off 2 nodes (6 OSDs went down) and my cluster
status became critical and my ceph clients (cephfs) ran
into a timeout. My data (I had only one file on my pool)
was still on one of the active OSDs. Is this the
expected behaviour, that the cluster status becomes
critical and my clients run into a timeout?

Many thanks for your feedback.

Regards - Willi



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com





___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] How many nodes/OSD can fail

2016-07-03 Thread Willi Fehler

Hello Sean,

I've powered down 2 nodes, so 6 of 9 OSDs are down. But my client can't
read or write any more from my Ceph mount. Also 'ceph -s' hangs.


pool 1 'cephfs_data' replicated size 3 min_size 1 crush_ruleset 0 
object_hash rjenkins pg_num 300 pgp_num 300 last_change 447 flags 
hashpspool crash_replay_interval 45 stripe_width 0
pool 2 'cephfs_metadata' replicated size 3 min_size 1 crush_ruleset 0 
object_hash rjenkins pg_num 300 pgp_num 300 last_change 445 flags 
hashpspool stripe_width 0


2016-07-03 09:49:40.695953 7f3da56f9700  0 -- 192.168.0.5:0/2773396901 
>> 192.168.0.7:6789/0 pipe(0x7f3da0001f50 sd=3 :0 s=1 pgs=0 cs=0 l=1 
c=0x7f3daf20).fault
2016-07-03 09:49:44.195029 7f3da57fa700  0 -- 192.168.0.5:0/2773396901 
>> 192.168.0.6:6789/0 pipe(0x7f3da0005500 sd=4 :0 s=1 pgs=0 cs=0 l=1 
c=0x7f3da00067c0).fault
2016-07-03 09:49:50.205788 7f3da55f8700  0 -- 192.168.0.5:0/2773396901 
>> 192.168.0.6:6789/0 pipe(0x7f3da0005500 sd=3 :0 s=1 pgs=0 cs=0 l=1 
c=0x7f3da0004c40).fault
2016-07-03 09:49:52.720116 7f3da57fa700  0 -- 192.168.0.5:0/2773396901 
>> 192.168.0.7:6789/0 pipe(0x7f3da00023f0 sd=4 :0 s=1 pgs=0 cs=0 l=1 
c=0x7f3da00036b0).fault


Regards - Willi

On 03.07.16 at 09:36, Sean Redmond wrote:


It would need to be set to 1
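
For reference, this is a per-pool setting that can be changed at runtime;
a sketch using the pool names from the dump above (keep in mind that
min_size 1 lets I/O continue on a single surviving replica, at a higher
risk of data loss):

ceph osd pool set cephfs_data min_size 1
ceph osd pool set cephfs_metadata min_size 1
ceph osd dump | grep min_size   # confirm size/min_size for each pool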

On 3 Jul 2016 8:17 a.m., "Willi Fehler" <willi.feh...@t-online.de> wrote:


Hello David,

so in a 3-node cluster, how should I set min_size if I want to allow 2
nodes to fail?

Regards - Willi

On 28.06.16 at 13:07, David wrote:

Hi,

This is probably the min_size on your cephfs data and/or metadata
pool. I believe the default is 2; if you have fewer than 2
replicas available, I/O will stop. See:

http://docs.ceph.com/docs/master/rados/operations/pools/#set-the-number-of-object-replicas

On Tue, Jun 28, 2016 at 10:23 AM, willi.feh...@t-online.de
<willi.feh...@t-online.de> wrote:

Hello,

I'm still very new to Ceph. I've created a small test Cluster.

ceph-node1

osd0

osd1

osd2

ceph-node2

osd3

osd4

osd5

ceph-node3

osd6

osd7

osd8

My pool for CephFS has a replication count of 3. I've powered
off 2 nodes (6 OSDs went down) and my cluster status became
critical and my ceph clients (cephfs) ran into a timeout. My
data (I had only one file on my pool) was still on one of the
active OSDs. Is this the expected behaviour, that the cluster
status becomes critical and my clients run into a timeout?

Many thanks for your feedback.

Regards - Willi



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com





___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] How many nodes/OSD can fail

2016-07-03 Thread Willi Fehler

Hello David,

so in a 3-node cluster, how should I set min_size if I want to allow 2
nodes to fail?


Regards - Willi

On 28.06.16 at 13:07, David wrote:

Hi,

This is probably the min_size on your cephfs data and/or metadata
pool. I believe the default is 2; if you have fewer than 2 replicas
available, I/O will stop. See:
http://docs.ceph.com/docs/master/rados/operations/pools/#set-the-number-of-object-replicas


On Tue, Jun 28, 2016 at 10:23 AM, willi.feh...@t-online.de wrote:


Hello,

I'm still very new to Ceph. I've created a small test Cluster.

ceph-node1

osd0

osd1

osd2

ceph-node2

osd3

osd4

osd5

ceph-node3

osd6

osd7

osd8

My pool for CephFS has a replication count of 3. I've powered off 2
nodes (6 OSDs went down) and my cluster status became critical and
my ceph clients (cephfs) ran into a timeout. My data (I had only one
file on my pool) was still on one of the active OSDs. Is this the
expected behaviour, that the cluster status becomes critical and my
clients run into a timeout?

Many thanks for your feedback.

Regards - Willi



___
ceph-users mailing list
ceph-users@lists.ceph.com 
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] cephfs mount /etc/fstab

2016-06-26 Thread Willi Fehler

Hi Christian,

thank you. I found the _netdev option by myself. I was a little bit
confused because the official Ceph documentation gives no hint that you
should use _netdev.


One last question: I have 3 nodes with 9 OSDs:

[root@linsrv001 ~]# ceph osd tree
ID WEIGHT  TYPE NAME  UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.13129 root default
-2 0.04376 host linsrv002
 2 0.01459 osd.2   up  1.0  1.0
 3 0.01459 osd.3   up  1.0  1.0
 4 0.01459 osd.4   up  1.0  1.0
-3 0.04376 host linsrv003
 5 0.01459 osd.5   up  1.0  1.0
 6 0.01459 osd.6   up  1.0  1.0
 7 0.01459 osd.7   up  1.0  1.0
-4 0.04376 host linsrv001
 8 0.01459 osd.8   up  1.0  1.0
 9 0.01459 osd.9   up  1.0  1.0
10 0.01459 osd.10  up  1.0  1.0

My pool is configured with a replicated size of 3. If I write a 1 GB
file, does that mean the data is on 3 OSDs?
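
With a replicated size of 3 and the default CRUSH rule it does: a 1 GB
file is striped into several RADOS objects (4 MB each by default), and
every one of those objects is stored on three different OSDs, one per
host. One way to look at the placement, sketched with a made-up object
name:

rados -p cephfs_data ls | head                    # list some of the objects in the data pool
ceph osd map cephfs_data 10000000000.00000000     # show the PG and the acting OSDs for one object (example name)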


Have a nice Sunday.

Regards - Willi

On 26.06.16 at 10:30, Christian Balzer wrote:

Hello,

On Sun, 26 Jun 2016 09:33:10 +0200 Willi Fehler wrote:


Hello,

I found an issue. I've added a ceph mount to my /etc/fstab. But when I
boot my system it hangs:

libceph: connect 192.168.0.5:6789 error -101

After the system is booted I can successfully run mount -a.


So what does that tell you?
That Ceph can't connect during boot, because... there's no network yet.

This is what the "_netdev" mount option is for.
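
In other words, the fstab entry should carry _netdev so the mount is
deferred until the network is up; as a sketch (options and key file taken
from the entries quoted elsewhere in this archive):

192.168.0.5,192.168.0.6,192.168.0.7:/ /mnt/cephfs ceph noatime,dirstat,name=admin,secretfile=/etc/ceph/cephfs.secret,_netdev 0 0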

Christian


Regards - Willi


On 25.06.16 at 11:52, Willi Fehler wrote:

Hello,

Fixed it by myself: the secret was not in Base64. But I have a
question: should I deploy my clients with ceph-deploy, which would give
me a later version of libceph that also supports records
in /etc/hosts?

Regards - Willi

On 25.06.16 at 10:42, Willi Fehler wrote:

Hello,

I hope somebody could help. I'm trying to mount cephfs on CentOS 7 in
/etc/fstab.

192.168.0.5:/ /mnt/cephfs ceph
noatime,dirstat,name=admin,secret=mykey 0 0

Which is giving me.

[root@linsrv004 ~]# mount -a
mount: wrong fs type, bad option, bad superblock on 192.168.0.5:/,
missing codepage or helper program, or other error

In some cases useful info is found in syslog - try
dmesg | tail or so.

I haven't installed ceph or ceph-common.
The following command works:

mount -t ceph -o name=admin,secret=mykey
192.168.0.5,192.168.0.6,192.168.0.7:/ /mnt/cephfs

Many thanks & Regards,
Willi
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com





___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] cephfs mount /etc/fstab

2016-06-26 Thread Willi Fehler

Hello,

I found an issue. I've added a ceph mount to my /etc/fstab. But when I 
boot my system it hangs:


libceph: connect 192.168.0.5:6789 error -101

After the system is booted I can successfully run mount -a.

Regards - Willi


On 25.06.16 at 11:52, Willi Fehler wrote:

Hello,

Fixed it by myself: the secret was not in Base64. But I have a
question: should I deploy my clients with ceph-deploy, which would give
me a later version of libceph that also supports records in /etc/hosts?


Regards - Willi

On 25.06.16 at 10:42, Willi Fehler wrote:

Hello,

I hope somebody could help. I'm trying to mount cephfs on CentOS 7 in 
/etc/fstab.


192.168.0.5:/ /mnt/cephfs ceph 
noatime,dirstat,name=admin,secret=mykey 0 0


Which is giving me.

[root@linsrv004 ~]# mount -a
mount: wrong fs type, bad option, bad superblock on 192.168.0.5:/,
   missing codepage or helper program, or other error

   In some cases useful info is found in syslog - try
   dmesg | tail or so.

I haven't installed ceph or ceph-common.
The following command works:

mount -t ceph -o name=admin,secret=mykey 
192.168.0.5,192.168.0.6,192.168.0.7:/ /mnt/cephfs


Many thanks & Regards,
Willi
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] cephfs mount /etc/fstab

2016-06-25 Thread Willi Fehler

Hello,

Fixed it by myself: the secret was not in Base64. But I have a question:
should I deploy my clients with ceph-deploy, which would give me a later
version of libceph that also supports records in /etc/hosts?
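
For the record, the kernel client wants the bare base64 key rather than a
whole keyring file; a sketch of one way to set that up (client.admin and
the file path are just examples):

ceph auth get-key client.admin > /etc/ceph/cephfs.secret   # write only the base64 key
chmod 600 /etc/ceph/cephfs.secret
mount -t ceph 192.168.0.5,192.168.0.6,192.168.0.7:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/cephfs.secret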


Regards - Willi

On 25.06.16 at 10:42, Willi Fehler wrote:

Hello,

I hope somebody could help. I'm trying to mount cephfs on CentOS 7 in 
/etc/fstab.


192.168.0.5:/ /mnt/cephfs ceph noatime,dirstat,name=admin,secret=mykey 
0 0


Which is giving me.

[root@linsrv004 ~]# mount -a
mount: wrong fs type, bad option, bad superblock on 192.168.0.5:/,
   missing codepage or helper program, or other error

   In some cases useful info is found in syslog - try
   dmesg | tail or so.

I haven't installed ceph or ceph-common.
The following command works:

mount -t ceph -o name=admin,secret=mykey 
192.168.0.5,192.168.0.6,192.168.0.7:/ /mnt/cephfs


Many thanks & Regards,
Willi
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] cephfs mount /etc/fstab

2016-06-25 Thread Willi Fehler

Hello,

I hope somebody could help. I'm trying to mount cephfs on CentOS 7 in 
/etc/fstab.


192.168.0.5:/ /mnt/cephfs ceph noatime,dirstat,name=admin,secret=mykey 0 0

Which is giving me.

[root@linsrv004 ~]# mount -a
mount: wrong fs type, bad option, bad superblock on 192.168.0.5:/,
   missing codepage or helper program, or other error

   In some cases useful info is found in syslog - try
   dmesg | tail or so.

I haven't installed ceph or ceph-common.
The following command works:

mount -t ceph -o name=admin,secret=mykey 
192.168.0.5,192.168.0.6,192.168.0.7:/ /mnt/cephfs


Many thanks & Regards,
Willi
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] libceph dns resolution

2016-06-22 Thread Willi Fehler

Hello,

I'm trying to mount a ceph storage. It seems that libceph does not 
support records in /etc/hosts?


libceph: parse_ips bad ip 'linsrv001.willi-net.local'
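
That matches the kernel client behaviour at the time: the monitor list is
parsed as literal IP addresses, so host names are rejected. A workaround
sketch is to resolve the names before mounting (host names and mount
options are assumptions based on the other threads):

MON_IPS=$(getent hosts linsrv001 linsrv002 linsrv003 | awk '{print $1}' | paste -sd, -)
mount -t ceph "$MON_IPS:/" /mnt/cephfs -o name=admin,secretfile=/etc/ceph/cephfs.secret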

Regards - Willi
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph monitor ip address issue

2015-09-08 Thread Willi Fehler

Hi Chris,

thank you for your support. I will try to reconfigure my settings.

Regards - Willi

On 08.09.15 at 08:43, Chris Taylor wrote:

Willi,

Looking at your conf file a second time, it looks like you have the 
MONs on the same boxes as the OSDs. Is this correct? In my cluster the 
MONs are on separate boxes.


I'm making an assumption with your public_network, but  try changing your
mon_host = 10.10.10.1,10.10.10.2,10.10.10.3
to
mon_host = 192.168.0.1,192.168.0.2,192.168.0.3

You might also need to change your hosts file to reflect the correct 
names and IP addresses also.




My ceph.conf:

[global]
fsid = d960d672-e035-413d-ba39-8341f4131760
mon_initial_members = ceph-mon1, ceph-mon2, ceph-mon3
mon_host = 10.20.0.11,10.20.0.12,10.20.0.13
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true
osd_pool_default_size = 2
public_network = 10.20.0.0/24
cluster_network = 10.21.0.0/24

[osd]
osd recovery max active = 1
osd max backfills = 1
filestore max sync interval = 30
filestore min sync interval = 29
filestore flusher = false
filestore queue max ops = 1
filestore op threads = 2
osd op threads = 2

[client]
rbd cache = true
rbd cache writethrough until flush = true




On 09/07/2015 10:20 PM, Willi Fehler wrote:

Hi Chris,

could you please send me your ceph.conf? I tried to set "mon addr"
but it looks like it was ignored all the time.


Regards - Willi


On 07.09.15 at 20:47, Chris Taylor wrote:
My monitors are only connected to the public network, not the 
cluster network. Only the OSDs are connected to the cluster network.


Take a look at the diagram here:
http://ceph.com/docs/master/rados/configuration/network-config-ref/

-Chris

On 09/07/2015 03:15 AM, Willi Fehler wrote:

Hi,

any ideas?

Many thanks,
Willi

On 07.09.15 at 08:59, Willi Fehler wrote:

Hello,

I'm trying to setup my first Ceph Cluster on Hammer.

[root@linsrv002 ~]# ceph -v
ceph version 0.94.3 (95cefea9fd9ab740263bf8bb4796fd864d9afe2b)

[root@linsrv002 ~]# ceph -s
cluster 7a8cc185-d7f1-4dd5-9fe6-42cfd5d3a5b7
 health HEALTH_OK
 monmap e1: 3 mons at 
{linsrv001=10.10.10.1:6789/0,linsrv002=10.10.10.2:6789/0,linsrv003=10.10.10.3:6789/0}
election epoch 256, quorum 0,1,2 
linsrv001,linsrv002,linsrv003

 mdsmap e60: 1/1/1 up {0=linsrv001=up:active}, 2 up:standby
 osdmap e622: 9 osds: 9 up, 9 in
  pgmap v1216: 384 pgs, 3 pools, 2048 MB data, 532 objects
6571 MB used, 398 GB / 404 GB avail
 384 active+clean

My issue is that I have two networks, a public network
192.168.0.0/24 and a cluster network 10.10.10.0/24, and my monitors
should listen on 192.168.0.0/24. Later I want to use CephFS over
the public network.


[root@linsrv002 ~]# cat /etc/ceph/ceph.conf
[global]
fsid = 7a8cc185-d7f1-4dd5-9fe6-42cfd5d3a5b7
mon_initial_members = linsrv001, linsrv002, linsrv003
mon_host = 10.10.10.1,10.10.10.2,10.10.10.3
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true
mon_clock_drift_allowed = 1
public_network = 192.168.0.0/24
cluster_network = 10.10.10.0/24

[root@linsrv002 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 
localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 
localhost6.localdomain6

10.10.10.1   linsrv001
10.10.10.2   linsrv002
10.10.10.3   linsrv003

I've deployed my first cluster with ceph-deploy. What should I do
to have :6789 listening on the public network?


Regards - Willi





___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com






___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph monitor ip address issue

2015-09-08 Thread Willi Fehler

Hi Chris,

I tried to reconfigure my cluster but my MONs are still using the wrong 
network. The new ceph.conf was pushed to all nodes and ceph was restarted.


[root@linsrv001 ~]# netstat -tulpen
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address      Foreign Address   State    User   Inode   PID/Program name
tcp        0      0 10.10.10.1:6789    0.0.0.0:*         LISTEN   0      19969   1793/ceph-mon


[root@linsrv001 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 
localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 
localhost6.localdomain6

192.168.0.5   linsrv001
192.168.0.6   linsrv002
192.168.0.7   linsrv003

[root@linsrv001 ~]# cat /etc/ceph/ceph.conf
[global]
fsid = 7a8cc185-d7f1-4dd5-9fe6-42cfd5d3a5b7
mon_initial_members = linsrv001, linsrv002, linsrv003
mon_host = 192.168.0.5,192.168.0.6,192.168.0.7
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true
osd_pool_default_size = 2
public_network = 192.168.0.0/24
cluster_network = 10.10.10.0/24

[osd]
osd recovery max active = 1
osd max backfills = 1
filestore max sync interval = 30
filestore min sync interval = 29
filestore flusher = false
filestore queue max ops = 1
filestore op threads = 2
osd op threads = 2

[client]
rbd cache = true
rbd cache writethrough until flush = true

Regards - Willi

On 08.09.15 at 08:53, Willi Fehler wrote:

Hi Chris,

thank you for your support. I will try to reconfigure my settings.

Regards - Willi

On 08.09.15 at 08:43, Chris Taylor wrote:

Willi,

Looking at your conf file a second time, it looks like you have the 
MONs on the same boxes as the OSDs. Is this correct? In my cluster 
the MONs are on separate boxes.


I'm making an assumption with your public_network, but  try changing your
mon_host = 10.10.10.1,10.10.10.2,10.10.10.3
to
mon_host = 192.168.0.1,192.168.0.2,192.168.0.3

You might also need to change your hosts file to reflect the correct 
names and IP addresses also.




My ceph.conf:

[global]
fsid = d960d672-e035-413d-ba39-8341f4131760
mon_initial_members = ceph-mon1, ceph-mon2, ceph-mon3
mon_host = 10.20.0.11,10.20.0.12,10.20.0.13
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true
osd_pool_default_size = 2
public_network = 10.20.0.0/24
cluster_network = 10.21.0.0/24

[osd]
osd recovery max active = 1
osd max backfills = 1
filestore max sync interval = 30
filestore min sync interval = 29
filestore flusher = false
filestore queue max ops = 1
filestore op threads = 2
osd op threads = 2

[client]
rbd cache = true
rbd cache writethrough until flush = true




On 09/07/2015 10:20 PM, Willi Fehler wrote:

Hi Chris,

could you please send me your ceph.conf? I tried to set "mon addr"
but it looks like it was ignored all the time.


Regards - Willi


On 07.09.15 at 20:47, Chris Taylor wrote:
My monitors are only connected to the public network, not the 
cluster network. Only the OSDs are connected to the cluster network.


Take a look at the diagram here:
http://ceph.com/docs/master/rados/configuration/network-config-ref/

-Chris

On 09/07/2015 03:15 AM, Willi Fehler wrote:

Hi,

any ideas?

Many thanks,
Willi

On 07.09.15 at 08:59, Willi Fehler wrote:

Hello,

I'm trying to setup my first Ceph Cluster on Hammer.

[root@linsrv002 ~]# ceph -v
ceph version 0.94.3 (95cefea9fd9ab740263bf8bb4796fd864d9afe2b)

[root@linsrv002 ~]# ceph -s
cluster 7a8cc185-d7f1-4dd5-9fe6-42cfd5d3a5b7
 health HEALTH_OK
 monmap e1: 3 mons at 
{linsrv001=10.10.10.1:6789/0,linsrv002=10.10.10.2:6789/0,linsrv003=10.10.10.3:6789/0}
election epoch 256, quorum 0,1,2 
linsrv001,linsrv002,linsrv003

 mdsmap e60: 1/1/1 up {0=linsrv001=up:active}, 2 up:standby
 osdmap e622: 9 osds: 9 up, 9 in
  pgmap v1216: 384 pgs, 3 pools, 2048 MB data, 532 objects
6571 MB used, 398 GB / 404 GB avail
 384 active+clean

My issue is that I have two networks, a public network
192.168.0.0/24 and a cluster network 10.10.10.0/24, and my
monitors should listen on 192.168.0.0/24. Later I want to use
CephFS over the public network.


[root@linsrv002 ~]# cat /etc/ceph/ceph.conf
[global]
fsid = 7a8cc185-d7f1-4dd5-9fe6-42cfd5d3a5b7
mon_initial_members = linsrv001, linsrv002, linsrv003
mon_host = 10.10.10.1,10.10.10.2,10.10.10.3
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true
mon_clock_drift_allowed = 1
public_network = 192.168.0.0/24
cluster_network = 10.10.10.0/24

[root@linsrv002 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 
localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 
localhost6.localdomain6

10.10.10.1   linsrv001
10.10.10.2   linsrv002
10.10.10.3   linsrv003

I've deployed my first cluster

Re: [ceph-users] Ceph monitor ip address issue

2015-09-08 Thread Willi Fehler

Hi,

many thanks for your feedback. I've redeployed my cluster and now it is
working. One last beginner question:

The replication size has defaulted to 3 for a while now. If I set
min_size to 1, does that mean that in a 3-node cluster 2 nodes (no matter
which ones) could crash and I would still have a working cluster?


Regards - Willi

On 08.09.15 at 10:23, Joao Eduardo Luis wrote:

On 09/08/2015 08:13 AM, Willi Fehler wrote:

Hi Chris,

I tried to reconfigure my cluster but my MONs are still using the wrong
network. The new ceph.conf was pushed to all nodes and ceph was restarted.

If your monitors are already deployed, you will need to move them to the
new network manually. Once deployed, the monitors no longer consult
ceph.conf for their addresses but use the monmap instead - only
clients will look into ceph.conf to figure out where the monitors are.

You will need to follow the procedure to add/rm monitors [1].

HTH.

   -Joao

[1] http://ceph.com/docs/master/rados/operations/add-or-rm-mons/
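
Roughly, and only as a sketch (the linked add/rm procedure is the
authoritative reference, and it should be done one monitor at a time so
quorum is never lost), that could look like:

ceph mon remove linsrv001       # drop the monitor with the old 10.10.10.x address from the monmap
ceph-deploy mon add linsrv001   # re-create it so it binds to its 192.168.0.x address (assumes ceph-deploy, as used for the original deployment)
ceph mon stat                   # verify the new address and that quorum is restored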




[root@linsrv001 ~]# netstat -tulpen
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address      Foreign Address   State    User   Inode   PID/Program name
tcp        0      0 10.10.10.1:6789    0.0.0.0:*         LISTEN   0      19969   1793/ceph-mon

[root@linsrv001 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4
localhost4.localdomain4
::1 localhost localhost.localdomain localhost6
localhost6.localdomain6
192.168.0.5   linsrv001
192.168.0.6   linsrv002
192.168.0.7   linsrv003

[root@linsrv001 ~]# cat /etc/ceph/ceph.conf
[global]
fsid = 7a8cc185-d7f1-4dd5-9fe6-42cfd5d3a5b7
mon_initial_members = linsrv001, linsrv002, linsrv003
mon_host = 192.168.0.5,192.168.0.6,192.168.0.7
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true
osd_pool_default_size = 2
public_network = 192.168.0.0/24
cluster_network = 10.10.10.0/24

[osd]
osd recovery max active = 1
osd max backfills = 1
filestore max sync interval = 30
filestore min sync interval = 29
filestore flusher = false
filestore queue max ops = 1
filestore op threads = 2
osd op threads = 2

[client]
rbd cache = true
rbd cache writethrough until flush = true

Regards - Willi

On 08.09.15 at 08:53, Willi Fehler wrote:

Hi Chris,

thank you for your support. I will try to reconfigure my settings.

Regards - Willi

On 08.09.15 at 08:43, Chris Taylor wrote:

Willi,

Looking at your conf file a second time, it looks like you have the
MONs on the same boxes as the OSDs. Is this correct? In my cluster
the MONs are on separate boxes.

I'm making an assumption with your public_network, but  try changing your
 mon_host = 10.10.10.1,10.10.10.2,10.10.10.3
to
 mon_host = 192.168.0.1,192.168.0.2,192.168.0.3

You might also need to change your hosts file to reflect the correct
names and IP addresses also.



My ceph.conf:

[global]
fsid = d960d672-e035-413d-ba39-8341f4131760
mon_initial_members = ceph-mon1, ceph-mon2, ceph-mon3
mon_host = 10.20.0.11,10.20.0.12,10.20.0.13
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true
osd_pool_default_size = 2
public_network = 10.20.0.0/24
cluster_network = 10.21.0.0/24

[osd]
osd recovery max active = 1
osd max backfills = 1
filestore max sync interval = 30
filestore min sync interval = 29
filestore flusher = false
filestore queue max ops = 1
filestore op threads = 2
osd op threads = 2

[client]
rbd cache = true
rbd cache writethrough until flush = true




On 09/07/2015 10:20 PM, Willi Fehler wrote:

Hi Chris,

could you please send me your ceph.conf? I tried to set "mon addr"
but it looks like it was ignored all the time.

Regards - Willi


On 07.09.15 at 20:47, Chris Taylor wrote:

My monitors are only connected to the public network, not the
cluster network. Only the OSDs are connected to the cluster network.

Take a look at the diagram here:
http://ceph.com/docs/master/rados/configuration/network-config-ref/

-Chris

On 09/07/2015 03:15 AM, Willi Fehler wrote:

Hi,

any ideas?

Many thanks,
Willi

On 07.09.15 at 08:59, Willi Fehler wrote:

Hello,

I'm trying to setup my first Ceph Cluster on Hammer.

[root@linsrv002 ~]# ceph -v
ceph version 0.94.3 (95cefea9fd9ab740263bf8bb4796fd864d9afe2b)

[root@linsrv002 ~]# ceph -s
 cluster 7a8cc185-d7f1-4dd5-9fe6-42cfd5d3a5b7
  health HEALTH_OK
  monmap e1: 3 mons at
{linsrv001=10.10.10.1:6789/0,linsrv002=10.10.10.2:6789/0,linsrv003=10.10.10.3:6789/0}
 election epoch 256, quorum 0,1,2
linsrv001,linsrv002,linsrv003
  mdsmap e60: 1/1/1 up {0=linsrv001=up:active}, 2 up:standby
  osdmap e622: 9 osds: 9 up, 9 in
   pgmap v1216: 384 pgs, 3 pools, 2048 MB data, 532 objects
 6571 MB used, 398 GB / 404 GB avail
  384 active+clean

My issue is that I have two networks, a public network
192.168.0.0/24 and

[ceph-users] Ceph monitor ip address issue

2015-09-07 Thread Willi Fehler

Hello,

I'm trying to setup my first Ceph Cluster on Hammer.

[root@linsrv002 ~]# ceph -v
ceph version 0.94.3 (95cefea9fd9ab740263bf8bb4796fd864d9afe2b)

[root@linsrv002 ~]# ceph -s
cluster 7a8cc185-d7f1-4dd5-9fe6-42cfd5d3a5b7
 health HEALTH_OK
 monmap e1: 3 mons at 
{linsrv001=10.10.10.1:6789/0,linsrv002=10.10.10.2:6789/0,linsrv003=10.10.10.3:6789/0}

election epoch 256, quorum 0,1,2 linsrv001,linsrv002,linsrv003
 mdsmap e60: 1/1/1 up {0=linsrv001=up:active}, 2 up:standby
 osdmap e622: 9 osds: 9 up, 9 in
  pgmap v1216: 384 pgs, 3 pools, 2048 MB data, 532 objects
6571 MB used, 398 GB / 404 GB avail
 384 active+clean

My issue is that I have two networks, a public network 192.168.0.0/24 and
a cluster network 10.10.10.0/24, and my monitors should listen on
192.168.0.0/24. Later I want to use CephFS over the public network.


[root@linsrv002 ~]# cat /etc/ceph/ceph.conf
[global]
fsid = 7a8cc185-d7f1-4dd5-9fe6-42cfd5d3a5b7
mon_initial_members = linsrv001, linsrv002, linsrv003
mon_host = 10.10.10.1,10.10.10.2,10.10.10.3
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true
mon_clock_drift_allowed = 1
public_network = 192.168.0.0/24
cluster_network = 10.10.10.0/24

[root@linsrv002 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 
localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 
localhost6.localdomain6

10.10.10.1   linsrv001
10.10.10.2   linsrv002
10.10.10.3   linsrv003

I've deployed my first cluster with ceph-deploy. What should I do to
have :6789 listening on the public network?


Regards - Willi

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph monitor ip address issue

2015-09-07 Thread Willi Fehler

Hi Chris,

could you please send me your ceph.conf? I tried to set "mon addr" but
it looks like it was ignored all the time.


Regards - Willi


On 07.09.15 at 20:47, Chris Taylor wrote:
My monitors are only connected to the public network, not the cluster 
network. Only the OSDs are connected to the cluster network.


Take a look at the diagram here:
http://ceph.com/docs/master/rados/configuration/network-config-ref/

-Chris

On 09/07/2015 03:15 AM, Willi Fehler wrote:

Hi,

any ideas?

Many thanks,
Willi

On 07.09.15 at 08:59, Willi Fehler wrote:

Hello,

I'm trying to setup my first Ceph Cluster on Hammer.

[root@linsrv002 ~]# ceph -v
ceph version 0.94.3 (95cefea9fd9ab740263bf8bb4796fd864d9afe2b)

[root@linsrv002 ~]# ceph -s
cluster 7a8cc185-d7f1-4dd5-9fe6-42cfd5d3a5b7
 health HEALTH_OK
 monmap e1: 3 mons at 
{linsrv001=10.10.10.1:6789/0,linsrv002=10.10.10.2:6789/0,linsrv003=10.10.10.3:6789/0}
election epoch 256, quorum 0,1,2 
linsrv001,linsrv002,linsrv003

 mdsmap e60: 1/1/1 up {0=linsrv001=up:active}, 2 up:standby
 osdmap e622: 9 osds: 9 up, 9 in
  pgmap v1216: 384 pgs, 3 pools, 2048 MB data, 532 objects
6571 MB used, 398 GB / 404 GB avail
 384 active+clean

My issue is that I have two networks, a public network 192.168.0.0/24
and a cluster network 10.10.10.0/24, and my monitors should listen on
192.168.0.0/24. Later I want to use CephFS over the public network.


[root@linsrv002 ~]# cat /etc/ceph/ceph.conf
[global]
fsid = 7a8cc185-d7f1-4dd5-9fe6-42cfd5d3a5b7
mon_initial_members = linsrv001, linsrv002, linsrv003
mon_host = 10.10.10.1,10.10.10.2,10.10.10.3
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true
mon_clock_drift_allowed = 1
public_network = 192.168.0.0/24
cluster_network = 10.10.10.0/24

[root@linsrv002 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 
localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 
localhost6.localdomain6

10.10.10.1   linsrv001
10.10.10.2   linsrv002
10.10.10.3   linsrv003

I've deployed my first cluster with ceph-deploy. What should I do to
have :6789 listening on the public network?


Regards - Willi





___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com