Re: [ceph-users] reproducable rbd-nbd crashes

2019-07-22 Thread Marc Schöchlin
Hi Mike,

On 22.07.19 at 16:48, Mike Christie wrote:
> On 07/22/2019 06:00 AM, Marc Schöchlin wrote:
>>> With older kernels no timeout would be set for each command by default,
>>> so if you were not running that tool then you would not see the nbd
>>> disconnect+io_errors+xfs issue. You would just see slow IOs.
>>>
>>> With newer kernels, like 4.15, nbd.ko always sets a per command timeout
>>> even if you do not set it via a nbd ioctl/netlink command. By default
>>> the timeout is 30 seconds. After the timeout period then the kernel does
>>> that disconnect+IO_errors error handling which causes xfs to get errors.
>>>
>> Did i get you correctly: Setting a unlimited timeout should prevent crashes 
>> on kernel 4.15?
> It looks like with newer kernels there is no way to turn it off.
>
> You can set it really high. There is no max check and so it depends on
> various calculations and what some C types can hold and how your kernel
> is compiled. You should be able to set the timer to an hour.

Okay, I already experimented with high timeouts (e.g. 600 seconds). As far as I can 
remember, this led to a pretty unusable system if I put high amounts of IO on 
the EC volume.
This system also runs a krbd volume which saturates the system with ~30-60% 
iowait - that volume never had a problem.

A commenter on https://tracker.ceph.com/issues/40822#change-141205 
suggests reducing the rbd cache.
What do you think about that?
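
(If it helps the discussion: such a reduction would presumably be a client-side 
ceph.conf change along the lines sketched below. The values are only illustrative, 
not a recommendation from this thread.)

    [client]
        rbd cache = true
        rbd cache size = 16777216        # e.g. 16 MiB instead of the 32 MiB default
        rbd cache max dirty = 8388608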

>
>> For testing purposes i set the timeout to unlimited ("nbd_set_ioctl 
>> /dev/nbd0 0", on already mounted device).
>> I re-executed the problem procedure and discovered that the 
>> compression-procedure crashes not at the same file, but crashes 30 seconds 
>> later with the same crash behavior.
>>
> 0 will cause the default timeout of 30 secs to be used.

Okay, then the usage description of 
https://github.com/OnApp/nbd-kernel_mod/blob/master/nbd_set_timeout.c does not 
seem to be correct :-)
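
(For the record, setting a long but finite per-command timeout with that tool would 
look roughly like the line below. The binary name and argument order are assumed 
from the invocation quoted above; 0 is not "unlimited", it falls back to the 30 s 
kernel default.)

    nbd_set_ioctl /dev/nbd0 3600    # per-command timeout of one hour, in seconds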

Regards
Marc

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Iscsi in the nautilus Dashboard

2019-07-22 Thread Kaspar Bosma

Hi Brent,

As far as I know, version 3.0 (which I assume is version 9) is the minimum required 
for the dashboard. I would go with the latest from Shaman; it won't break the actual 
iSCSI part of the setup, only maybe the iSCSI support in the dashboard. I haven't 
tried it myself, I'm still at version 2.7 (which is version 8, I would gather...)

Kaspar

Re: [ceph-users] reproducable rbd-nbd crashes

2019-07-22 Thread Marc Schöchlin
Hi Mike,

On 22.07.19 at 17:01, Mike Christie wrote:
> On 07/19/2019 02:42 AM, Marc Schöchlin wrote:
>> We have ~500 heavy load rbd-nbd devices in our xen cluster (rbd-nbd 12.2.5, 
>> kernel 4.4.0+10, centos clone) and ~20 high load krbd devices (kernel 
>> 4.15.0-45, ubuntu 16.04) - we never experienced problems like this.
> For this setup, do you have 257 or more rbd-nbd devices running on a
> single system?
No, these rbd-nbds are distributed over more than a dozen Xen dom-0 systems 
on our XenServers.
> If so then you are hitting another bug where newer kernels only support
> 256 devices. It looks like a regression was added when mq and netlink
> support was added upstream. You can create more then 256 devices, but
> some devices will not be able to execute any IO. Commands sent to the
> rbd-nbd device are going to always timeout and you will see the errors
> in your log.
>
> I am testing some patches for that right now.

From my point of view there is no limitation besides IO, from the Ceph cluster 
perspective.

Regards
Marc

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Iscsi in the nautilus Dashboard

2019-07-22 Thread Brent Kennedy
I posted to the ceph-iscsi GitHub, but Dillaman noted that 3.2 was version 10, 
which means that wouldn't solve the issue with the version 9 requirement of the 
current 14.2.2 Nautilus. Paul noted 3.1 is “pretty broken”, so which version 
is version 9? Or should I hack/patch the dashboard in 14.2.2 to accept version 
10 or 8?

 

-Brent

 

From: Kaspar Bosma  
Sent: Monday, July 22, 2019 8:11 AM
To: Paul Emmerich ; Brent Kennedy 
Cc: ceph-users 
Subject: Re: [ceph-users] Iscsi in the nautilus Dashboard

 

Hi all,

That was not the most recent. This is it (3.2.4): 
https://2.chacra.ceph.com/r/ceph-iscsi/master/8a3967698257e1b49a9d554847b84418c15da902/centos/7/flavors/default/

Kaspar

On 22 July 2019 at 14:01, Kaspar Bosma <kaspar.bo...@home.nl> wrote: 

Hi Brent,

You may want to have a look at the repos at shaman.ceph.com.

The latest (3.2.2) packaged version of Ceph iSCSI is located here:

https://4.chacra.ceph.com/r/ceph-iscsi/master/ff5e6873c43ab6828d3f7264526100b95a7e3954/centos/7/flavors/default/noarch/

You can also find related package repos for the tcmu-runner and ceph-iscsi-cli 
projects.

Regards, Kaspar

On 22 July 2019 at 12:52, Paul Emmerich <paul.emmer...@croit.io> wrote: 

Version 9 is the fqdn stuff which was introduced in 3.1.

Use 3.2 as 3.1 is pretty broken. 

 

Paul

 

-- 
Paul Emmerich 

Looking for help with your Ceph cluster? Contact us at https://croit.io 

croit GmbH 
Freseniusstr. 31h 
81247 München 
www.croit.io   
Tel: +49 89 1896585 90

 

 

On Mon, Jul 22, 2019 at 3:24 AM Brent Kennedy <bkenn...@cfl.rr.com> wrote: 

I have a test cluster running centos 7.6 setup with two iscsi gateways ( per 
the requirement ).  I have the dashboard setup in nautilus ( 14.2.2 ) and I 
added the iscsi gateways via the command.  Both show down and when I go to the 
dashboard it states:

 

“ Unsupported `ceph-iscsi` config version. Expected 9 but found 8.  “

 

Both iscsi gateways were set up from scratch since the latest and greatest 
packages required for the ceph iscsi install are not available in the centos 
repositories.  Is 3.0 not considered version 9?  (Did I do something wrong?) 
Why is it called/detected as version 8 when it's version 3?

 

I am also wondering: the package versions listed as required in the nautilus 
docs (http://docs.ceph.com/docs/nautilus/rbd/iscsi-target-cli/) state x.x.x or 
NEWER, but when I try to add a gateway, gwcli complains about the tcmu-runner 
and targetcli versions and I have to use the skipchecks=true option when 
adding them.
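
(For reference, that flag is passed when creating the gateway entry in gwcli, 
roughly as sketched below - the target IQN, gateway name and IP are placeholders, 
not values from this setup.)

    # inside gwcli, from the target's gateways node:
    /iscsi-targets/iqn.2019-07.com.example:target1/gateways> create ceph-gw-1 192.168.1.10 skipchecks=true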

 

Another thing came up, might be doing it wrong as well:  

Added a disk, then added the client, then tried to add the auth using the auth 
command and it states: “Failed to update the client's auth: Invalid password”

 

Actual output:

/iscsi-target...erpix:backup1> auth username=test password=test

CMD: ../hosts/ auth *

username=test, password=test, mutual_username=None, mutual_password=None

CMD: ../hosts/ auth *

auth to be set to username='test', password='test', mutual_username='None', 
mutual_password='None' for 'iqn.2019-07.com.somgthing:backup1'

Failed to update the client's auth: Invalid username

 

Did I miss something in the setup doc?

 

Installed packages:

rtslib:  wget https://github.com/open-iscsi/rtslib-fb/archive/v2.1.fb69.tar.gz

target-cli: wget 
https://github.com/open-iscsi/targetcli-fb/archive/v2.1.fb49.tar.gz 

tcmu-runner: wget 
https://github.com/open-iscsi/tcmu-runner/archive/v1.4.1.tar.gz 

ceph-iscsi: wget https://github.com/ceph/ceph-iscsi/archive/3.0.tar.gz 

configshell: wget 
https://github.com/open-iscsi/configshell-fb/archive/v1.1.fb25.tar.gz 

 

Other bits I installed as part of this:

yum install epel-release python-pip python-devel -y

yum groupinstall "Development Tools" -y

python -m pip install --upgrade pip setuptools wheel

pip install netifaces cryptography flask

 

 

Any help or pointers would be greatly appreciated!

 

-Brent

 

Existing Clusters:

Test: Nautilus 14.2.2 with 3 osd servers, 1 mon/man, 1 gateway, 2 iscsi 
gateways ( all virtual on nvme )

US Production(HDD): Nautilus 14.2.1 with 11 osd servers, 3 mons, 4 gateways 
behind haproxy LB

UK Production(HDD): Luminous 12.2.11 with 25 osd servers, 3 mons/man, 3 
gateways behind haproxy LB

US Production(SSD): Luminous 12.2.11 with 6 osd servers, 3 mons/man, 3 gateways 
behind haproxy LB

 

 

___ 
ceph-users mailing list 
ceph-users@lists.ceph.com   
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com 


[ceph-users] Mark CephFS inode as lost

2019-07-22 Thread Robert LeBlanc
We have a Luminous cluster which has filled up to 100% multiple times and
this causes an inode to be left in a bad state. Doing anything to these
files causes the client to hang which requires evicting the client and
failing over the MDS. Usually we move the parent directory out of the way
and things mostly are okay. However in this last fill up, we have a
significant amount of storage that we have moved out of the way and really
need to reclaim that space. I can't delete the files around it as listing
the directory causes a hang.

We can get the inode that is bad from the logs/blocked_ops, how can we tell
MDS that the inode is lost and to forget about it without trying to do any
checks on it (checking the RADOS objects may be part of the problem)? Once
the inode is out of CephFS, we can clean up the RADOS objects manually or
leave them there to rot.

Thanks,
Robert LeBlanc

Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] MON crashing when upgrading from Hammer to Luminous

2019-07-22 Thread JC Lopez
First link should be this one: 
http://docs.ceph.com/docs/jewel/install/upgrading-ceph/#upgrade-procedures 
rather than 
http://docs.ceph.com/docs/mimic/install/upgrading-ceph/#upgrade-procedures 
to be consistent.

JC

> On Jul 22, 2019, at 13:38, JC Lopez  wrote:
> 
> Hi 
> 
> you’ll have to go from Hammer to Jewel then from Jewel to Luminous for a 
> smooth upgrade.
> - http://docs.ceph.com/docs/mimic/install/upgrading-ceph/#upgrade-procedures 
> 
> - 
> http://docs.ceph.com/docs/luminous/release-notes/#upgrading-from-pre-jewel-releases-like-hammer
>  
> 
> 
> Make sure to check any special upgrade requirement from the release notes.
> - 
> http://docs.ceph.com/docs/jewel/release-notes/#upgrading-from-infernalis-or-hammer
>  
> 
> - 
> http://docs.ceph.com/docs/luminous/release-notes/#upgrade-from-jewel-or-kraken
>  
> 
> 
> Regards
> JC
> 
>> On Jul 22, 2019, at 12:20, Armin Ranjbar wrote:
>> 
>> Dear Everyone,
>> 
>> First of all, guys, seriously, Thank you for Ceph.
>> 
>> now to the problem, upgrading ceph from 0.94.6 
>> (e832001feaf8c176593e0325c8298e3f16dfb403) to 12.2.12-218-g9fd889f 
>> (9fd889fe09c652512ca78854702d5ad9bf3059bb), ceph-mon seems unable to upgrade 
>> it's database, problem is gone if i --force-sync.
>> 
>> This is the message:
>> terminate called after throwing an instance of 
>> 'ceph::buffer::malformed_input'
>>   what():  buffer::malformed_input: void 
>> object_stat_sum_t::decode(ceph::buffer::list::iterator&) decode past end of 
>> struct encoding
>> *** Caught signal (Aborted) **
>> 
>> attached is full log, the output of:
>> ceph-mon --debug_mon 100 -i node-1 -d
>> 
>> ---
>> Armin ranjbar
>> 
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com 
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] MON crashing when upgrading from Hammer to Luminous

2019-07-22 Thread JC Lopez
Hi 

you’ll have to go from Hammer to Jewel, then from Jewel to Luminous, for a smooth 
upgrade.
- http://docs.ceph.com/docs/mimic/install/upgrading-ceph/#upgrade-procedures
- http://docs.ceph.com/docs/luminous/release-notes/#upgrading-from-pre-jewel-releases-like-hammer

Make sure to check any special upgrade requirement from the release notes.
- http://docs.ceph.com/docs/jewel/release-notes/#upgrading-from-infernalis-or-hammer
- http://docs.ceph.com/docs/luminous/release-notes/#upgrade-from-jewel-or-kraken


Regards
JC

> On Jul 22, 2019, at 12:20, Armin Ranjbar  wrote:
> 
> Dear Everyone,
> 
> First of all, guys, seriously, Thank you for Ceph.
> 
> now to the problem, upgrading ceph from 0.94.6 
> (e832001feaf8c176593e0325c8298e3f16dfb403) to 12.2.12-218-g9fd889f 
> (9fd889fe09c652512ca78854702d5ad9bf3059bb), ceph-mon seems unable to upgrade 
> it's database, problem is gone if i --force-sync.
> 
> This is the message:
> terminate called after throwing an instance of 'ceph::buffer::malformed_input'
>   what():  buffer::malformed_input: void 
> object_stat_sum_t::decode(ceph::buffer::list::iterator&) decode past end of 
> struct encoding
> *** Caught signal (Aborted) **
> 
> attached is full log, the output of:
> ceph-mon --debug_mon 100 -i node-1 -d
> 
> ---
> Armin ranjbar
> 
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Failed to get omap key when mirroring of image is enabled

2019-07-22 Thread Jason Dillaman
On Mon, Jul 22, 2019 at 3:26 PM Ajitha Robert  wrote:
>
> Thanks for your reply
>
> 1) In scenario 1, I didnt attempt to delete the cinder volume. Please find 
> the cinder volume log.
> http://paste.openstack.org/show/754731/

It might be better to ping Cinder folks about that one. It doesn't
really make sense to me from a quick glance.

>
> 2) In scenario 2. I will try with debug. But i m having a test setup with one 
> OSD in primary and one OSD in secondary. distance between two ceph clusters 
> is 300 km
>
>
> 3)I have disabled ceph authentication totally for all including rbd-mirror 
> daemon. Also i have deployed the ceph cluster using ceph-ansible. Will these 
> both  create any issue to the entire setup

Not to my knowledge.

> 4)The image which was in syncing mode, showed read only status in secondary.

Mirrored images are either primary or non-primary. It is the expected
(documented) behaviour that non-primary images are read-only.

> 5)In a presentation i found as journaling feature is causing poor performance 
> in IO operations and we can skip the journaling process for mirroring... Is 
> it possible.. By enabling mirroring to entire cinder pool as pool mode 
> instead of mirror mode of rbd mirroring.. And we can skip the 
> replication_enabled is true spec in cinder type..

Journaling is required for RBD mirroring.
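
(For completeness, enabling that per image looks roughly like the following - pool 
and image names are placeholders, and journaling also needs exclusive-lock:)

    rbd feature enable <pool>/<image> exclusive-lock
    rbd feature enable <pool>/<image> journaling
    rbd mirror image enable <pool>/<image>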

>
>
>
> On Mon, Jul 22, 2019 at 11:13 PM Jason Dillaman  wrote:
>>
>> On Mon, Jul 22, 2019 at 10:49 AM Ajitha Robert  
>> wrote:
>> >
>> > No error log in rbd-mirroring except some connection timeout came once,
>> > Scenario 1:
>> >   when I create a bootable volume of 100 GB with a glance image.Image get 
>> > downloaded and from cinder, volume log throws with "volume is busy 
>> > deleting volume that has snapshot" . Image was enabled with exclusive 
>> > lock, journaling, layering, object-map, fast-diff and deep-flatten
>> > Cinder volume is in error state but the rbd image is created in primary 
>> > but not in secondary.
>>
>> Any chance you know where in Cinder that error is being thrown? A
>> quick grep of the code doesn't reveal that error message. If the image
>> is being synced to the secondary site when you attempt to delete it,
>> it's possible you could hit this issue. Providing debug log messages
>> from librbd on the Cinder controller might also be helpful for this.
>>
>> > Scenario 2:
>> > but when i create a 50gb volume with another glance image. Volume  get 
>> > created. and in the backend i could see the rbd images both in primary and 
>> > secondary
>> >
>> > From rbd mirror image status i found secondary cluster starts copying , 
>> > and syncing was struck at around 14 %... It will be in 14 % .. no progress 
>> > at all. should I set any parameters for this like timeout??
>> >
>> > I manually checked rbd --cluster primary object-map check ..  
>> > No results came for the objects and the command was in hanging.. Thats why 
>> > got worried on the failed to map object key log. I couldnt even rebuild 
>> > the object map.
>>
>> It sounds like one or more of your primary OSDs are not reachable from
>> the secondary site. If you run w/ "debug rbd-mirror = 20" and "debug
>> rbd = 20", you should be able to see the last object it attempted to
>> copy. From that, you could use "ceph osd map" to figure out the
>> primary OSD for that object.
>>
>> > the image which was in syncing mode, showed read only status in secondary.
>> >
>> >
>> >
>> > On Mon, 22 Jul 2019, 17:36 Jason Dillaman,  wrote:
>> >>
>> >> On Sun, Jul 21, 2019 at 8:25 PM Ajitha Robert  
>> >> wrote:
>> >> >
>> >> >  I have a rbd mirroring setup with primary and secondary clusters as 
>> >> > peers and I have a pool enabled image mode.., In this i created a rbd 
>> >> > image , enabled with journaling.
>> >> >
>> >> > But whenever i enable mirroring on the image,  I m getting error in 
>> >> > osd.log. I couldnt trace it out. please guide me to solve this error.
>> >> >
>> >> > I think initially it worked fine. but after ceph process restart. these 
>> >> > error coming
>> >> >
>> >> >
>> >> > Secondary.osd.0.log
>> >> >
>> >> > 2019-07-22 05:36:17.371771 7ffbaa0e9700  0  
>> >> > /build/ceph-12.2.12/src/cls/journal/cls_journal.cc:61: failed to get 
>> >> > omap key: client_a5c76849-ba16-480a-a96b-ebfdb7f6ac65
>> >> > 2019-07-22 05:36:17.388552 7ffbaa0e9700  0  
>> >> > /build/ceph-12.2.12/src/cls/journal/cls_journal.cc:472: active object 
>> >> > set earlier than minimum: 0 < 1
>> >> > 2019-07-22 05:36:17.413102 7ffbaa0e9700  0  
>> >> > /build/ceph-12.2.12/src/cls/journal/cls_journal.cc:61: failed to get 
>> >> > omap key: order
>> >> > 2019-07-22 05:36:23.341490 7ffbab8ec700  0  
>> >> > /build/ceph-12.2.12/src/cls/rbd/cls_rbd.cc:4125: error retrieving image 
>> >> > id for global id '9e36b9f8-238e-4a54-a055-19b19447855e': (2) No such 
>> >> > file or directory
>> >> >
>> >> >
>> >> > primary-osd.0.log
>> >> >
>> >> > 2019-07-22 05:16:49.287769 7fae12db1700  0 log_channel(cluster) log 
>> >> > 

Re: [ceph-users] MON / MDS Storage Location

2019-07-22 Thread Jack
Hi,

mon: /var/lib/ceph/mon/*
mds: inside the cephfs_data and cephfs_metadata rados pools
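
(A quick way to see both, assuming a default deployment - the mon data path and 
pool names below may differ on your cluster:)

    ls /var/lib/ceph/mon/                  # one store.db directory per local mon
    ceph fs ls                             # shows the data/metadata pools backing CephFS
    rados -p cephfs_metadata ls | head     # MDS metadata objects live in the metadata pool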


On 07/22/2019 09:25 PM, dhils...@performair.com wrote:
> All;
> 
> Where, in the filesystem, do MONs and MDSs store their data?
> 
> Thank you,
> 
> Dominic L. Hilsbos, MBA 
> Director - Information Technology 
> Perform Air International Inc.
> dhils...@performair.com 
> www.PerformAir.com
> 
> 
> 
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] MON / MDS Storage Location

2019-07-22 Thread DHilsbos
All;

Where, in the filesystem, do MONs and MDSs store their data?

Thank you,

Dominic L. Hilsbos, MBA 
Director - Information Technology 
Perform Air International Inc.
dhils...@performair.com 
www.PerformAir.com



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] MON crashing when upgrading from Hammer to Luminous

2019-07-22 Thread Armin Ranjbar
Dear Everyone,

First of all, guys, seriously, Thank you for Ceph.

Now to the problem: upgrading Ceph from 0.94.6
(e832001feaf8c176593e0325c8298e3f16dfb403) to 12.2.12-218-g9fd889f
(9fd889fe09c652512ca78854702d5ad9bf3059bb), ceph-mon seems unable to
upgrade its database; the problem is gone if I --force-sync.

This is the message:
terminate called after throwing an instance of
'ceph::buffer::malformed_input'
  what():  buffer::malformed_input: void
object_stat_sum_t::decode(ceph::buffer::list::iterator&) decode past end of
struct encoding
*** Caught signal (Aborted) **

attached is full log, the output of:
ceph-mon --debug_mon 100 -i node-1 -d

---
Armin ranjbar
2019-07-22 19:17:54.429120 7f2064488f40  0 ceph version 12.2.12-218-g9fd889f 
(9fd889fe09c652512ca78854702d5ad9bf3059bb) luminous (stable), process ceph-mon, 
pid 908122
2019-07-22 19:17:54.429229 7f2064488f40  0 pidfile_write: ignore empty 
--pid-file
2019-07-22 19:17:54.438472 7f2064488f40  0 load: jerasure load: lrc load: isa 
2019-07-22 19:17:54.438908 7f2064488f40  1 leveldb: Recovering log #4402802
2019-07-22 19:17:54.489204 7f2064488f40  1 leveldb: Delete type=0 #4402802

2019-07-22 19:17:54.489263 7f2064488f40  1 leveldb: Delete type=3 #4402801

2019-07-22 19:17:54.489547 7f2064488f40 10 obtain_monmap
terminate called after throwing an instance of 'ceph::buffer::malformed_input'
  what():  buffer::malformed_input: void 
object_stat_sum_t::decode(ceph::buffer::list::iterator&) decode past end of 
struct encoding
*** Caught signal (Aborted) **
 in thread 7f2064488f40 thread_name:ceph-mon
2019-07-22 19:17:54.489654 7f2064488f40 10 obtain_monmap read last committed 
monmap ver 3
2019-07-22 19:17:54.490558 7f2064488f40  0 starting mon.node-1 rank 2 at public 
addr 192.168.1.16:6789/0 at bind addr 192.168.1.16:6789/0 mon_data 
/var/lib/ceph/mon/ceph-node-1 fsid cf635990-70fa-43ed-978d-96f92f9ccc92
2019-07-22 19:17:54.490737 7f2064488f40  0 starting mon.node-1 rank 2 at 
192.168.1.16:6789/0 mon_data /var/lib/ceph/mon/ceph-node-1 fsid 
cf635990-70fa-43ed-978d-96f92f9ccc92
2019-07-22 19:17:54.491279 7f2064488f40  1 mon.node-1@-1(probing) e3 preinit 
fsid cf635990-70fa-43ed-978d-96f92f9ccc92
2019-07-22 19:17:54.491351 7f2064488f40 10 mon.node-1@-1(probing) e3 check_fsid 
cluster_uuid contains 'cf635990-70fa-43ed-978d-96f92f9ccc92'
2019-07-22 19:17:54.491363 7f2064488f40 10 mon.node-1@-1(probing) e3 features 
compat={},rocompat={},incompat={1=initial feature set (~v.18),3=single paxos 
with k/v store (v0.?),4=support erasure code pools,5=new-style osdmap 
encoding,6=support isa/lrc erasure code}
2019-07-22 19:17:54.491371 7f2064488f40 10 mon.node-1@-1(probing) e3 
calc_quorum_requirements required_features 18416819765248
2019-07-22 19:17:54.491374 7f2064488f40 10 mon.node-1@-1(probing) e3 
required_features 18416819765248
2019-07-22 19:17:54.491381 7f2064488f40 10 mon.node-1@-1(probing) e3 
has_ever_joined = 1
2019-07-22 19:17:54.491411 7f2064488f40 10 mon.node-1@-1(probing) e3 
sync_last_committed_floor 0
2019-07-22 19:17:54.491413 7f2064488f40 10 mon.node-1@-1(probing) e3 init_paxos
2019-07-22 19:17:54.491516 7f2064488f40  1 mon.node-1@-1(probing).mds e0 Unable 
to load 'last_metadata'
2019-07-22 19:17:54.491558 7f2064488f40 10 mon.node-1@-1(probing).health init
2019-07-22 19:17:54.491574 7f2064488f40 10 mon.node-1@-1(probing) e3 
refresh_from_paxos
2019-07-22 19:17:54.491608 7f2064488f40  1 
mon.node-1@-1(probing).paxosservice(pgmap 21727587..21728259) refresh upgraded, 
format 0 -> 1
2019-07-22 19:17:54.491612 7f2064488f40  1 mon.node-1@-1(probing).pg v0 
on_upgrade discarding in-core PGMap
2019-07-22 19:17:54.491635 7f2064488f40 10 mon.node-1@-1(probing).pg v0 
update_from_paxos v0, read_full
2019-07-22 19:17:54.491638 7f2064488f40 10 mon.node-1@-1(probing).pg v0 
read_pgmap_meta
 ceph version 12.2.12-218-g9fd889f (9fd889fe09c652512ca78854702d5ad9bf3059bb) 
luminous (stable)
 1: (()+0x96b249) [0x7f2063e73249]
 2: (()+0x10330) [0x7f20628a0330]
 3: (gsignal()+0x37) [0x7f2060e8bc37]
 4: (abort()+0x148) [0x7f2060e8f028]
 5: (__gnu_cxx::__verbose_terminate_handler()+0x155) [0x7f206179a535]
 6: (()+0x5e6d6) [0x7f20617986d6]
 7: (()+0x5e703) [0x7f2061798703]
 8: (()+0x5e922) [0x7f2061798922]
 9: (object_stat_sum_t::decode(ceph::buffer::list::iterator&)+0x650) 
[0x7f2063c81be0]
 10: (object_stat_collection_t::decode(ceph::buffer::list::iterator&)+0x4f) 
[0x7f2063c9627f]
 11: (pg_stat_t::decode(ceph::buffer::list::iterator&)+0x1d5) [0x7f2063c96965]
 12: (PGMap::update_pg(pg_t, ceph::buffer::list&)+0xf4) [0x7f20639d93b4]
 13: (PGMonitor::read_pgmap_full()+0x161) [0x7f20639a8a81]
 14: (PGMonitor::update_from_paxos(bool*)+0x699) [0x7f20639b0479]
 15: (PaxosService::refresh(bool*)+0x1a3) [0x7f2063a55103]
 16: (Monitor::refresh_from_paxos(bool*)+0x183) [0x7f206390cd53]
 17: (Monitor::init_paxos()+0xfd) [0x7f206390d12d]
 18: (Monitor::preinit()+0xa7e) [0x7f206390dbee]
 19: (main()+0x3bf4) [0x7f206383cde4]
 20: (__libc_start_main()+0xf5) [0x7f2060e76f45]
 21: 

Re: [ceph-users] Failed to get omap key when mirroring of image is enabled

2019-07-22 Thread Jason Dillaman
On Mon, Jul 22, 2019 at 10:49 AM Ajitha Robert  wrote:
>
> No error log in rbd-mirroring except some connection timeout came once,
> Scenario 1:
>   when I create a bootable volume of 100 GB with a glance image.Image get 
> downloaded and from cinder, volume log throws with "volume is busy deleting 
> volume that has snapshot" . Image was enabled with exclusive lock, 
> journaling, layering, object-map, fast-diff and deep-flatten
> Cinder volume is in error state but the rbd image is created in primary but 
> not in secondary.

Any chance you know where in Cinder that error is being thrown? A
quick grep of the code doesn't reveal that error message. If the image
is being synced to the secondary site when you attempt to delete it,
it's possible you could hit this issue. Providing debug log messages
from librbd on the Cinder controller might also be helpful for this.

> Scenario 2:
> but when i create a 50gb volume with another glance image. Volume  get 
> created. and in the backend i could see the rbd images both in primary and 
> secondary
>
> From rbd mirror image status i found secondary cluster starts copying , and 
> syncing was struck at around 14 %... It will be in 14 % .. no progress at 
> all. should I set any parameters for this like timeout??
>
> I manually checked rbd --cluster primary object-map check ..  No 
> results came for the objects and the command was in hanging.. Thats why got 
> worried on the failed to map object key log. I couldnt even rebuild the 
> object map.

It sounds like one or more of your primary OSDs are not reachable from
the secondary site. If you run w/ "debug rbd-mirror = 20" and "debug
rbd = 20", you should be able to see the last object it attempted to
copy. From that, you could use "ceph osd map" to figure out the
primary OSD for that object.
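
(To make that concrete - the pool and object names below are placeholders, not 
values from this thread. The debug settings go in ceph.conf on the rbd-mirror 
host, then restart the daemon:)

    [client]
        debug rbd = 20
        debug rbd mirror = 20

    # then map the object the copy is stuck on to its acting set / primary OSD:
    ceph osd map <pool> rbd_data.<image-id>.<object-number>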

> the image which was in syncing mode, showed read only status in secondary.
>
>
>
> On Mon, 22 Jul 2019, 17:36 Jason Dillaman,  wrote:
>>
>> On Sun, Jul 21, 2019 at 8:25 PM Ajitha Robert  
>> wrote:
>> >
>> >  I have a rbd mirroring setup with primary and secondary clusters as peers 
>> > and I have a pool enabled image mode.., In this i created a rbd image , 
>> > enabled with journaling.
>> >
>> > But whenever i enable mirroring on the image,  I m getting error in 
>> > osd.log. I couldnt trace it out. please guide me to solve this error.
>> >
>> > I think initially it worked fine. but after ceph process restart. these 
>> > error coming
>> >
>> >
>> > Secondary.osd.0.log
>> >
>> > 2019-07-22 05:36:17.371771 7ffbaa0e9700  0  
>> > /build/ceph-12.2.12/src/cls/journal/cls_journal.cc:61: failed to get omap 
>> > key: client_a5c76849-ba16-480a-a96b-ebfdb7f6ac65
>> > 2019-07-22 05:36:17.388552 7ffbaa0e9700  0  
>> > /build/ceph-12.2.12/src/cls/journal/cls_journal.cc:472: active object set 
>> > earlier than minimum: 0 < 1
>> > 2019-07-22 05:36:17.413102 7ffbaa0e9700  0  
>> > /build/ceph-12.2.12/src/cls/journal/cls_journal.cc:61: failed to get omap 
>> > key: order
>> > 2019-07-22 05:36:23.341490 7ffbab8ec700  0  
>> > /build/ceph-12.2.12/src/cls/rbd/cls_rbd.cc:4125: error retrieving image id 
>> > for global id '9e36b9f8-238e-4a54-a055-19b19447855e': (2) No such file or 
>> > directory
>> >
>> >
>> > primary-osd.0.log
>> >
>> > 2019-07-22 05:16:49.287769 7fae12db1700  0 log_channel(cluster) log [DBG] 
>> > : 1.b deep-scrub ok
>> > 2019-07-22 05:16:54.078698 7fae125b0700  0 log_channel(cluster) log [DBG] 
>> > : 1.1b scrub starts
>> > 2019-07-22 05:16:54.293839 7fae125b0700  0 log_channel(cluster) log [DBG] 
>> > : 1.1b scrub ok
>> > 2019-07-22 05:17:04.055277 7fae12db1700  0  
>> > /build/ceph-12.2.12/src/cls/journal/cls_journal.cc:472: active object set 
>> > earlier than minimum: 0 < 1
>> >
>> > 2019-07-22 05:33:21.540986 7fae135b2700  0  
>> > /build/ceph-12.2.12/src/cls/journal/cls_journal.cc:472: active object set 
>> > earlier than minimum: 0 < 1
>> > 2019-07-22 05:35:27.447820 7fae12db1700  0  
>> > /build/ceph-12.2.12/src/cls/rbd/cls_rbd.cc:4125: error retrieving image id 
>> > for global id '8a61f694-f650-4ba1-b768-c5e7629ad2e0': (2) No such file or 
>> > directory
>>
>> Those don't look like errors, but the log level should probably be
>> reduced for those OSD cls methods. If you look at your rbd-mirror
>> daemon log, do you see any errors? That would be the important place
>> to look.
>>
>> >
>> > --
>> > Regards,
>> > Ajitha R
>> > ___
>> > ceph-users mailing list
>> > ceph-users@lists.ceph.com
>> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>>
>>
>> --
>> Jason



-- 
Jason
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] which tool to use for benchmarking rgw s3, yscb or cosbench

2019-07-22 Thread Mark Lehrer
I have had good luck with YCSB as an initial assessment of different
storage systems.  Typically I'll use this first when I am playing with
a new system, but I like to switch to the more native tools (rados
bench, cassandra-stress, etc etc) as soon as I am more comfortable.
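
(As a concrete example, a quick native baseline against a test pool might look like 
this - the pool name is a placeholder:)

    rados bench -p testpool 60 write --no-cleanup
    rados bench -p testpool 60 seq
    rados bench -p testpool 60 rand
    rados -p testpool cleanup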

And I can definitely second what mnelson said about Zipfian... it is
great for trying to get a more real-world test but if you're not
careful you'll end up benchmarking your cache instead.

Mark

On Sun, Jul 21, 2019 at 9:52 AM Wei Zhao  wrote:
>
> Hi:
>   I found cosbench is a very convenient tool for benchmaring rgw. But
> when I read papers ,  I found YCSB tool,
> https://github.com/brianfrankcooper/YCSB/tree/master/s3  . It seems
> that this is used for test cloud service , and seems a right tool for
> our service . Has  anyone tried this tool ?How is it  compared to
> cosbench ?
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] reproducable rbd-nbd crashes

2019-07-22 Thread Mike Christie
On 07/19/2019 02:42 AM, Marc Schöchlin wrote:
> We have ~500 heavy load rbd-nbd devices in our xen cluster (rbd-nbd 12.2.5, 
> kernel 4.4.0+10, centos clone) and ~20 high load krbd devices (kernel 
> 4.15.0-45, ubuntu 16.04) - we never experienced problems like this.

For this setup, do you have 257 or more rbd-nbd devices running on a
single system?

If so then you are hitting another bug where newer kernels only support
256 devices. It looks like a regression was added when mq and netlink
support was added upstream. You can create more than 256 devices, but
some devices will not be able to execute any IO. Commands sent to the
rbd-nbd device are going to always timeout and you will see the errors
in your log.

I am testing some patches for that right now.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] reproducable rbd-nbd crashes

2019-07-22 Thread Mike Christie
On 07/22/2019 06:00 AM, Marc Schöchlin wrote:
>> With older kernels no timeout would be set for each command by default,
>> so if you were not running that tool then you would not see the nbd
>> disconnect+io_errors+xfs issue. You would just see slow IOs.
>>
>> With newer kernels, like 4.15, nbd.ko always sets a per command timeout
>> even if you do not set it via a nbd ioctl/netlink command. By default
>> the timeout is 30 seconds. After the timeout period then the kernel does
>> that disconnect+IO_errors error handling which causes xfs to get errors.
>>
> Did i get you correctly: Setting a unlimited timeout should prevent crashes 
> on kernel 4.15?

It looks like with newer kernels there is no way to turn it off.

You can set it really high. There is no max check and so it depends on
various calculations and what some C types can hold and how your kernel
is compiled. You should be able to set the timer to an hour.

> 
> For testing purposes i set the timeout to unlimited ("nbd_set_ioctl /dev/nbd0 
> 0", on already mounted device).
> I re-executed the problem procedure and discovered that the 
> compression-procedure crashes not at the same file, but crashes 30 seconds 
> later with the same crash behavior.
> 

0 will cause the default timeout of 30 secs to be used.

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Iscsi in the nautilus Dashboard

2019-07-22 Thread Kaspar Bosma

Hi all,

That was not the most recent. This is it (3.2.4): 
https://2.chacra.ceph.com/r/ceph-iscsi/master/8a3967698257e1b49a9d554847b84418c15da902/centos/7/flavors/default/

Kaspar
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Failed to get omap key when mirroring of image is enabled

2019-07-22 Thread Jason Dillaman
On Sun, Jul 21, 2019 at 8:25 PM Ajitha Robert  wrote:
>
>  I have a rbd mirroring setup with primary and secondary clusters as peers 
> and I have a pool enabled image mode.., In this i created a rbd image , 
> enabled with journaling.
>
> But whenever i enable mirroring on the image,  I m getting error in osd.log. 
> I couldnt trace it out. please guide me to solve this error.
>
> I think initially it worked fine. but after ceph process restart. these error 
> coming
>
>
> Secondary.osd.0.log
>
> 2019-07-22 05:36:17.371771 7ffbaa0e9700  0  
> /build/ceph-12.2.12/src/cls/journal/cls_journal.cc:61: failed to get omap 
> key: client_a5c76849-ba16-480a-a96b-ebfdb7f6ac65
> 2019-07-22 05:36:17.388552 7ffbaa0e9700  0  
> /build/ceph-12.2.12/src/cls/journal/cls_journal.cc:472: active object set 
> earlier than minimum: 0 < 1
> 2019-07-22 05:36:17.413102 7ffbaa0e9700  0  
> /build/ceph-12.2.12/src/cls/journal/cls_journal.cc:61: failed to get omap 
> key: order
> 2019-07-22 05:36:23.341490 7ffbab8ec700  0  
> /build/ceph-12.2.12/src/cls/rbd/cls_rbd.cc:4125: error retrieving image id 
> for global id '9e36b9f8-238e-4a54-a055-19b19447855e': (2) No such file or 
> directory
>
>
> primary-osd.0.log
>
> 2019-07-22 05:16:49.287769 7fae12db1700  0 log_channel(cluster) log [DBG] : 
> 1.b deep-scrub ok
> 2019-07-22 05:16:54.078698 7fae125b0700  0 log_channel(cluster) log [DBG] : 
> 1.1b scrub starts
> 2019-07-22 05:16:54.293839 7fae125b0700  0 log_channel(cluster) log [DBG] : 
> 1.1b scrub ok
> 2019-07-22 05:17:04.055277 7fae12db1700  0  
> /build/ceph-12.2.12/src/cls/journal/cls_journal.cc:472: active object set 
> earlier than minimum: 0 < 1
>
> 2019-07-22 05:33:21.540986 7fae135b2700  0  
> /build/ceph-12.2.12/src/cls/journal/cls_journal.cc:472: active object set 
> earlier than minimum: 0 < 1
> 2019-07-22 05:35:27.447820 7fae12db1700  0  
> /build/ceph-12.2.12/src/cls/rbd/cls_rbd.cc:4125: error retrieving image id 
> for global id '8a61f694-f650-4ba1-b768-c5e7629ad2e0': (2) No such file or 
> directory

Those don't look like errors, but the log level should probably be
reduced for those OSD cls methods. If you look at your rbd-mirror
daemon log, do you see any errors? That would be the important place
to look.

>
> --
> Regards,
> Ajitha R
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



-- 
Jason
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Iscsi in the nautilus Dashboard

2019-07-22 Thread Kaspar Bosma

Hi Brent,

You may want to have a look at the repos at shaman.ceph.com.

The latest (3.2.2) packaged version of Ceph iSCSI is located here:
https://4.chacra.ceph.com/r/ceph-iscsi/master/ff5e6873c43ab6828d3f7264526100b95a7e3954/centos/7/flavors/default/noarch/

You can also find related package repos for the tcmu-runner and ceph-iscsi-cli 
projects.

Regards, Kaspar
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] New best practices for osds???

2019-07-22 Thread Vitaliy Filippov
OK, I meant "it may help performance" :) The main point is that we had at  
least one case of data loss due to some Adaptec controller in RAID0 mode,  
discussed recently in our Ceph chat...


--
With best regards,
  Vitaliy Filippov
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] reproducable rbd-nbd crashes

2019-07-22 Thread Marc Schöchlin
Hello Mike,

I attached inline comments.

On 19.07.19 at 22:20, Mike Christie wrote:
>
>> We have ~500 heavy load rbd-nbd devices in our xen cluster (rbd-nbd 12.2.5, 
>> kernel 4.4.0+10, centos clone) and ~20 high load krbd devices (kernel 
>> 4.15.0-45, ubuntu 16.04) - we never experienced problems like this.
>> We only experience problems like this with rbd-nbd > 12.2.5 on ubuntu 16.04 
>> (kernel 4.15) or ubuntu 18.04 (kernel 4.15) with erasure encoding or without.
>>
> Are you only using the nbd_set_timeout tool for this newer kernel combo
> to try and workaround the disconnect+io_errors problem in newer kernels,
> or did you use that tool to set a timeout with older kernels? I am just
> trying to clarify the problem, because the kernel changed behavior and I
> am not sure if your issue is the very slow IO or that the kernel now
> escalates its error handler by default.
I only use nbd_set_timeout with the 4.15 kernels on Ubuntu 16.04 and 18.04 
because we experienced problems with "fstrim" activities a few weeks ago.
Adding timeouts of 60 seconds seemed to help, but did not solve the problem 
completely.

The problem situation described in my request is a different situation, but it 
seems to have the same root cause.

Not using the nbd_set_timeout tool results in the same, but more prominent, 
problem situations :-)
(tested by unloading the nbd module and re-executing the test)
>
> With older kernels no timeout would be set for each command by default,
> so if you were not running that tool then you would not see the nbd
> disconnect+io_errors+xfs issue. You would just see slow IOs.
>
> With newer kernels, like 4.15, nbd.ko always sets a per command timeout
> even if you do not set it via a nbd ioctl/netlink command. By default
> the timeout is 30 seconds. After the timeout period then the kernel does
> that disconnect+IO_errors error handling which causes xfs to get errors.
>
Did I get you correctly: setting an unlimited timeout should prevent crashes on 
kernel 4.15?

For testing purposes I set the timeout to unlimited ("nbd_set_ioctl /dev/nbd0 
0" on an already mounted device).
I re-executed the problem procedure and discovered that the compression 
procedure does not crash at the same file, but crashes 30 seconds later with 
the same crash behavior.

Regards
Marc


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] New best practices for osds???

2019-07-22 Thread Paul Emmerich
On Mon, Jul 22, 2019 at 12:52 PM Vitaliy Filippov 
wrote:

> It helps performance,


Not necessarily, I've seen several setups where disabling the cache
increases performance


Paul


> but it can also lead to data loss if the raid
> controller is crap (not flushing data correctly)
>
> --
> With best regards,
>Vitaliy Filippov
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Iscsi in the nautilus Dashboard

2019-07-22 Thread Paul Emmerich
Version 9 is the fqdn stuff which was introduced in 3.1.
Use 3.2 as 3.1 is pretty broken.

Paul

-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90


On Mon, Jul 22, 2019 at 3:24 AM Brent Kennedy  wrote:

> I have a test cluster running centos 7.6 setup with two iscsi gateways (
> per the requirement ).  I have the dashboard setup in nautilus ( 14.2.2 )
> and I added the iscsi gateways via the command.  Both show down and when I
> go to the dashboard it states:
>
>
>
> “ Unsupported `ceph-iscsi` config version. Expected 9 but found 8.  “
>
>
>
> Both iscsi gateways were setup from scratch since the latest and greatest
> packages required for ceph iscsi install are not available in the centos
> repositories.  Is 3.0 not considered version 9?  ( did I do something
> wrong? ) Why is it called/detected as version 8 when its version 3?
>
>
>
> I also wondering, the package versions listed as required in the nautilus
> docs(http://docs.ceph.com/docs/nautilus/rbd/iscsi-target-cli/)  state
> x.x.x or NEWER package, but when I try to add a gateway gwcli complains
> about the tcmu-runner and targetcli versions and I have to use the
>
> Skipchecks=true option when adding them.
>
>
>
> Another thing came up, might be doing it wrong as well:
>
> Added a disk, then added the client, then tried to add the auth using the
> auth command and it states: “Failed to update the client's auth: Invalid
> password”
>
>
>
> Actual output:
>
> /iscsi-target...erpix:backup1> auth username=test password=test
>
> CMD: ../hosts/ auth *
>
> username=test, password=test, mutual_username=None, mutual_password=None
>
> CMD: ../hosts/ auth *
>
> auth to be set to username='test', password='test',
> mutual_username='None', mutual_password='None' for
> 'iqn.2019-07.com.somgthing:backup1'
>
> Failed to update the client's auth: Invalid username
>
>
>
> Did I miss something in the setup doc?
>
>
>
> Installed packages:
>
> rtslib:  wget
> https://github.com/open-iscsi/rtslib-fb/archive/v2.1.fb69.tar.gz
>
> target-cli: wget
> https://github.com/open-iscsi/targetcli-fb/archive/v2.1.fb49.tar.gz
>
> tcmu-runner: wget
> https://github.com/open-iscsi/tcmu-runner/archive/v1.4.1.tar.gz
>
> ceph-iscsi: wget https://github.com/ceph/ceph-iscsi/archive/3.0.tar.gz
>
> configshell: wget
> https://github.com/open-iscsi/configshell-fb/archive/v1.1.fb25.tar.gz
>
>
>
> Other bits I installed as part of this:
>
> yum install epel-release python-pip python-devel -y
>
> yum groupinstall "Development Tools" -y
>
> python -m pip install --upgrade pip setuptools wheel
>
> pip install netifaces cryptography flask
>
>
>
>
>
> Any helps or pointer would be greatly appreciated!
>
>
>
> -Brent
>
>
>
> Existing Clusters:
>
> Test: Nautilus 14.2.2 with 3 osd servers, 1 mon/man, 1 gateway, 2 iscsi
> gateways ( all virtual on nvme )
>
> US Production(HDD): Nautilus 14.2.1 with 11 osd servers, 3 mons, 4
> gateways behind haproxy LB
>
> UK Production(HDD): Luminous 12.2.11 with 25 osd servers, 3 mons/man, 3
> gateways behind haproxy LB
>
> US Production(SSD): Luminous 12.2.11 with 6 osd servers, 3 mons/man, 3
> gateways behind haproxy LB
>
>
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] New best practices for osds???

2019-07-22 Thread Vitaliy Filippov
It helps performance, but it can also lead to data loss if the raid  
controller is crap (not flushing data correctly)


--
With best regards,
  Vitaliy Filippov
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Future of Filestore?

2019-07-22 Thread Vitaliy Filippov

> Linear reads, `hdparm -t /dev/vda`.


Check if you have `cache=writeback` enabled in your VM options.

If it's enabled but you still get 5mb/s then try to benchmark your cluster  
with fio -ioengine=rbd from outside a VM.


Like

fio -ioengine=rbd -name=test -bs=4M -iodepth=16 -rw=read -pool=rpool  
-runtime=60 -rbdname=testimg


--
With best regards,
  Vitaliy Filippov
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Future of Filestore?

2019-07-22 Thread Stuart Longland
On 22/7/19 7:39 pm, Vitaliy Filippov wrote:
> 5MB/s in what mode?

Linear reads, `hdparm -t /dev/vda`.

> For linear writes, that definitely means some kind of misconfiguration.
> For random writes... there's a handbrake in Bluestore which makes random
> writes run at half speed in HDD-only setups :)
> https://github.com/ceph/ceph/pull/26909

Sounds like BlueStore may be worth a look once I move to Ceph v14 some
time in the future, which is eventually on my TO-DO list.
-- 
Stuart Longland (aka Redhatter, VK4MSL)

I haven't lost my mind...
  ...it's backed up on a tape somewhere.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Future of Filestore?

2019-07-22 Thread Stuart Longland
On 22/7/19 7:13 pm, Marc Roos wrote:
> 
>  >> Reverting back to filestore is quite a lot of work and time again. 
>  >> Maybe see first if with some tuning of the vms you can get better 
> results?
>  >
>  >None of the VMs are particularly disk-intensive.  There's two users 
> accessing the system over a WiFi network for email, and some HTTP/SMTP 
> traffic coming in via an ADSL2 Internet connection.
>  >
>  >If Bluestore can't manage this, then I'd consider it totally worthless 
> in any enterprise installation -- so clearly something is wrong.
> 
> I have a cluster mainly intended for backups to CephFS: 4 nodes, SATA 
> disks, mostly 5400 rpm. Because the cluster is doing nothing, I 
> decided to put VMs on it. I am running 15 VMs without problems on 
> the HDD pool, and I am going to move more to it. One of them is a macOS 
> machine; a fio test inside it once gave me 917 IOPS at 4k random 
> reads (technically not possible, I would say; I have mostly default 
> configurations in libvirt).

Well, that is promising.

I did some measurements of the raw disk performance: I get about
30 MB/s according to `hdparm`, so whilst this isn't going to set the
world on fire, it's "decent" for my needs.

The only thing I can think of is the fact that `hdparm` does a
sequential read, whereas BlueStore operation would be more "random", so
seek times come into play.

I've now migrated two of my nodes to FileStore/XFS, with the journal
on-disk (oddly enough, it won't let me move it to the SSD like I did
last time), and I'm seeing fewer I/O issues now, although things are
still slow (3 nodes are still on BlueStore).

I think the fact that my nodes have plenty of RAM between them (>8GB,
one with 32GB) helps here.

The BlueStore settings are at their defaults, which means it should be
tuning the cache size used for BlueStore … maybe this isn't working as
it should on a cluster as small as this.
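
If you'd rather pin it down than rely on the auto-tuning, the per-OSD memory/cache knobs can be set in ceph.conf. A rough sketch for low-RAM nodes follows; which option applies depends on the Ceph release, so treat these as illustrative values rather than recommendations:

[osd]
# recent releases (late Luminous and newer): overall memory target per OSD daemon, in bytes
osd memory target = 1610612736
# older releases: fixed BlueStore cache size for HDD-backed OSDs, in bytes
bluestore cache size hdd = 536870912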

>  >
>  >> What you also can try is for io intensive vm's add an ssd pool?
>  >
>  >How well does that work in a cluster with 0 SSD-based OSDs?
>  >
>  >For 3 of the nodes, the cases I'm using for the servers can fit two 
> 2.5"
>  >drives.  I have one 120GB SSD for the OS, that leaves one space spare 
> for the OSD.  
> 
> 
> I think this could be your bottleneck: I have 31 drives, so the load is 
> spread across 31 (hopefully). If you have only 3 drives, you have 
> 3 x 60 IOPS to share amongst your VMs. 
> I am getting the impression that Ceph development is not really 
> interested in setups quite different from the advised standards. I once 
> made an attempt to get things better working for 1Gb adapters[0].

Yeah, unfortunately I'll never be able to cram 31 drives into this
cluster.  I am considering how I might add more, and right now the
immediate thought is to use m.2 SATA SSDs in USB 3 cases.

This gives me something a little bigger than a thumb-drive that is
bus-powered and external to the case, so I don't have the thermal and
space issues of mounting a HDD in there: they're small and light-weight
so they can just dangle from the supplied USB3 cable.

I'll have to do some research though on how mixing SSDs and HDDs would
work.  I need more space than SSDs alone can provide in a cost-effective
manner so going SSD only just isn't an option here, but if I can put
them into the same pool with the HDDs and have them act as a "cache" for
the more commonly read/written objects, that could help.

In this topology though, I may only be using 256GB or 512GB SSDs, so much
less storage on the SSDs than on the HDDs, which likely won't work that
well for tiering (https://ceph.com/planet/ceph-hybrid-storage-tiers/).
So it'll need some planning and home-work. :-)
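
For reference, the two usual approaches are either a separate all-SSD pool selected via CRUSH device classes, or a small SSD pool acting as a cache tier in front of the HDD pool. A rough sketch of both (pool and rule names are placeholders, and cache tiering in particular has enough caveats that it needs reading up on before use):

# Option A: device-class based rule plus an all-SSD pool (Luminous and later)
ceph osd crush rule create-replicated ssd-rule default host ssd
ceph osd pool create fastpool 64 64 replicated ssd-rule

# Option B: SSD pool as a writeback cache tier in front of an existing HDD pool
ceph osd tier add hddpool ssdcache
ceph osd tier cache-mode ssdcache writeback
ceph osd tier set-overlay hddpool ssdcache
ceph osd pool set ssdcache hit_set_type bloom
ceph osd pool set ssdcache target_max_bytes 200000000000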

FileStore/XFS looks to be improving the situation just a little, so if I
have to hold back on that for a bit, that's fine.  It'll give me time to
work on the next step.
-- 
Stuart Longland (aka Redhatter, VK4MSL)

I haven't lost my mind...
  ...it's backed up on a tape somewhere.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] which tool to use for benchmarking rgw s3, yscb or cosbench

2019-07-22 Thread Lars Marowsky-Bree
On 2019-07-21T23:51:41, Wei Zhao  wrote:

> Hi:
>   I found cosbench is a very convenient tool for benchmarking rgw. But
> when I read papers, I found the YCSB tool,
> https://github.com/brianfrankcooper/YCSB/tree/master/s3 . It seems
> that this is used for testing cloud services, and seems like the right
> tool for our service. Has anyone tried this tool? How does it compare
> to cosbench?

Depending on what you want to test/benchmark, there's also a (somewhat
simple) S3/Swift/DAV backend in the fio tool.

While that only implements somewhat straightforward GET/PUT/DELETE
operations for IO, it gives you a lot of control over the benchmark
parameters via the fio tool itself, and is very low overhead.

Another benefit is that it allows you to more or less directly compare
results across all protocols, since fio supports all the ways of
accessing Ceph (file, block, object, librados, kRBD, iSCSI, librbd
...).
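
A minimal S3 job file for it looks roughly like this (endpoint, bucket, object and credentials are placeholders, and the option names should be double-checked against the fio version actually installed):

[global]
ioengine=http
http_mode=s3
https=off
http_host=rgw.example.com:7480
http_s3_keyid=${S3_ACCESS_KEY}
http_s3_key=${S3_SECRET_KEY}
filename=/mybucket/fio-test-object
direct=1

[put-4m]
rw=write
bs=4m
size=256m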


Regards,
Lars

-- 
SUSE Linux GmbH, GF: Felix Imendörffer, Mary Higgins, Sri Rasiah, HRB 21284 (AG 
Nürnberg)
"Architects should open possibilities and not determine everything." (Ueli 
Zbinden)
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Future of Filestore?

2019-07-22 Thread Vitaliy Filippov

5MB/s in what mode?

For linear writes, that definitely means some kind of misconfiguration.  
For random writes... there's a handbrake in Bluestore which makes random  
writes run at half speed in HDD-only setups :)  
https://github.com/ceph/ceph/pull/26909


And if you push that handbrake down you actually get better random writes  
on HDDs with bluestore, too.
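
To see the effect, a small random-write run with the rbd engine (same placeholder pool/image names as the read example earlier in the thread) is the easiest before/after comparison:

fio -ioengine=rbd -name=test -bs=4k -iodepth=32 -rw=randwrite -pool=rpool  
-runtime=60 -rbdname=testimg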


--
With best regards,
  Vitaliy Filippov
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Future of Filestore?

2019-07-22 Thread Marc Roos


 >> Reverting back to filestore is quite a lot of work and time again. 
 >> Maybe see first if with some tuning of the vms you can get better 
results?
 >
 >None of the VMs are particularly disk-intensive.  There's two users 
accessing the system over a WiFi network for email, and some HTTP/SMTP 
traffic coming in via an ADSL2 Internet connection.
 >
 >If Bluestore can't manage this, then I'd consider it totally worthless 
in any enterprise installation -- so clearly something is wrong.


I have a cluster mainly intended for backups to CephFS: 4 nodes, SATA 
disks, mostly 5400 rpm. Because the cluster is doing nothing, I 
decided to put VMs on it. I am running 15 VMs without problems on 
the HDD pool, and I am going to move more to it. One of them is a macOS 
machine; a fio test inside it once gave me 917 IOPS at 4k random 
reads (technically not possible, I would say; I have mostly default 
configurations in libvirt).


 >
 >> What you also can try is for io intensive vm's add an ssd pool?
 >
 >How well does that work in a cluster with 0 SSD-based OSDs?
 >
 >For 3 of the nodes, the cases I'm using for the servers can fit two 
2.5"
 >drives.  I have one 120GB SSD for the OS, that leaves one space spare 
for the OSD.  


I think this could be your bottleneck: I have 31 drives, so the load is 
spread across 31 (hopefully). If you have only 3 drives, you have 
3 x 60 IOPS to share amongst your VMs. 
I am getting the impression that Ceph development is not really 
interested in setups quite different from the advised standards. I once 
made an attempt to get things better working for 1Gb adapters[0].

 >
 >I since added two new nodes, which are Intel NUCs with m.2 SATA SSDs 
for the OS and like the other nodes have a single 2.5" drive bay.
 >
 >This is being done as a hobby and a learning exercise I might add -- 
so while I have spent a lot of money on this, the funds I have to throw 
at this are not infinite.


Same here ;) 


 >
 >> I moved
 >> some exchange servers on them. Tuned down the logging, because that 
is 
 >> writing constantly to disk.
 >> With such setup you are at least secured for the future.
 >
 >The VMs I have are mostly Linux (Gentoo, some Debian/Ubuntu), with a 
few OpenBSD VMs for things like routers between virtual networks.
 >

[0] https://www.mail-archive.com/ceph-users@lists.ceph.com/msg35474.html
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] how to debug slow requests

2019-07-22 Thread Maximilien Cuony

Hello,

Your issue looks like mine - I had ops stuck with the same status: check 
"Random slow requests without any load" in this month's list archive.


Bests,

Le 7/20/19 à 6:06 PM, Wei Zhao a écrit :

Hi ceph users:
I was doing a write benchmark and found that some IO gets blocked for a
very long time. The following log shows one op; it seems to be waiting
for a replica to finish. My Ceph version is 12.2.4, and the pool is 3+2 EC.
Can anyone give me some advice on how I should debug this next?

{
 "ops": [
 {
 "description": "osd_op(client.17985.0:670679 39.18
39:1a63fc5c:::benchmark_data_SH-IDC1-10-5-37-174_2917453_object670678:head
[set-alloc-hint object_size 1048576 write_size 1048576,write
0~1048576] snapc 0=[] ondisk+write+known_if_redirected e1135)",
 "initiated_at": "2019-07-20 23:13:18.725466",
 "age": 329.248875,
 "duration": 329.248901,
 "type_data": {
 "flag_point": "waiting for sub ops",
 "client_info": {
 "client": "client.17985",
 "client_addr": "10.5.137.174:0/1544466091",
 "tid": 670679
 },
 "events": [
 {
 "time": "2019-07-20 23:13:18.725466",
 "event": "initiated"
 },
 {
 "time": "2019-07-20 23:13:18.726585",
 "event": "queued_for_pg"
 },
 {
 "time": "2019-07-20 23:13:18.726606",
 "event": "reached_pg"
 },
 {
 "time": "2019-07-20 23:13:18.726752",
 "event": "started"
 },
 {
 "time": "2019-07-20 23:13:18.726842",
 "event": "waiting for subops from 4"
 },
 {
 "time": "2019-07-20 23:13:18.743134",
 "event": "op_commit"
 },
 {
 "time": "2019-07-20 23:13:18.743137",
 "event": "op_applied"
 }
 ]
 }
 },
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
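
In case it helps with the digging: dumps like the one above come from the OSD admin socket, and the usual next step is to look at both the primary and the sub-op OSD it is waiting on (osd.4 in that dump). Run these on the host where the respective OSD lives; the OSD ids below are just examples:

# which OSDs are currently reporting slow requests
ceph health detail
# ops currently stuck on the primary OSD
ceph daemon osd.12 dump_ops_in_flight
# recently completed slow ops, with per-event timings
ceph daemon osd.12 dump_historic_ops
# check the replica side named in "waiting for subops from 4"
ceph daemon osd.4 dump_ops_in_flight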


--
Maximilien Cuony

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com