Re: [ceph-users] ceph + vmware

2016-07-15 Thread Jake Young
I had some odd issues like that due to an MTU mismatch. Keep in mind that the vSwitch and the vmkernel port have independent MTU settings. Verify that you can ping with large packets, without fragmentation, between your host and the iSCSI target. If that's not it, you can try disabling the VAAI options to see i
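
A quick way to verify this from the ESXi side is to send non-fragmented jumbo pings from the host to the target (a sketch; the vmkernel interface name and the 8972-byte payload for a 9000 MTU are assumptions for a jumbo-frame setup):

  # -d = don't fragment, -s = payload size (9000 MTU minus 28 bytes of headers),
  # -I = which vmkernel port to source the ping from
  vmkping -d -s 8972 -I vmk1 <iscsi-target-ip>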

Re: [ceph-users] ceph + vmware

2016-07-15 Thread Oliver Dzombic
Hi, I am currently trying this out. My tgt config: # cat tgtd.conf # The default config file include /etc/tgt/targets.conf # Config files from other packages etc. include /etc/tgt/conf.d/*.conf nr_iothreads=128 - # cat iqn.2016-07.tgt.esxi-test.conf initiator-address ALL scsi_
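
For comparison, a minimal RBD-backed target definition for tgt might look like this sketch (IQN, pool and image names are made up; bs-type rbd requires tgt built with Ceph support):

  # /etc/tgt/conf.d/iqn.2016-07.tgt.esxi-test.conf (hypothetical)
  <target iqn.2016-07.tgt.esxi-test:lun0>
      driver iscsi
      bs-type rbd
      backing-store rbd/esxi-test-image    # pool/image
      initiator-address ALL
  </target>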

[ceph-users] Ceph noob - getting error when I try to "ceph-deploy osd activate" on a node

2016-07-15 Thread Will Dennis
Hi all, Background: Completely new to Ceph; trying it out on three VMs I have, following install instructions found at http://docs.ceph.com/docs/master/start/quick-ceph-deploy/ I am at the point in the docs where it says to run 'ceph-
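
For context, the quick-start at that point prepares and then activates a directory-based OSD roughly like this (a sketch; node name and path are placeholders):

  # run from the admin node, in the directory holding ceph.conf and the keys
  ceph-deploy osd prepare ceph-node2:/var/local/osd0
  ceph-deploy osd activate ceph-node2:/var/local/osd0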

Re: [ceph-users] Ceph RBD object-map and discard in VM

2016-07-15 Thread Vaibhav Bhembre
I enabled rbd_tracing on the HV and restarted the guest so as to pick up the new configuration. The changed value of *rbd_tracing* was confirmed via the admin socket. I am still unable to see any trace. lsof -p does not show *librbd_tp.so* loaded despite multiple restarts. Only *librbd.so* seems
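
A way to double-check on the hypervisor (a sketch; the admin socket path and PIDs are placeholders) is to query the client admin socket and look for the tracepoint library in the process:

  # confirm the value the running client actually uses
  ceph --admin-daemon /var/run/ceph/ceph-client.<id>.<pid>.asok config get rbd_tracing
  # check whether the LTTng tracepoint module was loaded
  lsof -p <qemu-pid> | grep librbd_tp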

Re: [ceph-users] cephfs-journal-tool lead to data missing and show up

2016-07-15 Thread Gregory Farnum
On Thu, Jul 14, 2016 at 1:42 AM, txm wrote: > I am a user of cephfs. > > Recently I ran into a problem while using the cephfs-journal-tool. > > Some strange things happened, described below. > > 1. After using cephfs-journal-tool and cephfs-table-tool (I ran into the > "negative object nums" issue,
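
For readers not familiar with the tools being discussed, the operations involved look roughly like this (a sketch only; these commands rewrite MDS metadata and should not be run on a healthy filesystem):

  cephfs-journal-tool journal inspect                   # check the MDS journal for damage
  cephfs-journal-tool event recover_dentries summary    # replay recoverable events into the backing store
  cephfs-table-tool all reset session                   # reset the session table (the destructive step)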

Re: [ceph-users] Ceph RBD object-map and discard in VM

2016-07-15 Thread Jason Dillaman
There appears to be a hole in the documentation. You now have to set a configuration option to enable tracing: rbd_tracing = true This will cause librbd.so to dynamically load the tracing module librbd_tp.so (which links against LTTng-UST). On Fri, Jul 15, 2016 at 1:47 PM, Vaibhav Bhembre w
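
In ceph.conf terms that means adding the option to the client section on the hypervisor and restarting the client so librbd re-reads it, e.g. (a sketch):

  [client]
      rbd tracing = true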

Re: [ceph-users] Ceph RBD object-map and discard in VM

2016-07-15 Thread Vaibhav Bhembre
I followed the steps mentioned in [1] but somehow I am unable to see any traces to continue with its step 2. There are no errors seen when performing operations mentioned in step 1. In my setup I am running lttng commands on the HV where my VM has the RBD device attached. My lttng version is a
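
For reference, step 1 of that procedure boils down to creating a userspace LTTng session and enabling the librbd events, roughly (a sketch; the session name is arbitrary):

  lttng create rbd-trace
  lttng enable-event --userspace 'librbd:*'
  lttng start
  # ... run the RBD workload to be traced ...
  lttng stop
  lttng view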

Re: [ceph-users] ceph + vmware

2016-07-15 Thread Nick Fisk
> -Original Message- > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of > Oliver Dzombic > Sent: 15 July 2016 08:35 > To: ceph-users@lists.ceph.com > Subject: Re: [ceph-users] ceph + vmware > > Hi Nick, > > yeah i understand the point and message, i wont do it :-)

[ceph-users] multitenant ceph (RBD)

2016-07-15 Thread George Shuklin
I looked into Ceph multitenancy, and found almost none: caps mon 'allow r', osd 'allow rwx pool some_pool' give a tenant the ability to see the pool list, the osd tree, pg_dump, and even the list of objects in other pools (via the rados ls command). If I want to give a tenant a specific RBD for r/w (for his root bareme
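
A per-tenant key that at least confines writes to one pool can be created like this (a sketch; client and pool names are made up, and the mon 'allow r' part is exactly what still exposes the cluster-wide metadata complained about above):

  ceph auth get-or-create client.tenant1 \
      mon 'allow r' \
      osd 'allow rwx pool=tenant1-rbd' \
      -o /etc/ceph/ceph.client.tenant1.keyring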

Re: [ceph-users] setting crushmap while creating pool fails

2016-07-15 Thread Shinobu Kinjo
Thank you for that report. Sorry for that -; shinobu On Fri, Jul 15, 2016 at 4:47 PM, Oliver Dzombic wrote: > Hi Shinobu, > > > osd_pool_default_crush_replicated_ruleset = 2 > > Thats already set, and ignored. > > If your crushmap does not start with ruleset id 0 you will see this > missbehavi

[ceph-users] Antw: Re: SSD Journal

2016-07-15 Thread Steffen Weißgerber
>>> Christian Balzer schrieb am Donnerstag, 14. Juli 2016 um 17:06: > Hello, > > On Thu, 14 Jul 2016 13:37:54 +0200 Steffen Weißgerber wrote: > >> >> >> >>> Christian Balzer schrieb am Donnerstag, 14. Juli 2016 um >> 05:05: >> >> Hello, >> >> > Hello, >> > >> > On Wed, 13 Jul 2016 09:34

Re: [ceph-users] Lessons learned upgrading Hammer -> Jewel

2016-07-15 Thread Wido den Hollander
> Op 15 juli 2016 om 10:48 schreef Mart van Santen : > > > > Hi Wido, > > Thank you, we are currently in the same process so this information is > very usefull. Can you share why you upgraded from hammer directly to > jewel, is there a reason to skip infernalis? So, I wonder why you didn't > d

Re: [ceph-users] Lessons learned upgrading Hammer -> Jewel

2016-07-15 Thread Christian Balzer
Hello, On Fri, 15 Jul 2016 10:48:40 +0200 Mart van Santen wrote: > > Hi Wido, > > Thank you, we are currently in the same process so this information is > very usefull. Can you share why you upgraded from hammer directly to > jewel, is there a reason to skip infernalis? So, I wonder why you did

Re: [ceph-users] Lessons learned upgrading Hammer -> Jewel

2016-07-15 Thread Mykola Dvornik
I would also advise people to mind SELinux if it is enabled on the OSD nodes. The relabeling should be done as part of the upgrade, and it is a rather time-consuming process. -Original Message- From: Mart van Santen To: ceph-users@lists.ceph.com Subject: Re: [ceph-users] Lessons
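
With SELinux enforcing, the package upgrade relabels the OSD data directories; the equivalent manual step is roughly (a sketch; runtime grows with the number of objects on the OSD):

  getenforce                        # check the current SELinux mode
  restorecon -R -v /var/lib/ceph    # relabel the Ceph data directories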

Re: [ceph-users] Lessons learned upgrading Hammer -> Jewel

2016-07-15 Thread Sean Redmond
Hi Mart, I too have followed the upgrade from hammer to jewel. I think it is pretty well accepted to upgrade between LTS releases (H>J), skipping the 'stable' release (I) in the middle. Thanks On Fri, Jul 15, 2016 at 9:48 AM, Mart van Santen wrote: > > Hi Wido, > > Thank you, we are currently in th

Re: [ceph-users] Lessons learned upgrading Hammer -> Jewel

2016-07-15 Thread Mart van Santen
Hi Wido, Thank you, we are currently in the same process, so this information is very useful. Can you share why you upgraded from hammer directly to jewel; is there a reason to skip infernalis? I wonder why you didn't do a hammer->infernalis->jewel upgrade, as that seems the logical path for

Re: [ceph-users] Slow request on node reboot

2016-07-15 Thread Luis Ramirez
Hi Chris, Yes, all pools have size=3 and min_size=2. The clients are RBD only. I did a shutdown to apply a firmware upgrade. Kr. Luis On 15/07/16 09:05, Christian Balzer wrote: Hello, On Fri, 15 Jul 2016 00:28:37 +0200 Luis Ramirez wrote: Hi, I've a cluster with 3 MON nodes and 5 OSD
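
Not a fix for the slow requests as such, but for a planned reboot like this it is common to stop the cluster from marking the node out and rebalancing while it is down (a sketch):

  ceph osd set noout      # before rebooting the OSD node
  # ... reboot / firmware upgrade ...
  ceph osd unset noout    # once the OSDs are back up and in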

Re: [ceph-users] setting crushmap while creating pool fails

2016-07-15 Thread Oliver Dzombic
Hi Shinobu, > osd_pool_default_crush_replicated_ruleset = 2 That's already set, and ignored. If your crushmap does not start with ruleset id 0 you will see this misbehaviour. Also, your mon servers will crash. See http://tracker.ceph.com/issues/16653 -- Mit freundlichen Gruessen / Best reg
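
To see which ruleset ids a compiled crushmap actually contains, the usual round trip looks like this (a sketch):

  ceph osd getcrushmap -o crushmap.bin        # fetch the binary crushmap
  crushtool -d crushmap.bin -o crushmap.txt   # decompile; check the "ruleset" lines
  ceph osd crush rule dump                    # or inspect the rules as JSON directly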

Re: [ceph-users] New to Ceph - osd autostart problem

2016-07-15 Thread Oliver Dzombic
Hi, Partition GUID code: 4FBD7E29-9D25-41B8-AFD0-062C0CEFF05D (Unknown) Partition unique GUID: 79FD1B30-F5AA-4033-BA03-8C7D0A7D49F5 First sector: 256 (at 1024.0 KiB) Last sector: 976754640 (at 3.6 TiB) Partition size: 976754385 sectors (3.6 TiB) Attribute flags: Partition name: 'c
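
For context, 4FBD7E29-9D25-41B8-AFD0-062C0CEFF05D is the partition type GUID that the ceph-disk udev rules match to auto-activate an OSD data partition at boot (gdisk merely prints its name as 'Unknown'); it can be checked or set with sgdisk, e.g. (a sketch; device and partition number are placeholders):

  sgdisk --info=1 /dev/sdX      # show the type GUID of partition 1
  sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d /dev/sdX   # mark as Ceph OSD data
  partprobe /dev/sdX && udevadm trigger       # let udev re-evaluate the partition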

Re: [ceph-users] ceph + vmware

2016-07-15 Thread Oliver Dzombic
Hi Nick, yeah, I understand the point and the message, I won't do it :-) I recently asked myself: how do I test whether the cache is enabled or not? What I found requires a client to be connected to an RBD device. But we don't have that. Is there any way to ask the Ceph server whether the cache is enabled or not? Its dis
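
One way to see what a librbd client on a given host would use, without mapping an RBD device, is to ask the config machinery directly (a sketch; with no [client] override this just shows the compiled-in default):

  ceph-conf --show-config-value rbd_cache    # value a client would read from this host's ceph.conf
  ceph --show-config | grep rbd_cache        # defaults plus overrides as the ceph CLI resolves them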

Re: [ceph-users] ceph + vmware

2016-07-15 Thread Nick Fisk
> -Original Message- > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of > Oliver Dzombic > Sent: 12 July 2016 20:59 > To: ceph-users@lists.ceph.com > Subject: Re: [ceph-users] ceph + vmware > > Hi Jack, > > thank you! > > What has reliability to do with rbd_cache

Re: [ceph-users] Slow request on node reboot

2016-07-15 Thread Christian Balzer
Hello, On Fri, 15 Jul 2016 00:28:37 +0200 Luis Ramirez wrote: > Hi, > > I've a cluster with 3 MON nodes and 5 OSD nodes. If i make a reboot > of 1 of the osd nodes i get slow request waiting for active. > > 2016-07-14 19:39:07.996942 osd.33 10.255.128.32:6824/7404 888 : cluster > [WRN]