[ceph-users] Re: CephX Authentication fails when only disabling "auth_cluster_required"

2017-03-31 Thread 许雪寒
By the way, we are using the Hammer version, 0.94.5. -Original Message- From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of 许雪寒 Sent: April 1, 2017 13:13 To: ceph-users@lists.ceph.com Subject: [ceph-users] Re: CephX Authentication fails when only disabling "auth_cluster_required" Hi, everyone. According to

[ceph-users] Re: rbd export-diff isn't counting the AioTruncate op correctly

2017-03-31 Thread 许雪寒
By the way, we are using the Hammer version, 0.94.5. -Original Message- From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of 许雪寒 Sent: April 1, 2017 10:37 To: ceph-users@lists.ceph.com Subject: [ceph-users] rbd export-diff isn't counting the AioTruncate op correctly Hi, everyone. Recently, in our test, we

[ceph-users] CephX Authentication fails when only disabling "auth_cluster_required"

2017-03-31 Thread 许雪寒
Hi, everyone. According to the documentation, “auth_cluster_required” means that “the Ceph Storage Cluster daemons (i.e., ceph-mon, ceph-osd, and ceph-mds) must authenticate with each other”. So I guess that if I only need to verify the client, then "auth_cluster_required" doesn't need to be enabled
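
For reference, the setup being described would look roughly like this in ceph.conf; a sketch of the intent (leave client/service authentication on, turn off only daemon-to-daemon authentication), not a recommendation:

    [global]
        # disable only authentication between cluster daemons
        auth_cluster_required = none
        # keep daemons authenticating to clients, and clients to the cluster
        auth_service_required = cephx
        auth_client_required  = cephx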

Re: [ceph-users] Kraken release and RGW --> "S3 bucket lifecycle API has been added. Note that currently it only supports object expiration."

2017-03-31 Thread Ben Hines
I'm also trying to use lifecycles (via boto3) but I'm getting permission denied when trying to create the lifecycle. I'm the bucket owner with full_control and WRITE_ACP for good measure. Any ideas? This is with debug ms=20 and debug radosgw=20: 2017-03-31 21:28:18.382217 7f50d0010700 2 req 8:0.000693:s3:PUT /be
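
For comparison, the same lifecycle can be pushed outside boto3, e.g. with s3cmd (assuming a version recent enough to have setlifecycle; the bucket name and prefix below are illustrative):

    cat > lifecycle.xml <<'EOF'
    <LifecycleConfiguration>
      <Rule>
        <ID>expire-old-objects</ID>
        <Prefix>logs/</Prefix>
        <Status>Enabled</Status>
        <Expiration><Days>30</Days></Expiration>
      </Rule>
    </LifecycleConfiguration>
    EOF
    s3cmd setlifecycle lifecycle.xml s3://my-bucket   # issues the same PUT ?lifecycle request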

[ceph-users] Pool available capacity estimates, made better

2017-03-31 Thread Xavier Villaneau
Hello, As part of the python-crush project, I am working on a feature to calculate the available usable space in the pools of a cluster. The idea is to make an accurate and conservative estimate that takes into account the exact PG mappings as well as any other information that could help quantify

[ceph-users] Strange crush / ceph-deploy issue

2017-03-31 Thread Reed Dier
Trying to add a batch of OSDs to my cluster (Jewel 10.2.6, Ubuntu 16.04): 2 new nodes (ceph01, ceph02), 10 OSDs per node. I am trying to steer the OSDs into a different CRUSH root, with the crush location set in ceph.conf with > [osd.34] > crush_location = "host=ceph01 rack=ssd.rack2 root=ssd" > >
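
If the crush_location hook doesn't place the new OSDs where expected, the hierarchy can also be built and the OSDs moved by hand; a sketch using the names from the post (the weight 1.0 is illustrative):

    ceph osd crush add-bucket ssd root              # create the new root
    ceph osd crush add-bucket ssd.rack2 rack
    ceph osd crush move ssd.rack2 root=ssd
    ceph osd crush create-or-move osd.34 1.0 host=ceph01 rack=ssd.rack2 root=ssd
    ceph osd tree                                   # verify where the OSDs landed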

[ceph-users] rbd export-diff isn't counting the AioTruncate op correctly

2017-03-31 Thread 许雪寒
Hi, everyone. Recently, in our test, we found that for some VM images that we exported from the original cluster and imported into another cluster, the images on the two clusters are not the same. The details of the test are as follows: at first, we fully exported the VM's images from the original
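
For context, a typical export-diff/import-diff round trip and a way to compare the two clusters looks roughly like this (pool, image and snapshot names are illustrative; the destination image is assumed to already exist):

    # on the source cluster
    rbd export-diff rbd/vm-image@snap1 vm-image_snap1.diff
    rbd export-diff --from-snap snap1 rbd/vm-image@snap2 vm-image_snap1_snap2.diff
    # on the destination cluster
    rbd import-diff vm-image_snap1.diff rbd/vm-image
    rbd import-diff vm-image_snap1_snap2.diff rbd/vm-image
    # compare: run on both clusters and diff the checksums
    rbd export rbd/vm-image@snap2 - | md5sum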

[ceph-users] Slow CephFS writes after Jewel upgrade from Infernalis

2017-03-31 Thread Richard Hesse
Hi, we recently upgraded one of our Ceph clusters from Infernalis to Jewel. The upgrade process went smoothly: upgraded the OSDs, restarted them in batches, waited for health OK, updated the mon and MDS, restarted, waited for health OK, etc. We then set the require_jewel_osds flag and upgraded our CephFS clients
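
The post-upgrade steps being described amount to roughly the following (a sketch; exact flags depend on the release you start from):

    ceph tell osd.* version          # confirm every OSD is running Jewel
    ceph osd set sortbitwise         # also recommended when coming from Infernalis
    ceph osd set require_jewel_osds
    ceph -s                          # wait for HEALTH_OK before upgrading clients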

[ceph-users] Problem upgrading Jewel from 10.2.3 to 10.2.6

2017-03-31 Thread Herbert Faleiros
Hi, when upgrading my cluster from 10.2.3 to 10.2.6 I've faced a major failure and I think it could(?) be a bug. My OS is Ubuntu (Xenial); the Ceph packages are also from the distro. My cluster has 3 monitors and 96 OSDs. First I stopped one mon, then upgraded the OS packages and rebooted; it came back on as expected
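
For reference, the usual rolling-upgrade pattern on a systemd distro like Xenial looks roughly like this (a sketch, one node at a time, waiting for HEALTH_OK in between):

    ceph osd set noout                  # avoid rebalancing while daemons restart
    apt-get update && apt-get dist-upgrade
    systemctl restart ceph-mon.target   # on each monitor node, one at a time
    systemctl restart ceph-osd.target   # then on each OSD node, one at a time
    ceph -s                             # wait for HEALTH_OK before the next node
    ceph osd unset noout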

Re: [ceph-users] CephFS fuse client users stuck

2017-03-31 Thread Andras Pataki
…of all clients and the MDS (at log level 20) to http://voms.simonsfoundation.org:50013/GTrbrMWDHb9F7CampXyYt5Ensdjg47w/ceph-20170331/ It essentially runs in a loop opening a file for read/write, reading from it and closing it. The read/write open is key: if we open the file read-only, the problem doesn't

Re: [ceph-users] Number of objects 'in' a snapshot?

2017-03-31 Thread Gregory Farnum
And to address the other question, no, there is no per-snapshot accounting. On Fri, Mar 31, 2017 at 1:28 AM Frédéric Nass < frederic.n...@univ-lorraine.fr> wrote: > I just realized that Nick's post is only a few days old. Found the > tracker : http://tracker.ceph.com/issues/19241 > > Frederic. >

Re: [ceph-users] How to mount different ceph FS using ceph-fuse or kernel cephfs mount

2017-03-31 Thread Deepak Naidu
Thanks John, that did the trick for ceph-fuse. But the kernel mount using the 4.9.15 kernel still hangs. I guess I might need a newer kernel? -- Deepak -Original Message- From: John Spray [mailto:jsp...@redhat.com] Sent: Friday, March 31, 2017 1:19 AM To: Deepak Naidu Cc: ceph-users Subject:
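
For anyone following along, the kernel mount being attempted would look roughly like this; the mds_namespace option needs a reasonably recent kernel, and the monitor address, filesystem name and secret file below are illustrative:

    mount -t ceph 192.168.1.10:6789:/ /mnt/cephfs \
          -o name=admin,secretfile=/etc/ceph/admin.secret,mds_namespace=myfs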

Re: [ceph-users] CephFS fuse client users stuck

2017-03-31 Thread John Spray
… > > The full test program is uploaded together with the verbose logs of all > clients and the MDS (at log level 20) to > http://voms.simonsfoundation.org:50013/GTrbrMWDHb9F7CampXyYt5Ensdjg47w/ceph-20170331/ > It essentially runs in a loop opening a file for read/write, reading f

Re: [ceph-users] FreeBSD port net/ceph-devel released

2017-03-31 Thread Willem Jan Withagen
On 31-3-2017 17:32, Wido den Hollander wrote: > Hi Willem Jan, > >> On 30 March 2017 at 13:56, Willem Jan Withagen >> wrote: >> >> >> Hi, >> >> I'm pleased to announce that my efforts to port to FreeBSD have >> resulted in a ceph-devel port commit in the ports tree. >> >> https://www.freshpo

Re: [ceph-users] Ceph OSD network with IPv6 SLAAC networks?

2017-03-31 Thread Wido den Hollander
> On 30 March 2017 at 20:13, Richard Hesse wrote: > > > Thanks for the reply, Wido! How do you handle IPv6 routes and routing with > IPv6 on the public and cluster networks? You mentioned that your cluster > network is routed, so they will need routes to reach the other racks. But > you can't have
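
A minimal sketch of the IPv6 side of the ceph.conf being discussed (the prefixes are illustrative documentation addresses; with SLAAC you still want the monitor addresses themselves to stay stable):

    [global]
        ms_bind_ipv6    = true
        public network  = 2001:db8:1::/64
        cluster network = 2001:db8:2::/64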

Re: [ceph-users] radosgw leaking objects

2017-03-31 Thread Yehuda Sadeh-Weinraub
On Fri, Mar 31, 2017 at 2:08 AM, Marius Vaitiekunas wrote: > > > On Fri, Mar 31, 2017 at 11:15 AM, Luis Periquito > wrote: >> >> But wasn't that what orphans finish was supposed to do? >> > > orphans finish only removes search results from a log pool. > Right. The tool isn't removing objects (yet
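
The orphan-scan workflow being discussed looks roughly like this (pool and job names are illustrative); as noted above, the tool only reports the leaked objects, and removing them is a separate manual step:

    radosgw-admin orphans find --pool=default.rgw.buckets.data --job-id=orphans1
    radosgw-admin orphans list-jobs
    radosgw-admin orphans finish --job-id=orphans1
    # removal of the reported rados objects is still up to the operator, e.g.:
    # rados -p default.rgw.buckets.data rm <object-name>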

Re: [ceph-users] FreeBSD port net/ceph-devel released

2017-03-31 Thread Wido den Hollander
Hi Willem Jan, > On 30 March 2017 at 13:56, Willem Jan Withagen wrote: > > > Hi, > > I'm pleased to announce that my efforts to port to FreeBSD have resulted > in a ceph-devel port commit in the ports tree. > > https://www.freshports.org/net/ceph-devel/ > Awesome work! I don't touch FreeBSD

Re: [ceph-users] disk timeouts in libvirt/qemu VMs...

2017-03-31 Thread Jason Dillaman
The exclusive-lock feature should only require grabbing the lock on the very first IO, so if this is an issue that pops up after extended use, it's most likely either not related to exclusive-lock or perhaps you had a client<->OSD link hiccup. In the latter case, you will see a log message like "im
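
A few commands that help confirm a watch/lock hiccup like the one described (the pool/image names are illustrative):

    rbd status rbd/vm-disk-1      # lists current watchers on the image header
    rbd lock ls rbd/vm-disk-1     # shows advisory locks, if any
    ceph osd blacklist ls         # check whether the client got blacklisted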

Re: [ceph-users] Client's read affinity

2017-03-31 Thread Jason Dillaman
Assuming you are asking about RBD-backed VMs, it is not possible to localize all reads to the VM image. You can, however, enable localization for the parent image, since that is a read-only data set. To enable that feature, set "rbd localize parent reads = true" and populate the "crush location = h
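
A minimal sketch of the client-side configuration being described (the host/rack/root values are illustrative and must match the compute node's position in the CRUSH map):

    [client]
        rbd localize parent reads = true
        crush location = host=compute-01 rack=rack1 root=default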

Re: [ceph-users] Client's read affinity

2017-03-31 Thread Alejandro Comisario
Any experiences? On Wed, Mar 29, 2017 at 2:02 PM, Alejandro Comisario wrote: > Guys, hi. > I have a Jewel cluster divided into two racks, which is configured in > the crush map. > I have clients (OpenStack compute nodes) that are closer to one rack > than to the other. > > I would love to (if is p

Re: [ceph-users] CephFS fuse client users stuck

2017-03-31 Thread Andras Pataki
…the MDS (at log level 20) to http://voms.simonsfoundation.org:50013/GTrbrMWDHb9F7CampXyYt5Ensdjg47w/ceph-20170331/ It essentially runs in a loop opening a file for read/write, reading from it and closing it. The read/write open is key: if we open the file read-only, the problem doesn't h

Re: [ceph-users] FSMAP Problem.

2017-03-31 Thread John Spray
On Fri, Mar 31, 2017 at 9:43 AM, Alexandre Blanca wrote: > Hi, > > After preparing and activating my OSDs, I created my CephFS: > > ceph fs new cephfs1 metadata1 data1 > new fs with metadata pool 11 and data pool 10 > > ceph osd pool ls > data1 > metadata1 > > ceph fs ls > name: cephfs1, metadata pool:

Re: [ceph-users] radosgw leaking objects

2017-03-31 Thread Marius Vaitiekunas
On Fri, Mar 31, 2017 at 11:15 AM, Luis Periquito wrote: > But wasn't that what orphans finish was supposed to do? > > orphans finish only removes search results from a log pool. -- Marius Vaitiekūnas

[ceph-users] FSMAP Problem.

2017-03-31 Thread Alexandre Blanca
Hi, after preparing and activating my OSDs I created my CephFS: ceph fs new cephfs1 metadata1 data1 new fs with metadata pool 11 and data pool 10 ceph osd pool ls data1 metadata1 ceph fs ls name: cephfs1, metadata pool: metadata1, data pools: [data1 ] ceph mds stat e65: 1/1/1 up {0=sfd-serv1=up:c

Re: [ceph-users] Number of objects 'in' a snapshot?

2017-03-31 Thread Frédéric Nass
I just realized that Nick's post is only a few days old. Found the tracker: http://tracker.ceph.com/issues/19241 Frederic. On 31/03/2017 at 10:12, Frédéric Nass wrote: Hi, Can we get the number of objects in a pool snapshot? (That is, how much will be removed on snapshot removal.) We're

Re: [ceph-users] How to mount different ceph FS using ceph-fuse or kernel cephfs mount

2017-03-31 Thread John Spray
Hmm, now that I look, --client_mds_namespace in Jewel only took an integer. So do "ceph fs dump", and use the number after the name of the filesystem you want (e.g. where mine says "Filesystem 'cephfs_a' (1)", I would use --client_mds_namespace=1). John On Thu, Mar 30, 2017 at 9:41 PM, Deepak
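
Putting that together, roughly (the monitor address and mount point are illustrative):

    ceph fs dump | grep Filesystem        # e.g. "Filesystem 'cephfs_a' (1)"
    ceph-fuse -m 192.168.1.10:6789 --client_mds_namespace=1 /mnt/cephfs_a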

[ceph-users] Number of objects 'in' a snapshot?

2017-03-31 Thread Frédéric Nass
Hi, can we get the number of objects in a pool snapshot? (That is, how much will be removed on snapshot removal.) We're facing terrible performance issues due to snapshot removal in Jewel. Nick warns about using "osd_snap_trim_sleep" in Jewel (https://www.mail-archive.com/ceph-users@lists.ce
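
Two commands relevant here, sketched for reference: rados df gives per-pool object and clone counts (but no per-snapshot breakdown), and osd_snap_trim_sleep, the setting the warning is about, is typically injected at runtime like this:

    rados df                                                 # per-pool objects/clones, not per snapshot
    ceph tell osd.* injectargs '--osd_snap_trim_sleep 0.1'   # the knob Nick warns about in Jewel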