Re: [ceph-users] Efficient storage of small objects / bulk erasure coding

2017-10-23 Thread Jiri Horky
Hi Greg, On 10/17/2017 11:49 PM, Gregory Farnum wrote: > On Tue, Oct 17, 2017 at 12:42 PM Jiri Horky > wrote: > > Hi list, > > we are thinking of building relatively big CEPH-based object > storage for > storage of our sample files - we have about 700M

Re: [ceph-users] Efficient storage of small objects / bulk erasure coding

2017-10-23 Thread Gregory Farnum
On Mon, Oct 23, 2017 at 9:37 AM Jiri Horky wrote: > Hi Greg, > > > On 10/17/2017 11:49 PM, Gregory Farnum wrote: > > On Tue, Oct 17, 2017 at 12:42 PM Jiri Horky wrote: > >> Hi list, >> >> we are thinking of building relatively big CEPH-based object storage for >> storage of our sample files - we

Re: [ceph-users] Problems with CORS

2017-10-23 Thread Rudenko Aleksandr
Thank you, David, for your suggestion. We added our domain (Origin) to the zonegroup's endpoints and hostnames: { "id": "default", "name": "default", "api_name": "", "is_master": "true", "endpoints": [ "https://console.{our_domain}.ru", ], "hostnames": [ "https
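For reference, modifying the zonegroup as described above is usually done by dumping it to JSON, editing the endpoints/hostnames, and loading it back; a rough sketch of that workflow (the zonegroup name and file name are placeholders, not taken from this thread):

  # dump the current zonegroup definition
  radosgw-admin zonegroup get --rgw-zonegroup=default > zonegroup.json
  # edit "endpoints" / "hostnames" in zonegroup.json, then load it back
  radosgw-admin zonegroup set --rgw-zonegroup=default < zonegroup.json
  # commit the period and restart the gateways so they pick up the change
  radosgw-admin period update --commit
  systemctl restart ceph-radosgw.target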

[ceph-users] Drive write cache recommendations for Luminous/Bluestore

2017-10-23 Thread Hans van den Bogert
Hi All, For Jewel there is this page about drive cache: http://docs.ceph.com/docs/jewel/rados/configuration/filesystem-recommendations/#hard-drive-prep For Bluestore I can't find any documentation or discussions about drive write cache, while I can imagine that revisiting this subject might be ne
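For what it's worth, the volatile write cache of a SATA/SAS drive can be inspected and toggled with hdparm; a minimal sketch (the device name is a placeholder):

  # show the current write-cache setting
  hdparm -W /dev/sdX
  # disable the volatile write cache (0 = off, 1 = on)
  hdparm -W 0 /dev/sdX

Whether disabling it is still worthwhile on Bluestore is exactly the open question here.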

[ceph-users] librbd on CentOS7

2017-10-23 Thread Wolfgang Lendl
Hello, we're testing KVM on CentOS 7 as a Ceph (Luminous) client. CentOS 7 has a librbd package in its base repository with version 0.94.5. The question is (aside from feature support) whether we should install a recent librbd from the Ceph repositories (12.2.x) or stay with the default one. My main conc

[ceph-users] ceph index is not complete

2017-10-23 Thread vyyy杨雨阳
Hello, I found a bucket in which some objects cannot be listed. Bucket stats shows there are 3182 objects, but swift list or s3 only shows 2028 objects; listomapkeys also shows 2028 entries, excluding multipart. I have run radosgw-admin bucket check --fix --check-objects
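A sketch of the kind of comparison described above (the bucket name, index pool and bucket-index marker are placeholders):

  # object count as recorded in the bucket index
  radosgw-admin bucket stats --bucket=<bucket>
  # re-check the index against the actual objects and repair it
  radosgw-admin bucket check --bucket=<bucket> --check-objects --fix
  # list the raw omap keys of one bucket-index object
  rados -p default.rgw.buckets.index listomapkeys .dir.<bucket_id>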

Re: [ceph-users] librbd on CentOS7

2017-10-23 Thread Jason Dillaman
Feel free to update the CentOS client libraries as well. The base EL7 packages are updated on an as-needed basis and due to layered product dependencies, sometimes it takes a lot of push to get them to be updated. I'd suspect that the packages will be updated again at some point during the lifetime
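If you do decide to upgrade, one way to pull newer client libraries onto CentOS 7 is via the Storage SIG repository; a sketch (the repo choice and package names are an assumption, not from this thread):

  # enable the Ceph Luminous repository from the CentOS Storage SIG
  yum install -y centos-release-ceph-luminous
  # update only the client libraries
  yum update -y librbd1 librados2

Running guests only pick up the new librbd after the QEMU processes are restarted or live-migrated.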

[ceph-users] High osd cpu usage ( luminous )

2017-10-23 Thread Yair Magnezi
Hello guys, we have a fresh Luminous (12.2.0) (32ce2a3ae5239ee33d6150705cdb24d43bab910c) (rc) cluster, installed using ceph-ansible, containing 6 Intel S2600WTTR server boards (96 OSDs and 3 mons). We have 6 nodes (Intel server board S2600WTTR), Mem - 64G, CPU -> I

Re: [ceph-users] Efficient storage of small objects / bulk erasure coding

2017-10-23 Thread John Spray
On Tue, Oct 17, 2017 at 9:42 PM, Jiri Horky wrote: > Hi list, > > we are thinking of building relatively big CEPH-based object storage for > storage of our sample files - we have about 700M files ranging from very > small (1-4KiB) files to pretty big ones (several GiB). Median of file > size is 64

Re: [ceph-users] Efficient storage of small objects / bulk erasure coding

2017-10-23 Thread Jiri Horky
Hi John, On 10/23/2017 02:59 PM, John Spray wrote: > On Tue, Oct 17, 2017 at 9:42 PM, Jiri Horky wrote: >> Hi list, >> >> we are thinking of building relatively big CEPH-based object storage for >> storage of our sample files - we have about 700M files ranging from very >> small (1-4KiB) files to

Re: [ceph-users] How to increase the size of requests written to a ceph image

2017-10-23 Thread Russell Glaue
The two newest machines have the LSI MegaRAID SAS-3 3008 [Fury]. The first one performs the best of the four. The second one is the problem host. The Non-RAID option just takes RAID configuration out of the picture so ceph can have direct access to the disk. We need that to have ceph's support of t

[ceph-users] Continuous error: "libceph: monX session lost, hunting for new mon" on one host

2017-10-23 Thread Marco Baldini - H.S. Amiata
Hello I have a CEPH cluster with 3 nodes, each with 3 OSDs, running Proxmox, CEPH  versions: { "mon": { "ceph version 12.2.1 (1a629971a9bcaaae99e5539a3a43f800a297f267) luminous (stable)": 3 }, "mgr": { "ceph version 12.2.1 (1a629971a9bcaaae99e5539a3a43f800a297f267)

Re: [ceph-users] Continuous error: "libceph: monX session lost, hunting for new mon" on one host

2017-10-23 Thread Denes Dolhay
Hi, maybe some routing issue? "CEPH has public and cluster network on 10.10.10.0/24" — does this mean that the nodes have the public and cluster networks separately, both on 10.10.10.0/24, or that you did not specify a separate cluster network? Please provide the route table, ifconfig, and ceph.conf. Regards

Re: [ceph-users] Continuous error: "libceph: monX session lost, hunting for new mon" on one host

2017-10-23 Thread Marco Baldini - H.S. Amiata
Thanks for reply My ceph.conf: [global] auth client required = none auth cluster required = none auth service required = none bluestore_block_db_size = 64424509440 *cluster network = 10.10.10.0/24* fsid = 24d5d6bc-0

Re: [ceph-users] Continuous error: "libceph: monX session lost, hunting for new mon" on one host

2017-10-23 Thread Alwin Antreich
Hi Marco, On Mon, Oct 23, 2017 at 04:10:34PM +0200, Marco Baldini - H.S. Amiata wrote: > Thanks for reply > > My ceph.conf: > >[global] > auth client required = none > auth cluster required = none > auth service required = none > bluestore_bl

Re: [ceph-users] UID Restrictions

2017-10-23 Thread Keane Wolter
Hi Gregory, I did set the cephx caps for the client to: caps: [mds] allow r, allow rw uid=100026 path=/user, allow rw path=/project caps: [mon] allow r caps: [osd] allow rw pool=cephfs_osiris, allow rw pool=cephfs_users Keane On Fri, Oct 20, 2017 at 5:35 PM, Gregory Farnum wrote: > What did y
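For reference, caps like the ones quoted above are normally applied with ceph auth caps; a sketch using a placeholder client name:

  ceph auth caps client.example \
    mds 'allow r, allow rw uid=100026 path=/user, allow rw path=/project' \
    mon 'allow r' \
    osd 'allow rw pool=cephfs_osiris, allow rw pool=cephfs_users'
  # verify what the cluster actually stored
  ceph auth get client.example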

Re: [ceph-users] Retrieve progress of volume flattening using RBD python library

2017-10-23 Thread Xavier Trilla
Hi guys, no ideas about how to do that? Does anybody know where we could ask about librbd python library usage? Thanks! Xavier. From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On behalf of Xavier Trilla Sent: Tuesday, 17 October 2017 11:55 To: ceph-users@lists.ceph.com A

Re: [ceph-users] Speeding up garbage collection in RGW

2017-10-23 Thread David Turner
We recently deleted a bucket that was no longer needed and that had 400TB of data in it, to help as our cluster is getting quite full. That should free up about 30% of our cluster's used space, but in the last week we haven't seen anywhere near that amount freed up yet. I left the cluster with this runni
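Deleted RGW objects are reclaimed asynchronously by the garbage collector, which can be inspected and driven manually; a sketch:

  # show objects currently queued for garbage collection
  radosgw-admin gc list --include-all
  # run a collection pass now instead of waiting for the scheduled one
  radosgw-admin gc process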

Re: [ceph-users] Retrieve progress of volume flattening using RBD python library

2017-10-23 Thread Jason Dillaman
The current RBD python API does not expose callbacks from the wrapped C API so it is not currently possible to retrieve the flatten, remove, etc progress indications. Improvements to the API are always welcomed. On Mon, Oct 23, 2017 at 11:06 AM, Xavier Trilla wrote: > Hi guys, > > > > No ideas ab

Re: [ceph-users] Continuous error: "libceph: monX session lost, hunting for new mon" on one host

2017-10-23 Thread Denes Dolhay
Hi, so you are running both the public and the cluster traffic on the same network; this is supported, but in that case you do not have to specify either network in the configuration. It is just a wild guess, but maybe this is the cause of your problem! Denes. On 10/23/2017 04:26 PM, Alwin

Re: [ceph-users] Looking for help with debugging cephfs snapshots

2017-10-23 Thread David Turner
purged_snaps persists indefinitely. If the list gets too large it is abbreviated a bit, but it can cause your osdmap to get a fair bit larger because it keeps track of them. On Sun, Oct 22, 2017 at 10:39 PM Eric Eastman wrote: > On Sun, Oct 22, 2017 at 8:05 PM, Yan, Zheng wrote: > >> On

Re: [ceph-users] Continuous error: "libceph: monX session lost, hunting for new mon" on one host

2017-10-23 Thread Marco Baldini - H.S. Amiata
Hello, the ceph-mon services do not restart on any node; yesterday I manually restarted ceph-mon and ceph-mgr on every node and since then they have not restarted. *pve-hs-2$ systemctl status ceph-mon@pve-hs-2.service* ceph-mon@pve-hs-2.service - Ceph cluster monitor daemon Loaded: loaded (/lib/sy

Re: [ceph-users] Continuous error: "libceph: monX session lost, hunting for new mon" on one host

2017-10-23 Thread Marco Baldini - H.S. Amiata
Hi, I used the pveceph tool provided with Proxmox to initialize Ceph. I can change it, but in that case should I put only the public network or only the cluster network in ceph.conf? Thanks. On 23/10/2017 17:33, Denes Dolhay wrote: Hi, So, you are running both the public and the cluster on the

Re: [ceph-users] Qs on caches, and cephfs

2017-10-23 Thread David Turner
Multiple cache tiers? 2 tiers to 1 pool, or a cache tier on top of a cache tier? Neither is discussed or mentioned anywhere. At best it might work, but it isn't tested for a new release. One cache for multiple pools? Same as above. The Luminous docs for cache tiering were updated with "A Word of Caution", wh
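For comparison, the documented and tested setup is a single cache pool in front of a single base pool; a minimal sketch with placeholder pool names:

  # attach 'cachepool' as a tier of 'basepool' in writeback mode
  ceph osd tier add basepool cachepool
  ceph osd tier cache-mode cachepool writeback
  # redirect client traffic for basepool through the cache
  ceph osd tier set-overlay basepool cachepool
  # writeback mode requires a hit set
  ceph osd pool set cachepool hit_set_type bloom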

Re: [ceph-users] Continuous error: "libceph: monX session lost, hunting for new mon" on one host

2017-10-23 Thread Denes Dolhay
Hi, I only have a virtual PoC cluster created by ceph-deploy, using only one network, same as you. I just checked: its configuration contains neither a public nor a cluster network. I guess when there is only one there is no point... Denes. On 10/23/2017 05:52 PM, Marco Baldini - H.S.

Re: [ceph-users] Continuous error: "libceph: monX session lost, hunting for new mon" on one host

2017-10-23 Thread Marco Baldini - H.S. Amiata
Hi, thanks for the reply, but my servers have several networks, so I think I have to tell Ceph which network it should use. On 23/10/2017 18:10, Denes Dolhay wrote: Hi, I only have a virtual PoC cluster created by ceph-deploy, using only one network, same as you. I just checked, it's configu
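If the hosts have several networks, pinning the Ceph traffic explicitly is reasonable; a minimal ceph.conf sketch (the second subnet is a placeholder, only 10.10.10.0/24 appears in this thread):

  [global]
      # network used by mons, clients and OSD front-side traffic
      public network = 10.10.10.0/24
      # only set this if replication really runs on a separate subnet,
      # otherwise omit it entirely
      # cluster network = 10.10.20.0/24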

Re: [ceph-users] Qs on caches, and cephfs

2017-10-23 Thread John Spray
On Mon, Oct 23, 2017 at 7:50 AM, Jeff wrote: > Hey everyone, > > Long time listener first time caller. > Thank you to everyone who works on Ceph, docs and code, I'm loving Ceph. > I've been playing with Ceph for awhile and have a few Qs. > > Ceph cache tiers, can you have multiple tiered caches? >

[ceph-users] Erasure code profile

2017-10-23 Thread Karun Josy
Hi, while creating a pool with erasure code profile k=10, m=4, I get PG status "200 creating+incomplete", while creating a pool with profile k=5, m=3 works fine. The cluster has 8 OSD hosts with 23 OSDs in total. Are there any requirements for setting the first profile? Karun

Re: [ceph-users] Erasure code profile

2017-10-23 Thread LOPEZ Jean-Charles
Hi, yes, you need at least as many OSDs as k+m. In your example you need a minimum of 14 OSDs for each PG to become active+clean. Regards, JC > On 23 Oct 2017, at 20:29, Karun Josy wrote: > > Hi, > > While creating a pool with erasure code profile k=10, m=4, I get PG status as > "200 crea
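As a worked illustration of the k+m requirement, a sketch of creating the profile and the pool (profile name, pool name and PG count are placeholders):

  # k=10, m=4: every PG needs 14 distinct failure domains
  ceph osd erasure-code-profile set ec-10-4 k=10 m=4
  ceph osd erasure-code-profile get ec-10-4
  # the PGs stay creating+incomplete if CRUSH cannot find 14 of them
  ceph osd pool create ecpool 200 200 erasure ec-10-4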

Re: [ceph-users] Erasure code profile

2017-10-23 Thread Ronny Aasen
On 23.10.2017 20:29, Karun Josy wrote: Hi, While creating a pool with erasure code profile k=10, m=4, I get PG status as "200 creating+incomplete" While creating pool with profile k=5, m=3 it works fine. Cluster has 8 OSDs with total 23 disks. Is there any requirements for setting the first

Re: [ceph-users] Erasure code profile

2017-10-23 Thread Jorge Pinilla López
I have one question: what can or can't a cluster do while working in degraded mode? With k=10, m=4, if one of my OSD nodes fails the pool will start working in degraded mode, but can I still write to and read from it? On 23/10/2017 at 21:01, Ronny Aasen wrote: > On 23.10.2017 20:29, Karun J

Re: [ceph-users] Erasure code profile

2017-10-23 Thread Karun Josy
Thank you for the reply. There are 8 OSD nodes with 23 OSDs in total (however, they are not distributed equally across all nodes). So it satisfies that criterion, right? Karun Josy On Tue, Oct 24, 2017 at 12:30 AM, LOPEZ Jean-Charles wrote: > Hi, > > yes you need as many OSDs that k+m is equal t

Re: [ceph-users] Erasure code profile

2017-10-23 Thread LOPEZ Jean-Charles
Hi, the default failure domain, if not specified on the CLI when you create your EC profile, is host. So by default you need 14 OSDs spread across 14 different nodes, and you only have 8 different nodes. Regards, JC > On 23 Oct 2017, at 21:13, Karun Josy wrote: > > Thank you for

Re: [ceph-users] Erasure code profile

2017-10-23 Thread David Turner
This can be changed to a failure domain of OSD in which case it could satisfy the criteria. The problem with a failure domain of OSD, is that all of your data could reside on a single host and you could lose access to your data after restarting a single host. On Mon, Oct 23, 2017 at 3:23 PM LOPEZ
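A sketch of what that looks like (names are placeholders); as noted, crush-failure-domain=osd trades away host-level fault tolerance:

  # place shards per OSD instead of per host
  ceph osd erasure-code-profile set ec-10-4-osd k=10 m=4 crush-failure-domain=osd
  ceph osd pool create ecpool 200 200 erasure ec-10-4-osd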

Re: [ceph-users] Erasure code profile

2017-10-23 Thread Jorge Pinilla López
If you use an OSD failure domain and a node goes down, you can lose your data and the cluster won't be able to work. If you restart the OSDs it might work, but you could even lose your data, as your cluster can't rebuild itself. You can try to work out where the CRUSH rule is going to place your data, but I
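One way to check where a rule would place the shards, as suggested above, is to test the compiled CRUSH map offline; a sketch (the rule id and replica count are examples):

  # dump and test the CRUSH map without touching the cluster
  ceph osd getcrushmap -o crushmap.bin
  crushtool -i crushmap.bin --test --rule 1 --num-rep 14 --show-mappings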

Re: [ceph-users] Retrieve progress of volume flattening using RBD python library

2017-10-23 Thread Xavier Trilla
Hi Jason, thanks for your reply. Ok, well, we'll look into it then ;) Thanks, Xavier. On 23 Oct 2017, at 17:23, Jason Dillaman <jdill...@redhat.com> wrote: The current RBD python API does not expose callbacks from the wrapped C API so it is not currently possible to retrieve the

Re: [ceph-users] Inconsistent PG won't repair

2017-10-23 Thread Richard Bade
What I'm thinking about trying is using ceph-objectstore-tool to remove the offending clone metadata. From the help, the syntax is: ceph-objectstore-tool ... remove-clone-metadata, i.e. something like this for my object and the expected clone from the log message: ceph-objectstore-tool rbd_data.19cd
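For anyone attempting the same, the general shape of that operation is sketched below; the OSD id, data path, PG id, object name and clone id are all placeholders, and the OSD has to be stopped while the tool runs (on filestore OSDs --journal-path may also be needed):

  systemctl stop ceph-osd@<id>
  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-<id> \
      --pgid <pgid> '<object-name>' remove-clone-metadata <cloneid>
  systemctl start ceph-osd@<id>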

Re: [ceph-users] [Jewel] Crash Osd with void Hit_set_trim

2017-10-23 Thread Brad Hubbard
On Mon, Oct 23, 2017 at 4:51 PM, pascal.pu...@pci-conseil.net < pascal.pu...@pci-conseil.net> wrote: > Hello, > On 23/10/2017 at 02:05, Brad Hubbard wrote: > > 2017-10-22 17:32:56.031086 7f3acaff5700 1 osd.14 pg_epoch: 72024 > pg[37.1c( v 71593'41657 (60849'38594,71593'41657] local-les=72023 n=

Re: [ceph-users] [Jewel] Crash Osd with void Hit_set_trim

2017-10-23 Thread Brad Hubbard
On Tue, Oct 24, 2017 at 3:49 PM, Brad Hubbard wrote: > > > On Mon, Oct 23, 2017 at 4:51 PM, pascal.pu...@pci-conseil.net < > pascal.pu...@pci-conseil.net> wrote: > >> Hello, >> On 23/10/2017 at 02:05, Brad Hubbard wrote: >> >> 2017-10-22 17:32:56.031086 7f3acaff5700 1 osd.14 pg_epoch: 72024 >>

Re: [ceph-users] [Jewel] Crash Osd with void Hit_set_trim

2017-10-23 Thread pascal.pu...@pci-conseil.net
Hello, Le 24/10/2017 à 07:49, Brad Hubbard a écrit : On Mon, Oct 23, 2017 at 4:51 PM, pascal.pu...@pci-conseil.net > wrote: Hello, Le 23/10/2017 à 02:05, Brad Hubbard a écrit : 2017-10-22 17:32:56.03108