Re: [ceph-users] Living with huge bucket sizes

2017-06-09 Thread Yehuda Sadeh-Weinraub
On Fri, Jun 9, 2017 at 2:21 AM, Dan van der Ster wrote: > Hi Bryan, > > On Fri, Jun 9, 2017 at 1:55 AM, Bryan Stillwell wrote: >> This has come up quite a few times before, but since I was only working with RBD before I didn't pay too close attention to the conversation. I'm looking

[ceph-users] RGW: Auth error with hostname instead of IP

2017-06-09 Thread Eric Choi
When I send an RGW request using the hostname (with a port other than 80), I see a "SignatureDoesNotMatch" error. GET / HTTP/1.1 Host: cephrgw0002s2mdw1.sendgrid.net:50680 User-Agent: Minio (linux; amd64) minio-go/2.0.4 mc/2017-04-03T18:35:01Z Authorization: AWS **REDACTED**:**REDACTED** Si
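
With the v2-style Authorization header shown (AWS access-key:signature), the Host header itself is not signed, but RGW uses it to decide whether the request is bucket-in-path or bucket-in-hostname, so a gateway that does not recognize the hostname (or the host:port form) it is being addressed by can end up canonicalizing a different resource than the client signed. A rough sketch of how one might narrow this down; the admin socket path is only an example, and whether rgw dns name is the actual culprit here is an assumption:

  # Raise RGW debug so the gateway logs the signature input it computes,
  # then compare it with what the client signed (socket path is an example).
  ceph daemon /var/run/ceph/ceph-client.rgw.gateway1.asok config set debug_rgw 20

  # If requests address the gateway by hostname, make sure RGW knows its own
  # DNS name, e.g. in ceph.conf under the rgw client section, then restart it:
  #   rgw dns name = cephrgw0002s2mdw1.sendgrid.net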

[ceph-users] disk mishap + bad disk and xfs corruption = stuck PG's

2017-06-09 Thread Mazzystr
Well, I did something bad; I just don't know how bad yet. Before we get into it: my critical data is backed up to CrashPlan. I'd rather not lose all my archive data, though losing some of it is OK. I added a bunch of disks to my ceph cluster, so I turned off the cluster and dd'd the raw disks around so that t
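
Before deciding what is and is not recoverable, it helps to get a precise list of which PGs are stuck and what they are waiting for. Nothing below is specific to this cluster, and the PG id is just an example:

  ceph health detail               # lists the stuck/down PGs and why
  ceph pg dump_stuck unclean       # PGs that have not reached active+clean
  ceph pg 3.1f query               # per-PG view: acting set, peering blockers
  ceph osd tree                    # confirm which OSDs are up/in after the disk moves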

[ceph-users] OSD crash (hammer): osd/ReplicatedPG.cc: 7477: FAILED assert(repop_queue.front() == repop)

2017-06-09 Thread Ricardo J. Barberis
Hi list, A few days ago we had some problems with our ceph cluster, and now we have some OSDs crashing on start, with messages like this right before the crash: 2017-06-09 15:35:02.226430 7fb46d9e4700 -1 log_channel(cluster) log [ERR] : trim_object Snap 4aae0 not in clones I can start those OSDs
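
When an OSD aborts on an assert like this, the usual first step is a verbose foreground start so the full stack trace and the PG/object involved are captured. A rough sketch only; the OSD id is an example and the flag spelling assumes a Hammer-era ceph-osd:

  # One foreground start (-d logs to stderr) with high OSD debugging; keep the output.
  ceph-osd -d -i 12 --debug_osd 20 --debug_ms 1 2>&1 | tee /tmp/osd.12.start.log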

Re: [ceph-users] OSD node type/count mixes in the cluster

2017-06-09 Thread Deepak Naidu
Thanks David for sharing your experience, appreciate it. -- Deepak

Re: [ceph-users] removing cluster name support

2017-06-09 Thread Sage Weil
On Fri, 9 Jun 2017, Dan van der Ster wrote: > On Fri, Jun 9, 2017 at 5:58 PM, Vasu Kulkarni wrote: > > On Fri, Jun 9, 2017 at 6:11 AM, Wes Dillingham wrote: > >> Similar to Dan's situation we utilize the --cluster name concept for our operations. Primarily for "datamover" nodes which do

Re: [ceph-users] removing cluster name support

2017-06-09 Thread Dan van der Ster
On Fri, Jun 9, 2017 at 5:58 PM, Vasu Kulkarni wrote: > On Fri, Jun 9, 2017 at 6:11 AM, Wes Dillingham wrote: >> Similar to Dan's situation we utilize the --cluster name concept for our operations. Primarily for "datamover" nodes which do incremental rbd import/export between distinct clus

Re: [ceph-users] removing cluster name support

2017-06-09 Thread Sage Weil
On Fri, 9 Jun 2017, Erik McCormick wrote: > On Fri, Jun 9, 2017 at 12:07 PM, Sage Weil wrote: > > On Thu, 8 Jun 2017, Sage Weil wrote: > >> Questions: > >> > >> - Does anybody on the list use a non-default cluster name? > >> - If so, do you have a reason not to switch back to 'ceph'? > > > > It

Re: [ceph-users] removing cluster name support

2017-06-09 Thread Erik McCormick
On Fri, Jun 9, 2017 at 12:07 PM, Sage Weil wrote: > On Thu, 8 Jun 2017, Sage Weil wrote: >> Questions: >> >> - Does anybody on the list use a non-default cluster name? >> - If so, do you have a reason not to switch back to 'ceph'? > > It sounds like the answer is "yes," but not for daemons. Seve

Re: [ceph-users] removing cluster name support

2017-06-09 Thread Sage Weil
On Thu, 8 Jun 2017, Sage Weil wrote: > Questions: > > - Does anybody on the list use a non-default cluster name? > - If so, do you have a reason not to switch back to 'ceph'? It sounds like the answer is "yes," but not for daemons. Several users use it on the client side to connect to multiple
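
For the client-side case, --cluster is essentially shorthand for picking a conf file and the matching default keyring, so the same effect is available without it. A sketch of the equivalence, assuming a second cluster's config has been copied to /etc/ceph/backup.conf (hypothetical name):

  # These two are roughly equivalent on the client side:
  ceph --cluster backup status
  ceph --conf /etc/ceph/backup.conf --keyring /etc/ceph/backup.client.admin.keyring status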

Re: [ceph-users] removing cluster name support

2017-06-09 Thread Vasu Kulkarni
On Fri, Jun 9, 2017 at 6:11 AM, Wes Dillingham wrote: > Similar to Dan's situation we utilize the --cluster name concept for our operations. Primarily for "datamover" nodes which do incremental rbd import/export between distinct clusters. This is entirely coordinated by utilizing the --clust

[ceph-users] RGW radosgw-admin reshard bucket ends with ERROR: bi_list(): (4) Interrupted system call

2017-06-09 Thread Andreas Calminder
Hi, I'm trying to reshard a rather large bucket (13M+ objects) as per the Red Hat documentation (https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/2/html/object_gateway_guide_for_ubuntu/administration_cli#resharding-bucket-index) so that I can delete it. The process starts and runs
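
For reference, the offline reshard that the linked document describes boils down to roughly the following; the bucket name and shard count here are examples, not the poster's values:

  radosgw-admin bucket stats --bucket=bigbucket                     # size / object count
  radosgw-admin bucket reshard --bucket=bigbucket --num-shards=128  # writes a new index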

Re: [ceph-users] removing cluster name support

2017-06-09 Thread Wes Dillingham
Similar to Dan's situation we utilize the --cluster name concept for our operations. Primarily for "datamover" nodes which do incremental rbd import/export between distinct clusters. This is entirely coordinated by utilizing the --cluster option throughout. The way we set it up is that all cluster
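
For context, the datamover pattern described here is usually a pipe between two rbd invocations pointed at different conf files via --cluster. A minimal sketch, with cluster, pool, image, and snapshot names all hypothetical:

  # Incrementally ship the changes between snap1 and snap2 from cluster "prod"
  # to cluster "backup" (/etc/ceph/prod.conf and /etc/ceph/backup.conf).
  rbd --cluster prod export-diff --from-snap snap1 rbd/vm-disk@snap2 - \
    | rbd --cluster backup import-diff - rbd/vm-disk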

Re: [ceph-users] OSD node type/count mixes in the cluster

2017-06-09 Thread David Turner
I ran a cluster with 2 generations of the same vendor hardware: 24-OSD Supermicro and 32-OSD Supermicro (with faster/more RAM and CPU cores). The cluster itself ran decently well, but the load difference was drastic between the 2 types of nodes. It required me to run the cluster with 2 separate c
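
A quick way to see that kind of imbalance from the cluster itself (nothing here is specific to this setup):

  ceph osd df tree     # per-OSD and per-host utilization, weight and variance
  ceph osd perf        # commit/apply latency per OSD, often uneven across hardware generations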

Re: [ceph-users] removing cluster name support

2017-06-09 Thread Alfredo Deza
On Thu, Jun 8, 2017 at 3:54 PM, Sage Weil wrote: > On Thu, 8 Jun 2017, Bassam Tabbara wrote: >> Thanks Sage. >> >> > At CDM yesterday we talked about removing the ability to name your ceph clusters. >> >> Just to be clear, it would still be possible to run multiple ceph clusters on the sam

Re: [ceph-users] removing cluster name support

2017-06-09 Thread Tim Serong
On 06/09/2017 06:41 AM, Benjeman Meekhof wrote: > Hi Sage, > > We did at one time run multiple clusters on our OSD nodes and RGW nodes (with Jewel). We accomplished this by putting code in our puppet-ceph module that would create additional systemd units with appropriate CLUSTER=name enviro
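
The stock unit templates set Environment=CLUSTER=ceph and pass --cluster ${CLUSTER} to the daemon, so per-cluster units like those described here come down to overriding that variable. A minimal sketch for a host whose daemons all belong to a cluster named "backup" (name and OSD id are hypothetical); running two clusters side by side needs duplicated unit templates instead, as the message above describes:

  # /etc/systemd/system/ceph-osd@.service.d/cluster.conf
  #   [Service]
  #   Environment=CLUSTER=backup

  systemctl daemon-reload
  systemctl restart ceph-osd@12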

Re: [ceph-users] Living with huge bucket sizes

2017-06-09 Thread Dan van der Ster
Hi Bryan, On Fri, Jun 9, 2017 at 1:55 AM, Bryan Stillwell wrote: > This has come up quite a few times before, but since I was only working with RBD before I didn't pay too close attention to the conversation. I'm looking for the best way to handle existing clusters that have buckets with a
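
For context, in this era the index shard count is fixed when a bucket is created, so the usual questions are how many shards an existing bucket's index has and whether new buckets get sharded by default. A rough sketch; the bucket name, instance id, and shard value are purely illustrative:

  # num_shards for an existing bucket lives in the bucket instance metadata:
  radosgw-admin metadata get bucket:bigbucket                        # gives the bucket_id
  radosgw-admin metadata get bucket.instance:bigbucket:<bucket_id>   # shows num_shards

  # Default sharding for newly created buckets (ceph.conf, rgw section):
  #   rgw override bucket index max shards = 32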

Re: [ceph-users] rados rm: device or resource busy

2017-06-09 Thread Jan Kasprzak
Hello, Brad Hubbard wrote: : I can reproduce this. [...] : That's here where you will notice it is returning EBUSY which is error : code 16, "Device or resource busy". : : https://github.com/badone/ceph/blob/wip-ceph_test_admin_socket_output/src/cls/lock/cls_lock.cc#L189 : : In order t