On Fri, Jun 9, 2017 at 2:21 AM, Dan van der Ster wrote:
> Hi Bryan,
>
> On Fri, Jun 9, 2017 at 1:55 AM, Bryan Stillwell
> wrote:
>> This has come up quite a few times before, but since I was only working with
>> RBD before I didn't pay too close attention to the conversation. I'm
>> looking
>>
When I send an RGW request using the hostname (with a port other than 80), I
see a "SignatureDoesNotMatch" error.
GET / HTTP/1.1
Host: cephrgw0002s2mdw1.sendgrid.net:50680
User-Agent: Minio (linux; amd64) minio-go/2.0.4 mc/2017-04-03T18:35:01Z
Authorization: AWS **REDACTED**:**REDACTED**
Si
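For comparison, a minimal sketch of the same kind of request from Python/boto3 (endpoint as above, credentials are placeholders; signature_version "s3" selects the V2 scheme the Minio client is using, and path-style addressing avoids depending on rgw_dns_name for bucket resolution):

import boto3
from botocore.client import Config

# Placeholders -- substitute real credentials.
s3 = boto3.client(
    "s3",
    endpoint_url="http://cephrgw0002s2mdw1.sendgrid.net:50680",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
    # "s3" = AWS signature V2 (matches the "Authorization: AWS ..." header above);
    # path-style addressing sidesteps bucket-from-Host resolution.
    config=Config(signature_version="s3",
                  s3={"addressing_style": "path"}),
)
print(s3.list_buckets())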
Well, I did something bad; I just don't know how bad yet. Before we get into it: my
critical data is backed up to CrashPlan. I'd rather not lose all my
archive data, though losing some of it is OK.
I added a bunch of disks to my Ceph cluster, so I turned off the cluster and
dd'd the raw disks around so that t
Hi list,
A few days ago we had some problems with our Ceph cluster, and now some of our
OSDs crash on start, logging messages like this right before the crash:
2017-06-09 15:35:02.226430 7fb46d9e4700 -1 log_channel(cluster) log [ERR] :
trim_object Snap 4aae0 not in clones
I can start those OSDs
Thanks, David, for sharing your experience; I appreciate it.
--
Deepak
From: David Turner [mailto:drakonst...@gmail.com]
Sent: Friday, June 09, 2017 5:38 AM
To: Deepak Naidu; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] OSD node type/count mixes in the cluster
I ran a cluster with 2 generati
On Fri, 9 Jun 2017, Dan van der Ster wrote:
> On Fri, Jun 9, 2017 at 5:58 PM, Vasu Kulkarni wrote:
> > On Fri, Jun 9, 2017 at 6:11 AM, Wes Dillingham
> > wrote:
> >> Similar to Dan's situation we utilize the --cluster name concept for our
> >> operations. Primarily for "datamover" nodes which do
On Fri, Jun 9, 2017 at 5:58 PM, Vasu Kulkarni wrote:
> On Fri, Jun 9, 2017 at 6:11 AM, Wes Dillingham
> wrote:
>> Similar to Dan's situation we utilize the --cluster name concept for our
>> operations. Primarily for "datamover" nodes which do incremental rbd
>> import/export between distinct clus
On Fri, 9 Jun 2017, Erik McCormick wrote:
> On Fri, Jun 9, 2017 at 12:07 PM, Sage Weil wrote:
> > On Thu, 8 Jun 2017, Sage Weil wrote:
> >> Questions:
> >>
> >> - Does anybody on the list use a non-default cluster name?
> >> - If so, do you have a reason not to switch back to 'ceph'?
> >
> > It
On Fri, Jun 9, 2017 at 12:07 PM, Sage Weil wrote:
> On Thu, 8 Jun 2017, Sage Weil wrote:
>> Questions:
>>
>> - Does anybody on the list use a non-default cluster name?
>> - If so, do you have a reason not to switch back to 'ceph'?
>
> It sounds like the answer is "yes," but not for daemons. Seve
On Thu, 8 Jun 2017, Sage Weil wrote:
> Questions:
>
> - Does anybody on the list use a non-default cluster name?
> - If so, do you have a reason not to switch back to 'ceph'?
It sounds like the answer is "yes," but not for daemons. Several users use
it on the client side to connect to multiple
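As a rough illustration of that client-side pattern, a minimal sketch with the python rados bindings, assuming per-cluster conf files in /etc/ceph (the second cluster name "flax" is made up):

import rados

# The client picks the cluster purely by clustername/conffile;
# nothing on the daemon side needs a non-default name for this.
for name in ("ceph", "flax"):
    cluster = rados.Rados(clustername=name,
                          conffile="/etc/ceph/%s.conf" % name,
                          rados_id="admin")
    cluster.connect()
    print(name, cluster.get_fsid())
    cluster.shutdown()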
On Fri, Jun 9, 2017 at 6:11 AM, Wes Dillingham
wrote:
> Similar to Dan's situation we utilize the --cluster name concept for our
> operations. Primarily for "datamover" nodes which do incremental rbd
> import/export between distinct clusters. This is entirely coordinated by
> utilizing the --clust
Hi,
I'm trying to reshard a rather large bucket (13M+ objects), as per the
Red Hat documentation
(https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/2/html/object_gateway_guide_for_ubuntu/administration_cli#resharding-bucket-index),
so that I can delete it. The process starts and runs
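For reference, the offline reshard step in that guide boils down to a single radosgw-admin call; a rough sketch (bucket name and shard count are hypothetical, sized at roughly 100k index entries per shard):

import subprocess

bucket = "big-bucket"   # hypothetical name; the real bucket holds 13M+ objects
num_shards = 131        # rule of thumb: ~100k index entries per shard

# Offline reshard as described in the RHCS 2 guide linked above.
subprocess.run(
    ["radosgw-admin", "bucket", "reshard",
     "--bucket=%s" % bucket, "--num-shards=%d" % num_shards],
    check=True,
)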
Similar to Dan's situation we utilize the --cluster name concept for our
operations. Primarily for "datamover" nodes which do incremental rbd
import/export between distinct clusters. This is entirely coordinated by
utilizing the --cluster option throughout.
The way we set it up is that all cluster
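A rough sketch of what one such datamover pass can look like, driving the rbd CLI with --cluster on each side (cluster, pool, image and snapshot names are hypothetical):

import subprocess

# export-diff from the source cluster, piped into import-diff on the destination.
src = ["rbd", "--cluster", "prod", "export-diff",
       "--from-snap", "snap1", "rbd/vm-disk@snap2", "-"]
dst = ["rbd", "--cluster", "dr", "import-diff", "-", "rbd/vm-disk"]

exporter = subprocess.Popen(src, stdout=subprocess.PIPE)
importer = subprocess.Popen(dst, stdin=exporter.stdout)
exporter.stdout.close()   # so the importer sees EOF when the export finishes
importer.communicate()
exporter.wait()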
I ran a cluster with 2 generations of the same vendor's hardware: 24-OSD
Supermicro nodes and 32-OSD Supermicro nodes (with faster/more RAM and CPU cores). The
cluster itself ran decently well, but the load difference was drastic
between the 2 types of nodes. It required me to run the cluster with 2
separate c
On Thu, Jun 8, 2017 at 3:54 PM, Sage Weil wrote:
> On Thu, 8 Jun 2017, Bassam Tabbara wrote:
>> Thanks Sage.
>>
>> > At CDM yesterday we talked about removing the ability to name your ceph
>> > clusters.
>>
>> Just to be clear, it would still be possible to run multiple ceph
>> clusters on the sam
On 06/09/2017 06:41 AM, Benjeman Meekhof wrote:
> Hi Sage,
>
> We did at one time run multiple clusters on our OSD nodes and RGW
> nodes (with Jewel). We accomplished this by putting code in our
> puppet-ceph module that would create additional systemd units with
> appropriate CLUSTER=name enviro
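For anyone curious, a rough sketch of what such a unit can look like for a hypothetical second cluster named "flux" (the ExecStart line mirrors the stock ceph-osd@.service; only the CLUSTER value differs):

# /etc/systemd/system/ceph-flux-osd@.service  (hypothetical path and name)
[Unit]
Description=Ceph object storage daemon (cluster "flux")
After=network-online.target local-fs.target
Wants=network-online.target local-fs.target

[Service]
Environment=CLUSTER=flux
ExecStart=/usr/bin/ceph-osd -f --cluster ${CLUSTER} --id %i --setuser ceph --setgroup ceph
Restart=on-failure

[Install]
WantedBy=multi-user.target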
Hi Bryan,
On Fri, Jun 9, 2017 at 1:55 AM, Bryan Stillwell wrote:
> This has come up quite a few times before, but since I was only working with
> RBD before I didn't pay too close attention to the conversation. I'm
> looking
> for the best way to handle existing clusters that have buckets with a
Hello,
Brad Hubbard wrote:
: I can reproduce this.
[...]
: That's where you will notice it is returning EBUSY, which is error
: code 16, "Device or resource busy".
:
:
https://github.com/badone/ceph/blob/wip-ceph_test_admin_socket_output/src/cls/lock/cls_lock.cc#L189
:
: In order t
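The same EBUSY can be provoked from the client side; a minimal sketch with the python rados bindings (pool, object and lock names are made up), where the second exclusive lock attempt is refused with error 16:

import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
ioctx = cluster.open_ioctx("rbd")          # any pool; "rbd" is just an example
try:
    ioctx.lock_exclusive("some-object", "mylock", "cookie-1")
    # A second exclusive lock with a different cookie is refused by cls_lock
    # with EBUSY (16) -- the same path as the code linked above.
    ioctx.lock_exclusive("some-object", "mylock", "cookie-2")
except rados.Error as e:
    print("second lock refused:", e)
finally:
    ioctx.unlock("some-object", "mylock", "cookie-1")
    ioctx.close()
    cluster.shutdown()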