Hello,
I am using the latest AWS PHP SDK to create a bucket.
Every time I attempt to do this, I see the following in the log:
2019-06-14 11:42:53.092 7fdff5459700 1 civetweb: 0x55c5450249d8: redacted - -
[14/Jun/2019:11:42:53 -0400] "PUT / HTTP/1.1" 405 405 - aws-sdk-php/3.100.3
GuzzleHttp/6.3.3 curl/7.29.0
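For what it's worth, a `PUT /` that answers 405 usually means the request hit the service root instead of `/bucketname`, i.e. the SDK used virtual-hosted-style addressing (bucket name in the Host header) against an rgw that doesn't recognize the domain. Two common fixes: tell the PHP SDK to use path-style requests (`'use_path_style_endpoint' => true` in the S3Client config), or set `rgw_dns_name` on the gateway. A hedged sketch of the latter; the section name and domain below are placeholders, not values from your setup:

```ini
# ceph.conf -- illustrative only; section name and domain are assumptions
[client.rgw.gateway]
rgw_dns_name = s3.example.com
```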
Hello,
I was able to get Nautilus running on my cluster.
When I try to log in to the dashboard with the user I created, even when I enter the correct
credentials, I see this in the log:
2019-06-06 12:51:43.738 7f373ec9b700 1 mgr[dashboard]
[:::192.168.105.1:56110] [GET] [401] [0.002s] [271B]
/api/settings/
Hello,
I built a tiny test cluster with Luminous using the CentOS storage repos.
I saw that they now have a Nautilus repo as well, but I can't find much
information on upgrading from one to the other.
Does it make sense to continue using the CentOS Storage repos, or should I just
switch to the official ones? Is there any way to safely switch the yum repo I am
using from the CentOS Storage repo to the official Ceph repo for RPMs, or should
I just rebuild it?
Thanks,
-Drew
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cg
wrong? And do you have any idea how I should debug next?
On Wed, Jul 4, 2018 at 5:23 PM <respo...@ifastnet.com> wrote:
Hi Drew,
Try to increase debugging with
debug ms = 1
debug rgw = 20
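In case it helps, those settings go in ceph.conf under the rgw daemon's own section, followed by a restart of the gateway. The section name below is an assumption; use whatever your rgw instance is actually called:

```ini
# ceph.conf -- section name is a placeholder for your rgw instance
[client.rgw.gateway]
debug ms = 1
debug rgw = 20
```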
Regards
Kev
----- Original Message -----
From: "Drew Weaver" <drew.wea
An application is having general failures writing to a test cluster we have
set up.
2018-07-02 23:13:26.128282 7fe00b560700 0 WARNING: set_req_state_err err_no=5
resorting to 500
2018-07-02 23:13:26.128460 7fe00b560700 1 == req done req=0x7fe00b55a110
op status=-5 http_status=500 ==
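For reference, the err_no=5 in that warning is the POSIX errno EIO ("Input/output error"); as I understand it, rgw falls back to an HTTP 500 when the underlying error has no closer HTTP equivalent, so checking cluster/OSD health for I/O errors is a reasonable next step. A quick stdlib check of the errno mapping (Python here just to illustrate):

```python
import errno
import os

# errno 5 is EIO on Linux, which is why the rgw log shows
# "err_no=5 resorting to 500".
print(errno.EIO, os.strerror(errno.EIO))  # 5 Input/output error
```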
20-06-21 14:54 GMT+02:00 Drew Weaver <drew.wea...@thenap.com>:
Does anyone know if it is possible to designate an OSD as a spare so that if a
disk dies in a host no administrative action needs to be immediately taken to
remedy the situation?
Thanks,
-Drew
Please note that the DC S3700/3710 was discontinued/EOL’d so it may not be a
great idea to use those in new deployments as supply will eventually dry up and
Intel apparently has no plans to offer a DC S4700 with similar endurance.
From: ceph-users On Behalf Of Nghia Than
Sent: Thursday, March
19446/16764 objects degraded (115.999%) <-- I noticed that number seems odd
I don't think that's normal!
40795/16764 objects degraded (243.349%) <-- Now I’m really concerned.
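Those numbers look alarming, but the arithmetic is at least internally consistent: the numerator counts degraded object *copies* while the denominator is the raw object count, so with replication each object can contribute several degraded copies and the percentage can exceed 100% (early 12.2.x also had known accounting bugs in this area). Reproducing the percentages from the counts in the log:

```python
# Reproduce the percentages printed by "ceph status" from the raw counts
# shown in the log above; each ratio is degraded copies / total objects.
for degraded, total in [(19446, 16764), (40795, 16764)]:
    print(f"{degraded}/{total} objects degraded ({degraded / total * 100:.3f}%)")
```

Both lines come out as 115.999% and 243.349%, matching the log, so the oddity is in what is being counted, not in the division.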
I'd recommend providing more info: Ceph version, bluestore or filestore,
crushmap, etc.
Hi, thanks for the reply.
12.2
Howdy,
I replaced a disk today because it was marked as Predicted failure. These were
the steps I took
ceph osd out osd.17
ceph -w  # waited for it to get done
systemctl stop ceph-osd@17
ceph osd purge 17 --yes-i-really-mean-it
umount /var/lib/ceph/osd/ceph-17
I noticed that after I ran th
s you are
using a hypervisor like ESXi, which only works with iSCSI/NFS, correct?
Thanks,
-Drew
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Drew
Weaver
Sent: Tuesday, July 15, 2014 9:03 AM
To: 'ceph-users@lists.ceph.com'
Subject: [ceph-users] Working ISC
Does anyone have a guide or reproducible method of getting multipath iSCSI
working in front of Ceph? Even if it just means having two front-end iSCSI
targets, each with access to the same underlying Ceph volume?
This seems like a super popular topic.
Thanks,
-Drew
Hi there,
I'm sure that the Ceph community was somewhat excited when Seagate released
their enterprise 6TB SAS/SATA hard drives recently; previously, the only other
6TB drives available to enterprises were the HGST helium ones, which
are nearly impossible to find unless you are buying
Does anyone know of any tools that help you visually monitor a Ceph cluster
automatically?
Something that is host-, OSD-, and mon-aware and shows the status of the
various components, etc.?
Thanks,
-Drew
Hi there,
I am getting to the Ceph party a little late, but I am trying to find out if any
work has already been done on automating the provisioning lifecycle of
users, etc. in radosgw.
A few days ago I started trying to write a PHP script that created a user using
the adminops API and I
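In case it saves someone the same research: the adminops endpoints accept the same S3-style V2 signature the data path uses, signed (as I understand it) over the resource path without the query string. A minimal sketch in Python rather than PHP; the credentials and endpoint below are hypothetical placeholders:

```python
import base64
import hashlib
import hmac
from email.utils import formatdate

# Hypothetical credentials -- use the keys of a radosgw user with admin caps.
ACCESS_KEY = "testkey"
SECRET_KEY = "testsecret"

def sign_v2(method: str, resource: str, date: str) -> str:
    # S3 V2 string-to-sign: verb, content-md5, content-type, date, resource.
    string_to_sign = f"{method}\n\n\n{date}\n{resource}"
    digest = hmac.new(SECRET_KEY.encode(), string_to_sign.encode(),
                      hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

date = formatdate(usegmt=True)  # e.g. "Fri, 14 Jun 2019 15:42:53 GMT"
signature = sign_v2("PUT", "/admin/user", date)
headers = {
    "Date": date,
    "Authorization": f"AWS {ACCESS_KEY}:{signature}",
}
# The request itself would then be something like:
#   PUT http://rgw.example.com/admin/user?uid=newuser&display-name=New+User
print(headers["Authorization"])
```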
You can actually just install it using the Ubuntu packages. I did it yesterday
on Trusty.
Thanks,
-Drew
From: ceph-users-boun...@lists.ceph.com
[mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Travis Rhoden
Sent: Friday, April 25, 2014 3:06 PM
To: ceph-users
Subject: [ceph-users] packag
Greetings, I got a Ceph test cluster set up this week and I thought it would be
neat if I could write a PHP script that let me start working with the adminops
API.
I did some research to figure out how to correctly 'authorize' in the AWS
fashion and wrote this little script.
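The "AWS fashion" here is signature version 2: base64(HMAC-SHA1(secret, string-to-sign)). A Python sketch that reproduces the worked example from Amazon's S3 REST authentication documentation, which is handy for checking a PHP implementation against a known-good signature:

```python
import base64
import hashlib
import hmac

# Worked example from the S3 REST auth docs (signature version 2).
secret = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
string_to_sign = (
    "GET\n"
    "\n"                                 # Content-MD5 (empty)
    "\n"                                 # Content-Type (empty)
    "Tue, 27 Mar 2007 19:36:42 +0000\n"  # Date header
    "/johnsmith/photos/puppy.jpg"        # CanonicalizedResource
)
signature = base64.b64encode(
    hmac.new(secret.encode(), string_to_sign.encode(), hashlib.sha1).digest()
).decode()
print(signature)  # bWq2s1WEIj+Ydj0vQ697zp+IXMU=
```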