On 03/09/13 15:25, Yehuda Sadeh wrote:
Boto prog:
#!/usr/bin/python
import boto
import boto.s3.connection

access_key = 'X5E5BXJHCZGGII3HAWBB'
secret_key = '' # redacted
conn = boto.connect_s3(
    aws_access_key_id = access_key,
    aws_secret_access_key = secret_key,
    host = 'objects.example.com',  # radosgw endpoint (placeholder)
    calling_format = boto.s3.connection.OrdinaryCallingFormat(),
)
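With the connection established, creating or listing buckets is then a
one-liner each (the bucket name here is just an example):

bucket = conn.create_bucket('my-new-bucket')
for b in conn.get_all_buckets():
    print b.name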
Why wait for the data to migrate away? Normally you have replicas of the
whole osd data, so you can simply stop the osd, reformat the disk and restart
it again. It'll rejoin the cluster and automatically fetch all the data it's missing.
Of course the risk of data loss is a bit higher during that window.
hi,
I want to know how to get the hash-of-header-and-secret. I read
http://ceph.com/docs/master/radosgw/s3/authentication/ , but I still don't
understand, and I hope I can get an example. If I want to PUT a bucket for the user
{user_id:johndoe,rados_uid:0,display_name:John
hello!
I've tried with wip-6078 and git dumpling builds and got the same error
during OPTIONS request.
curl -v -X OPTIONS -H 'Access-Control-Request-Method: PUT' -H 'Origin:
http://X.pl' http://static3.X.pl/
OPTIONS / HTTP/1.1
User-Agent: curl/7.31.0
Host: static3.X.pl
Accept:
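For reference, a CORS policy can be attached to the bucket with boto along
these lines (a sketch only; the bucket name and origin are placeholders, and
the connection is set up as in the boto example earlier in the thread):

import boto
from boto.s3.cors import CORSConfiguration

conn = boto.connect_s3()  # credentials/host for your radosgw as usual
bucket = conn.get_bucket('static3')
cors = CORSConfiguration()
cors.add_rule(['PUT', 'GET'], 'http://X.pl')  # allowed methods and origin
bucket.set_cors(cors)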
Thanks for your answer.
Regards
Dominik
On Aug 30, 2013 4:59 PM, Yehuda Sadeh yeh...@inktank.com wrote:
On Fri, Aug 30, 2013 at 7:44 AM, Dominik Mostowiec
dominikmostow...@gmail.com wrote:
(echo -n 'GET /dysk/files/test.test%
On 09/03/2013 02:02 AM, 이주헌 wrote:
Hi all.
I have 1 MDS and 3 OSDs. I installed them via ceph-deploy. (dumpling
0.67.2 version)
At first, it worked perfectly. But after I rebooted one of the OSDs, ceph-mon
launched on port 6800 instead of 6789.
This has been a recurrent issue I've been completely unable to reproduce.
Time skews happen frequently when the systems running monitors are
restarted. With an ntp server configured, the time skew between systems will be
fixed over time. But the ceph monitors won't notice it immediately if there are
no time check messages at that moment, so the ceph status will still report the skew
On 03.09.2013 14:56, Joao Eduardo Luis wrote:
On 09/03/2013 02:02 AM, 이주헌 wrote:
Hi all.
I have 1 MDS and 3 OSDs. I installed them via ceph-deploy. (dumpling
0.67.2 version)
At first, it worked perfectly. But after I rebooted one of the OSDs, ceph-mon
launched on port 6800 instead of 6789.
This has been
On Mon, 2 Sep 2013, Jens-Christian Fischer wrote:
Hi all
we have a Ceph cluster with 64 OSD drives in 10 servers. We originally
formatted the OSDs with btrfs but have had numerous problems (server kernel
panics) that we could trace back to btrfs. We are therefore in the process of
On Sun, 1 Sep 2013, Gaylord Holder wrote:
I created a pool with no replication and an RBD within that pool. I mapped
the RBD to a machine, formatted it with a file system and dumped data on it.
Just to see what kind of trouble I could get into, I stopped the OSD the RBD was
using, marked
Awesome Sage!
I knew I had lost data. I'm trying to find out what will happen when
the worst happens (like the ceph administrator is an idiot).
So those PGs are hanging around in an OSD/pool somewhere with some kind
of reference count, and they just need to be recreated?
Thanks again for
On Tue, Sep 3, 2013 at 3:40 AM, Pawel Stefanski pejo...@gmail.com wrote:
hello!
I've tried with wip-6078 and git dumpling builds and got the same error
during OPTIONS request.
curl -v -X OPTIONS -H 'Access-Control-Request-Method: PUT' -H 'Origin:
http://X.pl' http://static3.X.pl/
On 03.09.2013, at 16:27, Sage Weil s...@inktank.com wrote:
ceph osd create # this should give you back the same osd number as the one
you just removed
OSD=`ceph osd create` # may or may not be the same osd id
good point - so far it has been good to us!
umount ${PART}1
parted $PART
We are testing radosgw with Cyberduck; so far we see the following issues:
1. In the Apache error log, for each file PUT we see:
[Tue Sep 03 17:35:24 2013] [warn] FastCGI: 193.218.104.138 PUT
https://193.218.100.131/test/tesfile04.iso auth AWS ***
[Tue Sep 03 17:35:24 2013] [warn] FastCGI: JJJ
On 09/03/2013 02:35 PM, Da Chun Ng wrote:
How often are the time check messages sent between monitors?
The monitors will perform a timecheck every 5 minutes, by default.
You can adjust this interval using 'mon timecheck interval = SECS'.
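For example, a minimal ceph.conf sketch keeping the default value:

[mon]
    mon timecheck interval = 300  # seconds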
On Tue, Sep 3, 2013 at 7:20 AM, Maciej Gałkiewicz
mac...@shellycloud.com wrote:
Hi
I have recently discovered that one of my PGs is in an inconsistent state. I
have checked the filesystem on osd.3 and re-run deep-scrub a few times. The OSD
uses xfs. Any suggestions on how to fix it?
You can use repair: 'ceph pg repair <pgid>'.
On Mon, Sep 2, 2013 at 5:09 AM, Jens-Christian Fischer
jens-christian.fisc...@switch.ch wrote:
We have a ceph cluster with 64 OSD (3 TB SATA) disks on 10 servers, and run
an OpenStack cluster.
We are planning to move the images of the running VM instances from the
physical machines to CephFS.
On Tue, Sep 3, 2013 at 9:13 AM, Fuchs, Andreas (SwissTXT)
andreas.fu...@swisstxt.ch wrote:
We are testing radosgw with Cyberduck; so far we see the following issues:
1. In the Apache error log, for each file PUT we see:
[Tue Sep 03 17:35:24 2013] [warn] FastCGI: 193.218.104.138 PUT
Now I'm trying to clear the stale PGs. I've tried removing the OSD from the
CRUSH map, the OSD lists etc., without any luck.
Note that this means that you destroyed all copies of those 3 PGs, which
means this experiment lost data.
You can make ceph recreate the PGs (empty!) with 'ceph pg force_create_pg <pgid>'.
On Tue, Sep 3, 2013 at 2:30 PM, Derek Yarnell de...@umiacs.umd.edu wrote:
Hi,
So say usera has full control (and is the owner) of a bucket in S3 and
gives userb 'FULL_CONTROL' on the bucket. Userb writes a file, and it
seems that by default the ACL for that key is going to be 'FULL_CONTROL'
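If the intent is for the bucket owner to keep access to userb's objects, userb
can upload with a canned ACL. A minimal boto sketch (bucket and key names are
placeholders; it's worth verifying radosgw honors this canned ACL):

import boto

conn = boto.connect_s3()  # userb's credentials, radosgw endpoint as usual
bucket = conn.get_bucket('somebucket')
key = bucket.new_key('somefile')
key.set_contents_from_string('some data')
# Grant the bucket owner full control over this object:
key.set_canned_acl('bucket-owner-full-control')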
Hello Guys:
I am working with ceph nowadays and I want to use the ceph object gateway.
So, I tried to use the S3 API to fulfill my needs. I used the API to create a
bucket.
PUT /{bucket} HTTP/1.1
Host: kp
x-amz-acl: public-read-write
Authorization: AWS {access-key}:{hash-of-header-and-secret}
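The hash-of-header-and-secret is a base64-encoded HMAC-SHA1 over a canonical
string built from the request, keyed with the secret key. A minimal Python
sketch for the PUT above (date, keys and bucket name are placeholders):

import base64, hmac, hashlib

secret_key = '{secret-key}'
string_to_sign = ('PUT\n'     # HTTP verb
                  '\n'        # Content-MD5 (empty here)
                  '\n'        # Content-Type (empty here)
                  'Tue, 03 Sep 2013 17:00:00 GMT\n'  # Date header
                  'x-amz-acl:public-read-write\n'    # canonicalized x-amz- headers
                  '/{bucket}')                       # canonicalized resource

signature = base64.b64encode(
    hmac.new(secret_key, string_to_sign, hashlib.sha1).digest())
print 'Authorization: AWS %s:%s' % ('{access-key}', signature)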