Is anyone else hitting this? Any help on this is much appreciated.
-----Original Message-----
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Pavan Rallabhandi
Sent: Saturday, February 28, 2015 11:42 PM
To: ceph-us...@ceph.com
Subject: [ceph-users] RGW
Hi Robert,
it seems I have not listened well to your advice - I set the OSD to out
instead of stopping it - and now, instead of ~3% of objects degraded,
there are 0.000% degraded and around 6% misplaced - and rebalancing
is happening again, but this is a small percentage.
Do you know if
On 03/05/2015 07:14 PM, Brian Rak wrote:
Do any of the Ceph repositories run rsync? We generally mirror the
repository locally so we don't encounter any unexpected upgrades.
eu.ceph.com used to run this, but it seems to be down now.
# rsync rsync://eu.ceph.com
rsync: failed to connect to
The metadata api can do it:
GET /admin/metadata/user
Yehuda
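For illustration, here is a minimal stdlib-only sketch of signing such an admin-ops request. RGW admin endpoints use S3-style (AWS signature v2) authentication; the access/secret keys and host below are hypothetical, and the calling user needs the appropriate admin caps:

```python
import base64
import hmac
from hashlib import sha1
from email.utils import formatdate

def sign_admin_request(method, resource, access_key, secret_key, date=None):
    """Build the Date and S3-style (AWS v2) Authorization headers that
    radosgw expects on admin-ops requests (empty Content-MD5/Type)."""
    if date is None:
        date = formatdate(usegmt=True)
    # Canonical string: method, content-md5, content-type, date, resource
    string_to_sign = "%s\n\n\n%s\n%s" % (method, date, resource)
    mac = hmac.new(secret_key.encode(), string_to_sign.encode(), sha1)
    signature = base64.b64encode(mac.digest()).decode()
    return {"Date": date, "Authorization": "AWS %s:%s" % (access_key, signature)}

# Hypothetical credentials of a user with metadata admin caps; send the
# resulting headers with: GET http://rgw.example.com/admin/metadata/user
headers = sign_admin_request("GET", "/admin/metadata/user",
                             "MYACCESSKEY", "MYSECRET")
```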
- Original Message -
From: Joshua Weaver joshua.wea...@ctl.io
To: ceph-us...@ceph.com
Sent: Thursday, March 5, 2015 1:43:33 PM
Subject: [ceph-users] rgw admin api - users
According to the docs at
Hi,
I am a newbie to Ceph and the ceph-users group. Recently I have been working
on a Ceph client. It worked in all the environments, but when I tested it on
production, it is not able to connect to Ceph.
Following are the operating system details and the error. If someone has seen
this problem before,
Setting an OSD out will start the rebalance with the degraded object count.
The OSD is still alive and can participate in the relocation of the
objects. This is preferable so that you don't happen to drop below
min_size because a disk fails during the rebalance, at which point I/O
stops on the cluster.
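A toy illustration of that reasoning (the pool values are hypothetical, for a 3-replica pool with min_size=2): with the OSD merely marked out, it still serves its copy during the rebalance, so one disk failure leaves enough live replicas; with it stopped first, the same failure drops a PG below min_size and blocks I/O:

```python
def io_blocked(live_replicas, min_size):
    """Ceph blocks I/O to a PG when fewer than min_size replicas are live."""
    return live_replicas < min_size

MIN_SIZE = 2  # hypothetical pool setting (size=3, min_size=2)

# OSD marked out but still running: its copy plus one healthy copy survive
# a single disk failure mid-rebalance.
print(io_blocked(live_replicas=2, min_size=MIN_SIZE))  # False: I/O continues

# OSD stopped before the data was drained: the same failure leaves one copy.
print(io_blocked(live_replicas=1, min_size=MIN_SIZE))  # True: I/O blocks
```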
Thanks a lot Robert.
I have actually already tried the following:
a) set one OSD to out (6% of data misplaced, Ceph recovered fine), stopped the
OSD, removed the OSD from the CRUSH map (again 36% of data misplaced !!!) - then
inserted the OSD back into the CRUSH map - and those 36% misplaced objects
disappeared, of course -
Hi David,
Mind sending me the output of ceph pg dump -f json?
thanks!
Mark
On 03/05/2015 12:52 PM, David Burley wrote:
Mark,
It worked for me earlier this morning but the new rev is throwing a
traceback:
$ ceph pg dump -f json | python ./readpgdump.py pgdump_analysis.txt
dumped all in
Hello guys,
In the adminops documentation I saw how to remove a bucket, but I can’t find the
URI to create one. I’d like to know if this is possible?
Regards.
Italo Santos
http://italosantos.com.br/
___
ceph-users mailing list
Hi,
I'm sorry to revive my post, but I can't solve my problems
and I don't see anything in the log. I have tried with the Hammer version
and I found the same phenomenon.
In fact, first, I tried the same installation (i.e. the same
conf via puppet) as my cluster, but in a VirtualBox environment,
and I have
What did you mean when you say "ceph client"?
The log piece that you posted seems to be about the kernel that you are
using not supporting some features of Ceph. Try to update your kernel if
your 'client' is a RADOS Block Device client.
06.03.2015 00:48, Sonal Dubey wrote:
Hi,
I am newbie for ceph, and
According to the docs at
http://docs.ceph.com/docs/master/radosgw/adminops/#get-user-info
I should be able to invoke /admin/user without a uid specified, and get a list
of users.
No matter what I try, I get a 403.
After looking at the source at github (ceph/ceph), it appears that there isn’t
- Original Message -
From: Daniel Schneller daniel.schnel...@centerdevice.com
To: ceph-users@lists.ceph.com
Sent: Tuesday, March 3, 2015 2:54:13 AM
Subject: [ceph-users] Understand RadosGW logs
Hi!
After realizing the problem with log rotation (see
Hi All,
Just a heads up after a day's experimentation.
I believe tgt with its default settings has a small write cache when
exporting a kernel-mapped RBD. Doing some write tests, I saw 4 times the
write throughput when using tgt aio + krbd compared to tgt with the
built-in librbd.
After
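For reference, the two setups being compared might look roughly like this in /etc/tgt/targets.conf (target names, device, and pool/image are hypothetical; check your tgt build for the exact bs-type names it supports):

```
# tgt + aio on a kernel-mapped RBD (the faster combination reported above)
<target iqn.2015-03.com.example:krbd0>
    backing-store /dev/rbd0    # image mapped with 'rbd map' beforehand
    bs-type aio
</target>

# tgt with the built-in librbd backend, no kernel mapping
<target iqn.2015-03.com.example:librbd0>
    backing-store rbd/myimage  # pool/image, hypothetical names
    bs-type rbd
</target>
```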
On 03/05/2015 03:40 AM, Josh Durgin wrote:
It looks like your libvirt rados user doesn't have access to whatever
pool the parent image is in:
librbd::AioRequest: write 0x7f1ec6ad6960
rbd_data.24413d1b58ba.0186 1523712~4096 should_complete: r = -1
-1 is EPERM, for operation not permitted
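As an aside, the negative return codes in librbd/librados log lines are just negated errno values; a quick stdlib sketch for decoding them:

```python
import errno
import os

def describe_rc(r):
    """Map a negative librados/librbd return code to its errno name and text."""
    e = -r
    name = errno.errorcode.get(e, "UNKNOWN")
    return "%d is %s (%s)" % (r, name, os.strerror(e))

print(describe_rc(-1))   # -1 is EPERM (Operation not permitted)
print(describe_rc(-2))   # -2 is ENOENT (No such file or directory)
```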
Hello Ketor,
More than a year ago, I needed a free DFS that could be used in an AIX
environment as a tiered storage solution for a bank DC; that is why I
started the project.
This project just ports the Linux-kernel CephFS client to the AIX kernel
(maybe RBD in the future), so it is a kernel-mode AIX CephFS.
But I have multiple
Thank you all for all the good advice and the much-needed documentation.
I have a lot to digest :)
Adrian
On 03/04/2015 08:17 PM, Stephen Mercier wrote:
To expand upon this, the very nature and existence of Ceph is to replace
RAID. The FS itself replicates data and handles the HA functionality
that
I use reposync to keep mine updated when needed.
Something like:
cd ~/ceph/repos
reposync -r Ceph -c /etc/yum.repos.d/ceph.repo
reposync -r Ceph-noarch -c /etc/yum.repos.d/ceph.repo
reposync -r elrepo-kernel -c /etc/yum.repos.d/elrepo.repo
Michael Kuriger
Sr. Unix Systems Engineer
S
Hi Blair,
I've updated the script and it now (theoretically) computes optimal
crush weights based on both primary and secondary acting set OSDs. It
also attempts to show you the efficiency of equal weights vs using
weights optimized for different pools (or all pools). This is done by
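I haven't seen the script itself, but the core idea can be sketched as follows (pure illustration, not the actual script): scale each OSD's CRUSH weight by the ratio of its expected to observed share of acting-set PG mappings, then renormalize:

```python
def optimized_weights(current_weights, pg_counts):
    """Scale each OSD's weight by expected/observed PG share, then
    renormalize so the total weight is unchanged. Illustrative only."""
    total_w = sum(current_weights.values())
    total_pgs = sum(pg_counts.values())
    new = {}
    for osd, w in current_weights.items():
        expected = total_pgs * w / total_w      # PGs this OSD should hold
        observed = pg_counts.get(osd, 0) or 1   # avoid divide-by-zero
        new[osd] = w * expected / observed
    scale = total_w / sum(new.values())
    return {osd: w * scale for osd, w in new.items()}

# Equal weights, but osd.1 ended up with more PGs than its fair share,
# so its optimized weight comes out lower than the others:
print(optimized_weights({0: 1.0, 1: 1.0, 2: 1.0}, {0: 90, 1: 120, 2: 90}))
```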
Mark,
It worked for me earlier this morning but the new rev is throwing a
traceback:
$ ceph pg dump -f json | python ./readpgdump.py pgdump_analysis.txt
dumped all in format json
Traceback (most recent call last):
File "./readpgdump.py", line 294, in <module>
parse_json(data)
File
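The traceback is truncated, so the actual cause is unclear, but one common gotcha when consuming `ceph pg dump -f json` is a status line such as "dumped all in format json" landing in the same stream as the JSON body. A defensive loader can skip such preamble (sketch only, not part of readpgdump.py):

```python
import json

def load_pg_dump(text):
    """Parse 'ceph pg dump -f json' output, skipping any non-JSON
    status lines (e.g. 'dumped all in format json') before the body."""
    start = text.find("{")
    if start < 0:
        raise ValueError("no JSON object found in pg dump output")
    return json.loads(text[start:])

sample = 'dumped all in format json\n{"pg_stats": [], "osd_stats": []}'
print(sorted(load_pg_dump(sample)))  # ['osd_stats', 'pg_stats']
```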
Do any of the Ceph repositories run rsync? We generally mirror the
repository locally so we don't encounter any unexpected upgrades.
eu.ceph.com used to run this, but it seems to be down now.
# rsync rsync://eu.ceph.com
rsync: failed to connect to eu.ceph.com: Connection refused (111)
rsync
Bump...
On 2015-03-03 10:54:13 +, Daniel Schneller said:
Hi!
After realizing the problem with log rotation (see
http://thread.gmane.org/gmane.comp.file-systems.ceph.user/17708)
and fixing it, I now for the first time have some
meaningful (and recent) logs to look at.
While from an
Hello,
Is there some way to make the client (via the RADOS API or something like
that) get a notification of an event (for example, an OSD going down)
happening in the cluster?
--
Den
Thank you all for such wonderful feedback.
Thank you to John Spray for putting me on the right track. I now see
that the CephFS aspect of the project is being de-emphasised, so that
the manual deployment instructions tell you how to set up the object
store, and CephFS is then a separate issue
David,
You will need to raise the limit on open files in the Linux system. Check
/etc/security/limits.conf. It is explained somewhere in the docs, and the
autostart scripts 'fix' the issue for most people. When I did a manual
deploy for the same reasons you are, I ran into this too.
Robert LeBlanc
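For reference, the relevant knobs look something like this (the values are hypothetical; size them for your OSD count and disk count). Either raise the limit system-wide in limits.conf, or set the option the init scripts apply in ceph.conf:

```
# /etc/security/limits.conf (for the user running the OSDs; values hypothetical)
root    soft    nofile    131072
root    hard    nofile    131072

# or in /etc/ceph/ceph.conf, applied by the init script at daemon start
[global]
    max open files = 131072
```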
hello everyone,
recently, I have a doubt about the ceph osd journal.
I use ceph-deploy (version 1.4.0) to add a new osd, and my ceph
version is 0.80.5.
/dev/sdb is a SATA disk and /dev/sdk is an SSD disk; the sdk1
partition size is 50G.
ceph-deploy osd prepare