Solved.
ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
The above two steps on the mon node solved the issue.
Thanks!
Regards,
santhosh
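For reference, a minimal sketch of the full keyring preparation this fix belongs to, following the standard Ceph manual-deployment steps (the paths and capability strings are assumptions based on the commands quoted above):

# create the mon. keyring and the admin keyring, then merge the admin keyring into the mon keyring
ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow'
ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring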
On Fri, Sep 5, 2014 at 11:05
I was waiting for the schedule; the topics seem interesting.
I'm going to register now :)
BTW, are the talks in French or English? (As I see loic, sebastian and yann
as speakers)
- Original Mail -
From: Patrick McGarry patr...@inktank.com
To: Ceph Devel ceph-de...@vger.kernel.org,
Hi,
Sorry for the lack of information yesterday; this was solved after some 30
minutes, after having reloaded/restarted all OSD daemons.
Unfortunately we couldn't pinpoint it to a single OSD or drive; all drives
seemed OK, some had a bit higher latency and we tried to out/in them to see
if
On Fri, 5 Sep 2014 13:46:17 +0800 Ding Dinghua wrote:
2014-09-05 13:19 GMT+08:00 Christian Balzer ch...@gol.com:
Hello,
On Fri, 5 Sep 2014 12:09:11 +0800 Ding Dinghua wrote:
Please see my comment below:
2014-09-04 21:33 GMT+08:00 Christian Balzer ch...@gol.com:
Hello,
On Fri, 5 Sep 2014 08:26:47 +0200 David wrote:
Hi,
Sorry for the lack of information yesterday; this was solved after
some 30 minutes, after having reloaded/restarted all OSD daemons.
Unfortunately we couldn't pinpoint it to a single OSD or drive; all
drives seemed OK, some had a
Dear Ceph,
Urgent question: I hit a FAILED assert(0 == unexpected error)
yesterday, and now I have no way to start these OSDs.
I have attached my logs, and some Ceph configuration settings are
below:
osd_pool_default_pgp_num = 300
osd_pool_default_size = 2
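For context, a minimal sketch of how such pool defaults usually sit in ceph.conf; the pg_num line is an assumption, added only because pgp_num is normally set to match pg_num:

[global]
osd_pool_default_pg_num = 300   # assumed to match pgp_num; not shown in the original mail
osd_pool_default_pgp_num = 300
osd_pool_default_size = 2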
I think it will be English, unless the audience is 100% French-speaking ;-)
On 05/09/2014 08:12, Alexandre DERUMIER wrote:
I was waiting for the schedule; the topics seem interesting.
I'm going to register now :)
BTW, are the talks in French or English? (As I see loic, sebastian and
Hi Christian,
On 05 Sep 2014, at 03:09, Christian Balzer ch...@gol.com wrote:
Hello,
On Thu, 4 Sep 2014 14:49:39 -0700 Craig Lewis wrote:
On Thu, Sep 4, 2014 at 9:21 AM, Dan Van Der Ster
daniel.vanders...@cern.ch wrote:
1) How often are DC S3700's failing in your deployments?
On Fri, Sep 5, 2014 at 5:46 PM, Dan Van Der Ster
daniel.vanders...@cern.ch wrote:
On 05 Sep 2014, at 03:09, Christian Balzer ch...@gol.com wrote:
You might want to look into cache pools (and dedicated SSD servers with
fast controllers and CPUs) in your test cluster and for the future.
Right
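For anyone curious what a cache pool setup roughly looks like, here is a sketch following the Firefly-era cache-tiering commands (pool names are placeholders):

ceph osd tier add rbd ssd-cache                # attach the cache pool to the backing pool
ceph osd tier cache-mode ssd-cache writeback   # absorb writes in the cache tier
ceph osd tier set-overlay rbd ssd-cache        # send client traffic through the cache pool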
On 05 Sep 2014, at 10:30, Nigel Williams nigel.d.willi...@gmail.com wrote:
On Fri, Sep 5, 2014 at 5:46 PM, Dan Van Der Ster
daniel.vanders...@cern.ch wrote:
On 05 Sep 2014, at 03:09, Christian Balzer ch...@gol.com wrote:
You might want to look into cache pools (and dedicated SSD servers
The only time I saw such behaviour was when I was deleting a big chunk of data
from the cluster: all client activity was reduced, the op/s were almost
non-existent, and there were unexplained delays all over the cluster. But all
the disks were somewhat busy in atop/iostat.
On 5 September 2014
On 05 Sep 2014, at 11:04, Christian Balzer ch...@gol.com wrote:
Hello Dan,
On Fri, 5 Sep 2014 07:46:12 + Dan Van Der Ster wrote:
Hi Christian,
On 05 Sep 2014, at 03:09, Christian Balzer ch...@gol.com wrote:
Hello,
On Thu, 4 Sep 2014 14:49:39 -0700 Craig Lewis wrote:
On 09/05/2014 02:16 PM, Yan, Zheng wrote:
On Fri, Sep 5, 2014 at 4:05 PM, Florent Bautista flor...@coppint.com wrote:
Firefly :) the latest release.
After a few days, the second MDS is still in the stopping state and
sometimes consuming CPU... :)
Try restarting the stopping MDS and run ceph mds stop 1 again.
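Roughly, assuming rank 1 is the MDS stuck in stopping, that suggestion would look something like this (the init command depends on the distribution):

service ceph restart mds        # restart the stopping ceph-mds daemon on its host
ceph mds stop 1                 # ask rank 1 to stop again
ceph mds dump                   # watch whether rank 1 finally leaves the stopping state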
No messages in dmesg. I've updated the two clients to 3.16; we'll see if
that fixes this issue.
On Fri, Sep 5, 2014 at 12:28 AM, Yan, Zheng uker...@gmail.com wrote:
On Fri, Sep 5, 2014 at 8:42 AM, James Devine fxmul...@gmail.com wrote:
I'm using 3.13.0-35-generic on Ubuntu 14.04.1
Was
Hi,
How do you guys monitor the cluster to find disks that behave badly, or
VMs that impact the Ceph cluster?
I'm looking for something that gives a good bird's-eye view of
latency/throughput, ideally using something simple like SNMP.
Regards,
Josef Johansson
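For a quick per-OSD latency view with nothing but the built-in tools, something like the following is a reasonable starting point (a sketch, not a full SNMP-style monitoring setup):

ceph osd perf        # per-OSD commit/apply latency in milliseconds
ceph -w              # live cluster log, including slow-request warnings
iostat -x 5          # per-disk utilisation and await on each OSD host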
Hi guys,
I have Ceph storage, Firefly v0.80.5, on a Debian server with kernel
3.2.0-4-amd64. We use the ceph-fs client on a Debian server with the
3.2.0-4-amd64 kernel too. I had to remove the HASHPSPOOL flag from all pools
so that it would be possible to mount Ceph on my client. Everything worked fine.
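For reference, a sketch of how that flag can be inspected and cleared per pool (the pool name is a placeholder, and the exact confirmation options vary between Ceph versions):

ceph osd dump | grep pool                      # lists each pool's flags, including hashpspool
ceph osd pool set <poolname> hashpspool false  # clear the flag so old kernel clients can use the pool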
Hello Cephers,
We created a Ceph cluster with 100 OSDs, 5 MONs and 1 MDS, and most of it
seems to be working fine, but we are seeing some degradation on the OSDs due to
lack of space. Is there a way to resize the OSDs without bringing
the cluster down?
--jiten
We ran into the same issue where we could not mount the filesystem on the
clients because they had kernel 3.9. Once we upgraded the kernel on the client node,
we were able to mount it fine. FWIW, you need kernel 3.14 or above.
--jiten
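For reference, a kernel-client mount of CephFS looks roughly like this (monitor address, mount point, and secret file are placeholders):

mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret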
On Sep 5, 2014, at 6:55 AM, James Devine fxmul...@gmail.com wrote:
Is there a way to resize the OSDs without bringing the cluster down?
What is the HEALTH state of your cluster?
If it's OK, you can simply replace the OSD disk with a bigger one.
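A rough sketch of that replacement flow, one OSD at a time (the ID is a placeholder, and there are several equally valid variants):

ceph osd out 12                  # drain the OSD and wait for rebalancing / HEALTH_OK
service ceph stop osd.12         # stop the daemon (init command depends on the distribution)
# physically swap in the bigger disk, recreate the OSD on it
# (e.g. with ceph-disk prepare/activate or ceph-deploy osd) and let it backfill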
- Original Mail -
From: JIten Shah jshah2...@me.com
To: ceph-us...@ceph.com
Sent: Saturday, 6 September 2014