Hi Andreas,
I think we're both working on the same thing; I've just changed the
function calls over to rsockets in the source instead of using the pre-load
library. That explains why we're having the exact same problem!
From what I've been able to tell, the entire problem revolves around
rsockets
I have 2 issues that I cannot find a solution to.
First: I am unable to stop/start any OSD by command. I have deployed with
ceph-deploy on Ubuntu 13.04 and everything seems to be working fine. I have 5
hosts, 5 mons and 20 OSDs.
Using initctl list | grep ceph gives me
ceph-mds-all-starter
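For reference, with the upstart jobs that ceph-deploy sets up on Ubuntu 13.04,
a single daemon can usually be stopped and started like this (id=1 is only an
example):
sudo stop ceph-osd id=1
sudo start ceph-osd id=1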
Hi,
The activity on our ceph cluster has gone up a lot. We are using exclusively
RBD storage right now.
Is there a tool/technique that could be used to find out which rbd images are
receiving the most activity (something like rbdtop)?
Thanks,
Jeff
--
Hi.
I have some problems with creating the journal on a separate disk using the
ceph-deploy osd prepare command.
When I try to execute the following command:
ceph-deploy osd prepare ceph001:sdaa:sda1
where:
sdaa - disk for ceph data
sda1 - partition on an SSD drive for the journal
I get the following errors:
On Mon, Aug 12, 2013 at 03:19:04PM +0200, Jeff Moskow wrote:
Hi,
The activity on our ceph cluster has gone up a lot. We are using exclusively
RBD storage right now.
Is there a tool/technique that could be used to find out which rbd images are
receiving the most activity (something like
On 08/12/2013 03:19 PM, Jeff Moskow wrote:
Hi,
The activity on our ceph cluster has gone up a lot. We are using exclusively
RBD storage right now.
Is there a tool/technique that could be used to find out which rbd images are
receiving the most activity (something like rbdtop)?
Are you
Hi,
It seems my OSD processes keep crashing randomly and I don't
know why. It seems to happen when the cluster is trying to
re-balance... In normal usage I didn't notice any crashes like that.
We're running ceph 0.61.7 on an up-to-date Ubuntu 12.04 (all packages
Hi Matthew,
I am not quite sure about the POLLRDHUP.
On the server side (ceph-mon), tcp_read_wait does see the
POLLHUP - which should be the indicator that the
other side is shutting down.
I have also taken a brief look at the client side (ceph mon stat).
It initiates a shutdown - but never
Hi All,
Before going into the issue description, here are our hardware configurations:
- Physical machine * 3: each has quad-core CPU * 2, 64+ GB RAM, HDD * 12
(500GB ~ 1TB per drive; 1 for system, 11 for OSD). ceph OSDs are on physical
machines.
- Each physical machine runs 5 virtual machines. One VM
Can you post more of the log? There should be a line towards the bottom
indicating the line with the failed assert. Can you also attach ceph pg
dump, ceph osd dump, ceph osd tree?
-Sam
On Mon, Aug 12, 2013 at 11:54 AM, John Wilkins john.wilk...@inktank.com wrote:
Stephane,
You should post
Did you try using ceph-deploy disk zap ceph001:sdaa first?
-Sam
On Mon, Aug 12, 2013 at 6:21 AM, Pavel Timoschenkov
pa...@bayonetteas.onmicrosoft.com wrote:
Hi.
I have some problems with creating the journal on a separate disk using the
ceph-deploy osd prepare command.
When I try to execute the following command:
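For reference, the sequence Sam is suggesting looks like this (using the host
and disk names from the original report):
ceph-deploy disk zap ceph001:sdaa
ceph-deploy osd prepare ceph001:sdaa:sda1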
Can you elaborate on what behavior you are looking for?
-Sam
On Fri, Aug 9, 2013 at 4:37 AM, Georg Höllrigl
georg.hoellr...@xidras.com wrote:
Hi,
I'm using ceph 0.61.7.
When using ceph-fuse, I couldn't find a way to mount only one pool.
Is there a way to mount a pool - or is it simply not
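One note: ceph-fuse mounts the filesystem rather than a pool, but it can
restrict the mount to a subtree with -r; a sketch, assuming a monitor at
mon-host and an existing subdirectory:
ceph-fuse -m mon-host:6789 -r /only/this/dir /mnt/ceph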
Can you attach the output of ceph osd tree?
Also, can you run
ceph osd getmap -o /tmp/osdmap
and attach /tmp/osdmap?
-Sam
On Fri, Aug 9, 2013 at 4:28 AM, Jeff Moskow j...@rtr.com wrote:
Thanks for the suggestion. I had tried stopping each OSD for 30 seconds,
then restarting it, waiting 2
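As an aside, a map grabbed that way can also be inspected offline with
osdmaptool, e.g.:
osdmaptool --print /tmp/osdmap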
I have referred you to someone more conversant with the details of
mkcephfs, but for dev purposes, most of us use the vstart.sh script in
src/ (http://ceph.com/docs/master/dev/).
-Sam
On Fri, Aug 9, 2013 at 2:59 AM, Nulik Nol nulik...@gmail.com wrote:
Hi,
I am configuring a single node for
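For reference, a typical vstart.sh session from a built source tree looks
roughly like this (the daemon counts are only an example):
cd src
MON=3 OSD=3 MDS=1 ./vstart.sh -n -d
./ceph -s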
Sam,
I've attached both files.
Thanks!
Jeff
On Mon, Aug 12, 2013 at 01:46:57PM -0700, Samuel Just wrote:
Can you attach the output of ceph osd tree?
Also, can you run
ceph osd getmap -o /tmp/osdmap
and attach /tmp/osdmap?
-Sam
On Fri, Aug 9, 2013 at 4:28 AM, Jeff
On 08/08/13 15:21, Craig Lewis wrote:
I've seen a couple of posts here about broken clusters that had to be repaired
by modifying the monmap, osdmap, or the crush rules.
The old-school sysadmin in me says it would be a good idea to make
backups of these 3 databases. So far though, it seems like
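For what it's worth, all three can be exported with standard commands; a
minimal sketch (paths are only an example):
ceph mon getmap -o /backup/monmap
ceph osd getmap -o /backup/osdmap
ceph osd getcrushmap -o /backup/crushmap
crushtool -d /backup/crushmap -o /backup/crushmap.txt
(crushtool -d decompiles the crush map into readable text.)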
Are you using any kernel clients? Will osds 3,14,16 be coming back?
-Sam
On Mon, Aug 12, 2013 at 2:26 PM, Jeff Moskow j...@rtr.com wrote:
Sam,
I've attached both files.
Thanks!
Jeff
On Mon, Aug 12, 2013 at 01:46:57PM -0700, Samuel Just wrote:
Can you attach the output of
I think the docs you are looking for are
http://ceph.com/docs/master/man/8/cephfs/ (specifically the set_layout
command).
-Sam
On Thu, Aug 8, 2013 at 7:48 AM, Da Chun ng...@qq.com wrote:
Hi list,
I saw the info about data striping in
http://ceph.com/docs/master/architecture/#data-striping .
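For reference, the set_layout command from that man page is invoked roughly
like this (the path and values are only an example; a file's layout must be
set before it contains data):
cephfs /mnt/ceph/somefile set_layout -u 4194304 -c 4 -s 4194304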
On 08/12/2013 04:49 AM, Joshua Young wrote:
I have 2 issues that I cannot find a solution to.
First: I am unable to stop/start any OSD by command. I have deployed
with ceph-deploy on Ubuntu 13.04 and everything seems to be working
fine. I have 5 hosts, 5 mons and 20 OSDs.
Using initctl
Can you attach the output of:
ceph -s
ceph pg dump
ceph osd dump
and run
ceph osd getmap -o /tmp/osdmap
and attach /tmp/osdmap?
-Sam
On Wed, Aug 7, 2013 at 1:58 AM, Howarth, Chris chris.howa...@citi.com wrote:
Hi,
One of our OSD disks failed on a cluster and I replaced it, but when it
Can you give a step by step account of what you did prior to the error?
-Sam
On Tue, Aug 6, 2013 at 10:52 PM, 於秀珠 yuxiu...@jovaunn.com wrote:
Using ceph-deploy to manage an existing cluster, I followed the steps in the
document, but there were some errors and I could not gather the keys.
When I
Sam,
3, 14 and 16 have been down for a while and I'll eventually replace
those drives (I could do it now)
but didn't want to introduce more variables.
We are using RBD with Proxmox, so I think the answer about kernel
clients is yes
Jeff
On Mon, Aug 12, 2013 at 02:41:11PM
Following a discussion we had today on #ceph, I've added some extra
functionality to 'ceph-monstore-tool' to allow copying the data out of a
store into a new mon store; it can be found on branch wip-monstore-copy.
Using it as
ceph-monstore-tool --mon-store-path mon-data-dir --out
You saved me a bunch of time; I was planning to test my backup and
restore later today. Thanks!
It occurred to me that the backups won't be as useful as I thought. I'd
need to make sure that the PGs hadn't moved around after the backup was
made. If they had, I'd spend a lot of time
On 08/12/2013 10:19 AM, PJ wrote:
Hi All,
Before going into the issue description, here are our hardware configurations:
- Physical machine * 3: each has quad-core CPU * 2, 64+ GB RAM, HDD * 12
(500GB ~ 1TB per drive; 1 for system, 11 for OSD). ceph OSDs are on
physical machines.
- Each physical
Hello community,
I am currently installing some backup servers with 6x3TB drives in them. I
played with RAID-10, but I was not impressed at all with how it performs
during a recovery.
Anyway, I thought: what if, instead of RAID-10, I use ceph? All 6 disks will
be local, so I could simply create
6
I've got PGs that have been stuck for a long time and don't know how to fix
them. Can someone help check?
Environment: Debian 7 + ceph 0.61.7
root@ceph-admin:~# ceph -s
health HEALTH_WARN 6 pgs stuck unclean
monmap e2: 2 mons at {a=192.168.250.15:6789/0,b=192.168.250.8:6789/0},
election epoch 8,
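For diagnosis, it usually helps to see exactly which PGs are stuck and in
what state, e.g.:
ceph health detail
ceph pg dump_stuck unclean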
On 08/12/2013 06:49 PM, Dmitry Postrigan wrote:
Hello community,
I am currently installing some backup servers with 6x3TB drives in them. I
played with RAID-10, but I was not impressed at all with how it performs
during a recovery.
Anyway, I thought: what if, instead of RAID-10, I use ceph? All
Joao,
(log file uploaded to http://pastebin.com/Ufrxn6fZ)
I had some good luck and some bad luck. I copied the store.db to a new monitor,
injected a modified monmap, and started it up (this is all on the same host).
Very quickly it reached quorum (as far as I can tell), but it didn't respond.
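For context, modifying and injecting a monmap generally goes along these
lines (a sketch with hypothetical mon names and paths; the monitor must be
stopped while injecting):
ceph mon getmap -o /tmp/monmap
monmaptool --rm b /tmp/monmap
ceph-mon -i a --inject-monmap /tmp/monmap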