Hi Greg,
Thank you for your concern.
It seems the problem was caused by ceph-mds. While the rest of the Ceph
modules had been upgraded to 0.61.8, ceph-mds was still at 0.56.7.
I've updated ceph-mds and the cluster stabilised within a few hours.
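For anyone chasing a similar mismatch: a quick way to spot it, assuming standard packages, is to ask each binary for its version directly:

# each daemon binary reports its own version; look for one lagging behind
ceph --version
ceph-mon --version
ceph-osd --version
ceph-mds --version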
Kind regards, Serge
On 08/30/2013 08:22 PM, Gregory Farnum wrote:
Thanks a lot, Josh. It will be very useful.
Regards
On 31/08/13 02:58, Josh Durgin wrote:
On 08/30/2013 03:40 AM, Toni F. [ackstorm] wrote:
Sorry, wrong list
Anyway, I'll take this opportunity to ask two questions:
Does somebody know how I can download an image or snapshot?
Cinder has no way to export one.
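If the volume lives in RBD (as it does in the usual Cinder setup), one workaround is to export the image or a snapshot of it directly with the rbd tool. A sketch only; the 'volumes' pool and the image/snapshot names are examples:

# export an RBD image to a local file
rbd export volumes/volume-0001 /tmp/volume-0001.img
# or export a specific snapshot of it
rbd export volumes/volume-0001@snap1 /tmp/volume-0001-snap1.img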
You can change the pg numbers on the fly with
ceph osd pool set {pool_name} pg_num {value}
ceph osd pool set {pool_name} pgp_num {value}
Reference: http://ceph.com/docs/master/rados/operations/pools/
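For example, to grow a pool named 'rbd' to 256 PGs (pool name and PG count are only illustrative), bump pg_num first and then pgp_num, so the new PGs actually get rebalanced:

ceph osd pool set rbd pg_num 256
ceph osd pool set rbd pgp_num 256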
From: ceph-users-boun...@lists.ceph.com
[mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
How do you test the random behaviour of the disks, and what's a good setup?
If I understand correctly, ceph writes in 4M blocks. I also expect a 50%/50% r/w ratio in
our workloads; what else do I have to take into consideration?
Also, what I don't yet understand: in my performance tests I get pretty nice rados
bench
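For what it's worth, one way to make the write and read numbers comparable at the 4M object size is the sketch below; the pool name and the numbers are examples, and --no-cleanup keeps the objects around for the read pass:

# 60 seconds of 4MB writes, 16 in flight
rados bench -p test 60 write -b 4194304 -t 16 --no-cleanup
# read the same objects back sequentially
rados bench -p test 60 seq -t 16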
Hi all
we have a Ceph Cluster with 64 OSD drives in 10 servers. We originally
formatted the OSDs with btrfs but have had numerous problems (server kernel
panics) that we could point back to btrfs. We are therefore in the process of
reformatting our OSDs to XFS. We have a process that works,
On 02.09.2013 11:37, Jens-Christian Fischer wrote:
we have a Ceph Cluster with 64 OSD drives in 10 servers. We originally
formatted the OSDs with btrfs but have had numerous problems (server kernel
panics) that we could point back to btrfs. We are therefore in the process of
reformatting our
Hi Jens,
On 2013-09-02 19:37, Jens-Christian Fischer wrote:
we have a Ceph Cluster with 64 OSD drives in 10 servers. We originally
formatted the OSDs with btrfs but have had numerous problems (server
kernel panics) that we could point back to btrfs. We are therefore in
the process of
Why wait for the data to migrate away? Normally you have replicas of the
whole OSD's data, so you can simply stop the OSD, reformat the disk and restart
it again. It'll join the cluster and automatically get all the data it's missing.
Of course the risk of data loss is a bit higher during that
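Roughly, that flow looks like the sketch below. The OSD id (12), the device (/dev/sdb) and the sysvinit commands are assumptions, so adapt them to your setup, and keep an eye on 'ceph -s' while the OSD is down:

service ceph stop osd.12
umount /var/lib/ceph/osd/ceph-12
mkfs.xfs -f /dev/sdb
mount /dev/sdb /var/lib/ceph/osd/ceph-12
# recreate the OSD's on-disk state; recovery backfills the objects afterwards
ceph-osd -i 12 --mkfs --mkkey
# with cephx enabled you may need to re-register the freshly generated key
ceph auth add osd.12 osd 'allow *' mon 'allow rwx' -i /var/lib/ceph/osd/ceph-12/keyring
service ceph start osd.12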
Hi Martin,
On 2013-09-02 19:37, Jens-Christian Fischer wrote:
we have a Ceph Cluster with 64 OSD drives in 10 servers. We originally
formatted the OSDs with btrfs but have had numerous problems (server kernel
panics) that we could point back to btrfs. We are therefore in the process
of
We have a ceph cluster with 64 OSD (3 TB SATA) disks on 10 servers, and run an
OpenStack cluster.
We are planning to move the images of the running VM instances from the
physical machines to CephFS. Our plan is to add 10 SSDs (one in each server)
and create a pool that is backed only by these
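Separating the SSDs usually means giving them their own CRUSH root and a rule that draws only from it. A sketch of the manual route (the pool name, PG counts and the rule id are made up; the actual edit happens in the decompiled map):

ceph osd getcrushmap -o crushmap
crushtool -d crushmap -o crushmap.txt
# edit crushmap.txt: add an 'ssd' root holding the SSD OSDs, plus a rule
# that only chooses from that root
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new
ceph osd pool create ssd-pool 128 128
ceph osd pool set ssd-pool crush_ruleset 4   # id of the new rule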
Dimitri Maziuk writes:
1) I read somewhere that it is recommended to have one OSD per disk in a
production environment.
Is this also the maximum number of disks per OSD, or could I use multiple disks per
OSD? And why?
You could use multiple disks for one OSD if you used some striping and
abstract
Oliver Daudey writes:
1) I read somewhere that it is recommended to have one OSD per disk in a
production environment.
Is this also the maximum number of disks per OSD, or could I use multiple disks
per OSD? And why?
You could use multiple disks for one OSD if you used some striping and
abstract
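For completeness, the striping variant would be something like the following; the device names are examples and this is only a sketch, since one OSD per disk remains the usual recommendation:

# stripe two disks into one block device, then build a single OSD filesystem on it
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc
mkfs.xfs /dev/md0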
We've installed ceph on a test cluster:
3x mon, 7x OSD on 2x 10k RPM SAS
CentOS 6.4 (2.6.32-358.14.1.el6.x86_64)
ceph 0.67.2 (also tried 0.61.7, with the same results)
And during rados bench I get very strange behaviour:
# rados bench -p pbench 100 write
sec Cur ops   started  finished  avg MB/s  cur MB/s  last lat   avg lat
On 30 August 2013 22:13, Stefan Priebe s.pri...@profihost.ag wrote:
Yes, that's correct. What I hate at this point is that you lower the SSD
speed by writing to the journal, then reading from the journal and writing to the SSD. Sadly there
is no option to disable the journal; I think for SSDs this would be best.
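Short of disabling it, the journal can at least be kept off the data filesystem. A ceph.conf sketch, with the partition path as an assumption:

[osd.0]
    # journal on a raw SSD partition instead of a file on the OSD's data disk
    osd journal = /dev/sdb1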
On 08/18/2013 07:11 PM, Oliver Daudey wrote:
Hey all,
Also created on the tracker, under http://tracker.ceph.com/issues/6047
Oliver, list,
We fixed this last week. Fixes can be found on wip-6047.
We shall merge this to mainline, and the patch will be backported to
dumpling.
Thanks
Only pgp_num is listed in the reference. Though pg_num can be changed in the
same way, is there any risk in doing that?
From: andreas.fu...@swisstxt.ch
To: dachun...@outlook.com; ceph-users@lists.ceph.com
Subject: RE: [ceph-users] Is it possible to change the pg number after adding
new osds?
Two days ago I increased it for one pool and tried to reduce it for others. Reducing
didn't work (for me? - the repair froze, but rolling back up worked fine);
increasing worked. My understanding now: pgp_num is a temporary parameter for changing
pg_num. Data is actually distributed over pgp_num, but allocated PGs
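Either way, it's worth confirming what the cluster actually applied (the pool name is an example):

# report the values currently in effect
ceph osd pool get rbd pg_num
ceph osd pool get rbd pgp_num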
I created a pool with no replication and an RBD within that pool. I
mapped the RBD to a machine, formatted it with a file system and dumped
data on it.
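For reference, that setup can be reproduced roughly like this (names and sizes are examples):

# single-replica pool with an RBD in it, mapped and formatted
ceph osd pool create testpool 64 64
ceph osd pool set testpool size 1
rbd create testpool/testimg --size 10240
rbd map testpool/testimg
mkfs.ext4 /dev/rbd/testpool/testimg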
Just to see what kind of trouble I can get into, I stopped the OSD the
RBD was using, marked the OSD as out, and reformatted the OSD tree.
Hi! I'm interested in the rgw geo-replication and disaster recovery feature.
But are those 'regions and zones' distributed among several different
ceph clusters, or within just one?
Thank you!
Hi,
I installed ceph 0.56.3 on Fedora 15. There is no rpm release for
fc15, so I built it from source:
# ./autogen.sh
When configuring I used
# ./configure --with-radosgw
and I installed ceph and rados successfully.
I followed the ceph document
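For reference, the usual from-source sequence is the one below, assuming the build dependencies are already installed (note the flag is --with-radosgw):

./autogen.sh
./configure --with-radosgw
make
# install as root
make install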
Hello All,
I have a simple test setup with 2 OSD servers, each with 3 NICs (1 Gb each):
* One for management (ssh and such)
* One for the public network (connected to ceph clients)
* One for the cluster (osd inter-connection)
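For that layout the relevant ceph.conf bits would look something like this (the subnets are examples for the public and cluster NICs):

[global]
    public network = 10.0.1.0/24
    cluster network = 10.0.2.0/24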
I keep seeing these messages:
Aug 26 18:43:31 ceph01 ceph-osd: 2013-08-26
Hi all.
I have 1 MDS and 3 OSDs. I installed them via ceph-deploy (dumpling 0.67.2).
At first, it worked perfectly. But after I rebooted one of the OSDs, ceph-mon
launched on port 6800 instead of 6789.
This is the result of 'ceph -s':
---
cluster c59d13fd-c4c9-4cd0-b2ed-b654428b3171
health
Looks like maybe your network is faulty. The crc error means the OSD
received a message with a checksum that didn't match. The dropped message
indicates that the connection (in this case to a client) has failed
(probably because of the bad crc?) and so it's dropping the outgoing message.
This is