Hi.
There is a solution for Ceph in XenServer. With the help of my engineer
Mark, we developed a simple patch which allows you to search for and attach
an RBD image on XenServer. We create LVHD over the RBD (not RBD-per-VDI
mapping yet), so it is far from ideal, but it's a good start. The process
of
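For reference, the rough shape of mapping an RBD image on a dom0 and layering an
LVM-based SR on top looks like this (the pool, image and SR names are placeholders,
and the exact xe arguments can differ between XenServer releases):
$ rbd create rbd/xen-sr --size 1048576     # size is in MB, so ~1 TB image
$ rbd map rbd/xen-sr                       # exposes e.g. /dev/rbd0
$ xe sr-create name-label="ceph-rbd" type=lvm content-type=user \
      device-config:device=/dev/rbd0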
Thanks to all for responses. Great thread with a lot of info.
I will go with the 3 partitions on the Kingston SSD for 3 OSDs on each node.
Thanks
Jiri
On 30/09/2015 00:38, Lionel Bouton wrote:
Hi,
On 29/09/2015 13:32, Jiri Kanicky wrote:
Hi Lionel.
Thank you for your reply. In this case I
you
Jiri
On 29/09/2015 21:10, Lionel Bouton wrote:
On 29/09/2015 07:29, Jiri Kanicky wrote:
Hi,
Is it possible to create journal in directory as explained here:
http://wiki.skytech.dk/index.php/Ceph_-_howto,_rbd,_lvm,_cluster#Add.2Fmove_journal_in_running_cluster
Yes, the general idea (stop
Jiri
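For the archives, the general sequence for relocating a journal on a running
cluster goes roughly like this (osd.0 and the paths are placeholders; adjust
for your own layout):
$ ceph osd set noout
$ sudo service ceph stop osd.0
$ sudo ceph-osd -i 0 --flush-journal
$ sudo ln -sf /srv/ceph/journal/osd.0/journal /var/lib/ceph/osd/ceph-0/journal
$ sudo ceph-osd -i 0 --mkjournal
$ sudo service ceph start osd.0
$ ceph osd unset noout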
On 21/08/2015 10:12, Steven McDonald wrote:
Hi Jiri,
On Thu, 20 Aug 2015 11:55:55 +1000
Jiri Kanicky wrote:
We are experimenting with an idea to run OSD nodes in XenServer VMs.
We believe this could provide better flexibility, backups for the
nodes etc.
Could you expand on this? As
Hi all,
We are experimenting with an idea to run OSD nodes in XenServer VMs. We
believe this could provide better flexibility, backups for the nodes etc.
For example:
XenServer with 4 HDDs dedicated to Ceph.
We would introduce 1 VM (OSD node) with raw/direct access to 4 HDDs or 2
VMs (2 OSD
Hi,
I can answer this myself. It was a kernel issue. After upgrading to the latest Debian
Jessie kernel (3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt11-1+deb8u2 (2015-07-17)
x86_64 GNU/Linux), everything started to work as normal.
Thanks :)
On 6/08/2015 22:38, Jiri Kanicky wrote:
Hi,
I am trying to mount my CephFS
Hi,
I am trying to mount my CephFS and getting the following message. It was
all working previously, but after a power failure I am not able to mount
it anymore (Debian Jessie).
cephadmin@maverick:/etc/ceph$ sudo mount -t ceph
ceph1.allsupp.corp,ceph2.allsupp.corp:6789:/ /mnt/cephdata/ -o
nam
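For comparison, a complete mount.ceph invocation with cephx authentication
usually looks something like the following (the secret file path is only an
example):
$ sudo mount -t ceph ceph1.allsupp.corp,ceph2.allsupp.corp:6789:/ /mnt/cephdata \
      -o name=admin,secretfile=/etc/ceph/admin.secret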
Hi,
I just added a new monitor (MON). "$ ceph status" shows the monitor in the
quorum, but the new monitor is not shown in /etc/ceph/ceph.conf. I am
wondering what role /etc/ceph/ceph.conf plays. Do I need to manually
edit the file on each node and add the monitors?
In addition, there are
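For reference: the monitors track membership in their own monmap, and
/etc/ceph/ceph.conf is only read by clients and daemons at start-up to find an
initial monitor, so it is not updated automatically. A minimal hand-maintained
[global] section listing the monitors could look like this (names and addresses
are placeholders):
[global]
    fsid = <cluster-fsid>
    mon initial members = ceph1, ceph2, ceph3
    mon host = 192.168.30.21, 192.168.30.22, 192.168.30.23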
Hi,
BTW, is there a way to achieve redundancy across multiple OSDs in one
box by changing the CRUSH map?
Thank you
Jiri
On 20/01/2015 13:37, Jiri Kanicky wrote:
Hi,
Thanks for the reply. That clarifies it. I thought that redundancy
could be achieved with multiple OSDs (like multiple disks
o ceph osd pool get size
assuming default setup:
ceph osd pool get rbd size
returns: 3
On 20 January 2015 at 10:51, Jiri Kanicky <j...@ganomi.com> wrote:
Hi,
I just would like to clarify whether I should expect degraded PGs with
11 OSDs in one node. I am not sure if a s
20 January 2015 at 14:10, Jiri Kanicky <j...@ganomi.com> wrote:
Hi,
BTW, is there a way to achieve redundancy across multiple OSDs
in one box by changing the CRUSH map?
I asked that same question myself a few weeks back :)
The answer was yes - but fiddly and why wou
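For the record, the fiddly part is a CRUSH rule that selects leaves of type osd
instead of host; a minimal sketch (rule name and ruleset number are placeholders)
is:
rule replicated_osd {
    ruleset 1
    type replicated
    min_size 1
    max_size 10
    step take default
    step chooseleaf firstn 0 type osd
    step emit
}
You would decompile the map with crushtool, add the rule, recompile, inject it
with "ceph osd setcrushmap -i <file>" and point the pool at it with
"ceph osd pool set <pool> crush_ruleset 1".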
Hi,
I just would like to clarify whether I should expect degraded PGs with 11 OSDs
in one node. I am not sure if a setup with 3 MON nodes and 1 OSD node (11 disks)
allows me to have a healthy cluster.
$ sudo ceph osd pool create test 512
pool 'test' created
$ sudo ceph status
cluster 4e77327a-118d-45
Hi,
I have upgraded Firefly to Giant on Debian Wheezy and it went without
any problems.
Jiri
On 16/01/2015 06:49, Erik McCormick wrote:
Hello all,
I've got an existing Firefly cluster on Centos 7 which I deployed with
ceph-deploy. In the latest version of ceph-deploy, it refuses to
handl
Hi George,
List disks available:
# $ ceph-deploy disk list {node-name [node-name]...}
Add OSD using osd create:
# $ ceph-deploy osd create {node-name}:{disk}[:{path/to/journal}]
Or you can use the manual steps to prepare and activate disk described
at
http://ceph.com/docs/master/start/quick-c
that I did not help much.
-Jiri
On 9/01/2015 20:23, Nico Schottelius wrote:
Good morning Jiri,
sure, let me catch up on this:
- Kernel 3.16
- ceph: 0.80.7
- fs: xfs
- os: debian (backports) (1x)/ubuntu (2x)
Cheers,
Nico
Jiri Kanicky [Fri, Jan 09, 2015 at 10:44:33AM +1100]:
Hi Nico.
If you are
Hi Nico.
If you are experiencing such issues it would be good if you could provide more info
about your deployment: ceph version, kernel versions, OS, filesystem (btrfs/xfs).
Thx Jiri
- Reply message -
From: "Nico Schottelius"
To:
Subject: [ceph-users] Is ceph production ready? [was: Ceph PG
Hi Max,
Thanks for this info.
I am planning to use CephFS (ceph version 0.87) at home, because it's more
convenient than NFS over RBD. I don't have a large environment; about 20TB,
so hopefully it will hold.
I backup all important data just in case. :)
Thank you.
Jiri
On 29/12/2014 21:09, Thoma
Hi,
I have been experiencing issues with several PGs which remain in an
inconsistent state (I use BTRFS). "ceph pg repair" is not able to repair
them. The only way out is to delete the corresponding file, which is causing
the issue (see logs below), from the OSDs. This, however, means loss of data.
Hi.
Do you know how to tell whether the option "filestore btrfs snap = false"
is set?
Thx Jiri
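One way to check the value a running OSD is actually using is its admin socket,
e.g. (the osd id and socket path are examples):
$ sudo ceph daemon osd.0 config get filestore_btrfs_snap
$ sudo ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show | grep btrfs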
On 5/01/2015 02:25, Jiri Kanicky wrote:
Hi.
I have been experiencing the same issues on both nodes over the past 2
days (never on both nodes at the same time). It seems the issue occurs
after
5/01/2015 01:33, Dyweni - Ceph-Users wrote:
On 2015-01-04 08:21, Jiri Kanicky wrote:
More googling took me to the following post:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2014-June/040279.html
Linux 3.14.1 is affected by serious Btrfs regression(s) that were fixed in later
Hi.
Correction. My swap is 3GB on an SSD disk. I don't use the nodes for client
stuff.
Thx Jiri
On 5/01/2015 01:21, Jiri Kanicky wrote:
Hi,
Here is my memory output. I use HP Microservers with 2GB RAM. Swap is
500MB on an SSD disk.
cephadmin@ceph1:~$ free
total used
this node? Do you have swap configured on this
node?
On 2015-01-04 07:12, Jiri Kanicky wrote:
Hi,
My OSDs with btrfs are down on one node. I found the cluster in this
state:
cephadmin@ceph1:~$ ceph osd tree
# id    weight  type name       up/down reweight
-1      10.88   root
Hi,
My OSDs with btrfs are down on one node. I found the cluster in this state:
cephadmin@ceph1:~$ ceph osd tree
# id    weight  type name       up/down reweight
-1      10.88   root default
-2      5.44            host ceph1
0       2.72                    osd.0   down    0
1       2.72
the state of the cluster.
Thx Jiri
On 1/01/2015 15:46, Jiri Kanicky wrote:
Hi,
I have:
- 2 monitors, one on each node
- 4 OSDs, two on each node
- 2 MDS, one on each node
Yes, all pools are set with size=2 and min_size=1
cephadmin@ceph1:~$ ceph osd dump
epoch 88
fsid bce2ff4d-e03b
ceph status" or "ceph health" should report at
least something in such state. Its quite weird that everything stops...
Thank you
Jiri
On 1/01/2015 15:51, Lindsay Mathieson wrote:
On Thu, 1 Jan 2015 03:46:33 PM Jiri Kanicky wrote:
Hi,
I have:
- 2 monitors, one on each node
- 4
up in weight 1 up_from 81 up_thru 86 down_at 72
last_clean_interval [20,71) 192.168.30.22:6804/3414
10.1.1.22:6802/3414 10.1.1.22:6803/3414
192.168.30.22:6805/3414 exists,up 25e62059-6392-4a69-99c9-214ae335004
Thx Jiri
On 1/01/2015 15:21, Lindsay Mathieson wrote:
On
Hi,
Is it possible to achieve redundancy with 2 nodes only?
cephadmin@ceph1:~$ ceph osd tree
# id    weight  type name       up/down reweight
-1      10.88   root default
-2      5.44            host ceph1
0       2.72                    osd.0   up      1
1       2.72                    osd.1
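For what it's worth, with the default CRUSH rule (chooseleaf type host) two nodes
can hold two replicas as long as the pools are sized accordingly, e.g. for an
existing pool (the pool name is an example):
$ ceph osd pool set rbd size 2
$ ceph osd pool set rbd min_size 1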
Hi.
I have got the same message in Debian Jessie, while CephFS mounts
and works fine.
Jiri.
On 18/12/2014 01:00, John Spray wrote:
Hmm, from a quick google it appears you are not the only one who has
seen this symptom with mount.ceph. Our mtab code appears to have
diverged a bit from th
/ 11158 GB avail
256 active+clean
Thanks for the help!
Jiri
On 28/12/2014 16:59, Christian Balzer wrote:
Hello Jiri,
On Sun, 28 Dec 2014 16:14:04 +1100 Jiri Kanicky wrote:
Hi Christian.
Thank you for your comments again. Very helpful.
I will try to fix the current poo
n ready at this
time).
Regards,
Christian
You are right with the round up. I forgot about that.
Thanks for your help. Much appreciated.
Jiri
- Reply message -
From: "Christian Balzer"
To:
Cc: "Jiri Kanicky"
Subject: [ceph-users] HEALTH_WARN 29 pgs degraded; 2
Hi,
I just built my Ceph cluster but am having problems with the health of the
cluster.
Here are few details:
- I followed the ceph documentation.
- I used the btrfs filesystem for all OSDs
- I did not set "osd pool default size = 2" as I thought that if I have
2 nodes + 4 OSDs, I can leave the default
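With only two hosts, the default replication size of 3 cannot be satisfied by the
default host-level rule, which is what leaves the PGs degraded. A ceph.conf snippet
along these lines, set before the pools are created, avoids it (a sketch, values
to taste):
[global]
    osd pool default size = 2
    osd pool default min size = 1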
Hi.
Do I have to install sudo in Debian Wheezy to deploy Ceph successfully? I
don't normally use sudo.
Thank you
Jiri