Re: [ceph-users] MDS aborted after recovery and active, FAILED assert (r >=0)

2015-01-16 Thread Yan, Zheng
On Sat, Jan 17, 2015 at 11:47 AM, Lindsay Mathieson wrote: > On Fri, 16 Jan 2015 08:48:38 AM Wido den Hollander wrote: >> In Ceph world 0.72.2 is ancient and pretty old. If you want to play with >> CephFS I recommend you upgrade to 0.90 and also use at least kernel 3.18 > > Does the kernel version

Re: [ceph-users] MDS aborted after recovery and active, FAILED assert (r >=0)

2015-01-16 Thread Lindsay Mathieson
On Fri, 16 Jan 2015 08:48:38 AM Wido den Hollander wrote: > In Ceph world 0.72.2 is ancient and pretty old. If you want to play with > CephFS I recommend you upgrade to 0.90 and also use at least kernel 3.18 Does the kernel version matter if you are using ceph-fuse?

[ceph-users] v0.80.8 Firefly released

2015-01-16 Thread Sage Weil
This is a long-awaited bugfix release for firefly. It has several important (but relatively rare) OSD peering fixes, performance issues when snapshots are trimmed, several RGW fixes, a paxos corner case fix, and some packaging updates. We recommend that all users of v0.80.x firefly upgrade whe

Re: [ceph-users] Better way to use osd's of different size

2015-01-16 Thread John Spray
On Wed, Jan 14, 2015 at 3:36 PM, Межов Игорь Александрович wrote: > What is the more right way to do it: > > - replace 12x1tb drives with 12x2tb drives, so we will have 2 nodes full of > 2tb drives and > > the other nodes remain in the 12x1tb config > > - or replace 1tb with 2tb drives in a more uniform way,

Re: [ceph-users] two mount points, two diffrent data

2015-01-16 Thread Michael Kuriger
You’re using a file system on 2 hosts that is not cluster aware. Metadata written on hosta is not sent to hostb in this case. You may be interested in looking at cephfs for this use case. Michael Kuriger mk7...@yp.com 818-649-7235 MikeKuriger (IM) From: Rafał Michalak rafa...@gmail.co

Re: [ceph-users] MDS aborted after recovery and active, FAILED assert (r >=0)

2015-01-16 Thread John Spray
It has just been pointed out to me that you can also work around this issue on your existing system by increasing the osd_max_write_size setting on your OSDs (default 90MB) to something higher, but still smaller than your osd journal size. That might get you on a path to having an accessible filesy
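
A sketch of what that workaround might look like (the 180 value is an arbitrary example; it is in MB and must stay below your osd journal size):

  # ceph.conf, [osd] section
  osd max write size = 180

  # or inject at runtime without restarting the OSDs:
  ceph tell osd.* injectargs '--osd-max-write-size 180'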

Re: [ceph-users] How to tell a VM to write more local ceph nodes than to the network.

2015-01-16 Thread Roland Giesler
On 14 January 2015 at 12:08, JM wrote: > Hi Roland, > > You should tune your Ceph Crushmap with a custom rule in order to do that > (write first on s3 and then to others). This custom rule will be applied > then to your proxmox pool. > (what you want to do is only interesting if you run VM from h

Re: [ceph-users] ceph-deploy dependency errors on fc20 with firefly

2015-01-16 Thread Noah Watkins
Thanks! I'll give this a shot. On Thu, Jan 8, 2015 at 8:51 AM, Travis Rhoden wrote: > Hi Noah, > > The root cause has been found. Please see > http://tracker.ceph.com/issues/10476 for details. > > In short, it's an issue between RPM obsoletes and yum priorities > plugin. Final solution is pendi

[ceph-users] Total number PGs using multiple pools

2015-01-16 Thread Italo Santos
Hello, In the placement groups documentation (http://ceph.com/docs/giant/rados/operations/placement-groups/) we have the message below: “When using multiple data pools for storing objects, you need to ensure that you balance the number of placement groups per pool with the number of placement
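
A worked example of the balance that page describes, using its usual rule of thumb (the OSD and pool counts here are made up):

  total PGs ~= (num OSDs * 100) / replica size
            = (10 * 100) / 3 ~= 333  -> round up to 512 (next power of two)
  # split across, say, 4 equally used pools: 512 / 4 = 128 PGs per pool
  ceph osd pool create mypool 128 128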

Re: [ceph-users] How to tell a VM to write more local ceph nodes than to the network.

2015-01-16 Thread Roland Giesler
On 16 January 2015 at 17:15, Gregory Farnum wrote: > > I have set up 4 machines in a cluster. When I created the Windows 2008 > > server VM on S1 (I corrected my first email: I have three Sunfire X > series > > servers, S1, S2, S3) since S1 has 36GB of RAM and 8 x 300GB SAS drives, it > > was run

Re: [ceph-users] got "XmlParseFailure" when libs3 client accessing radosgw object gateway

2015-01-16 Thread Liu, Xuezhao
Thanks for the hints. My original configuration was with "rgw print continue = false", but it did not work. Just now I tested changing it to "true" and restarted the radosgw and apache2 services; strangely, everything works now. Best, Xuezhao > > This sounds like you're having trouble wi

[ceph-users] problem for remove files in cephfs

2015-01-16 Thread Daniel Takatori Ohara
Hi, I have a problem removing one file in cephfs. With the command ls, all the attributes show as ???.

  ls: cannot access refseq/source_step2: No such file or directory
  total 0
  drwxrwxr-x 1 dtohara BioInfoHSL Users    0 Jan 15 15:01 .
  drwxrwxr-x 1 dtohara BioInfoHSL Users 3.8G Jan 15

Re: [ceph-users] How to tell a VM to write more local ceph nodes than to the network.

2015-01-16 Thread Gregory Farnum
On Fri, Jan 16, 2015 at 2:52 AM, Roland Giesler wrote: > On 14 January 2015 at 21:46, Gregory Farnum wrote: >> >> On Tue, Jan 13, 2015 at 1:03 PM, Roland Giesler >> wrote: >> > I have a 4 node ceph cluster, but the disks are not equally distributed >> > across all machines (they are substantiall

Re: [ceph-users] two mount points, two diffrent data

2015-01-16 Thread Lindsay Mathieson
On Wed, 14 Jan 2015 02:20:21 PM Rafał Michalak wrote: > Why data not replicating on mounting fs ? > I try with filesystems ext4 and xfs > The data is visible only when unmounted and mounted again Because you are not using a cluster aware filesystem - the respective mounts don't know when changes

[ceph-users] Re: Better way to use osd's of different size

2015-01-16 Thread Межов Игорь Александрович
Thanks! Of course, I know about osd weights and the ability to adjust them to make the distribution more-or-less uniform. We use ceph-deploy to bring up osds and have already noticed that the weights of different sized osds are chosen proportionally to their sizes. But the question is slightly about diffe

Re: [ceph-users] Better way to use osd's of different size

2015-01-16 Thread Udo Lembke
Hi Megov, you should weight the OSDs so the weight represents the size (like a weight of 3.68 for a 4TB HDD). ceph-deploy does this automatically. Nevertheless, even with the correct weight the disks are not filled in equal distribution. For that purpose you can use reweight for single OSDs, or automatically wi
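
For example (the osd ids and values here are hypothetical):

  # make the crush weight reflect raw capacity, e.g. ~3.64 for a 4TB drive
  ceph osd crush reweight osd.12 3.64
  # temporary 0-1 correction for a single overfull OSD
  ceph osd reweight 12 0.95
  # or let ceph lower the weight of overfull OSDs automatically
  ceph osd reweight-by-utilization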

[ceph-users] v0.91 released

2015-01-16 Thread Sage Weil
We are quickly approaching the Hammer feature freeze but have a few more dev releases to go before we get there. The headline items are subtree-based quota support in CephFS (ceph-fuse/libcephfs client support only for now), a rewrite of the watch/notify librados API used by RBD and RGW, OSDMa

Re: [ceph-users] radosgw-agent failed to parse

2015-01-16 Thread ghislain.chevalier
Hi all, Context: Ubuntu 14.04 LTS, firefly 0.80.7. I recently encountered the same issue as described below. Maybe I missed something between July and January… I found that the http request wasn't correctly built by /usr/lib/python2.7/dist-packages/radosgw_agent/client.py I did the changes b

Re: [ceph-users] MDS aborted after recovery and active, FAILED assert (r >=0)

2015-01-16 Thread Mohd Bazli Ab Karim
Agree. I was about to upgrade to 0.90, but have postponed it due to this error. Any chance for me to recover it first before upgrading? Thanks Wido. Regards, Bazli -Original Message- From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Wido den

Re: [ceph-users] MDS aborted after recovery and active, FAILED assert (r >=0)

2015-01-16 Thread John Spray
Hmm, upgrading should help here, as the problematic data structure (anchortable) no longer exists in the latest version. I haven't checked, but hopefully we don't try to write it during upgrades. The bug you're hitting is more or less the same as a similar one we have with the sessiontable in the

Re: [ceph-users] How to tell a VM to write more local ceph nodes than to the network.

2015-01-16 Thread Roland Giesler
On 14 January 2015 at 21:46, Gregory Farnum wrote: > On Tue, Jan 13, 2015 at 1:03 PM, Roland Giesler > wrote: > > I have a 4 node ceph cluster, but the disks are not equally distributed > > across all machines (they are substantially different from each other) > > > > One machine has 12 x 1TB SA

Re: [ceph-users] two mount points, two diffrent data

2015-01-16 Thread Robert Sander
On 14.01.2015 14:20, Rafał Michalak wrote: > > #node1 > mount /dev/rbd/rbd/test /mnt > > #node2 > mount /dev/rbd/rbd/test /mnt If you want to mount a filesystem on one block device onto multiple clients, the filesystem has to be clustered, e.g. OCFS2. A "normal" local filesystem like ext4 or XF
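
For comparison, a CephFS mount is shared by design, so it can safely be mounted on both nodes; a minimal sketch (the monitor address and secret file path are assumptions):

  mount -t ceph 192.168.0.1:6789:/ /mnt -o name=admin,secretfile=/etc/ceph/admin.secret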

[ceph-users] MDS aborted after recovery and active, FAILED assert (r >=0)

2015-01-16 Thread Mohd Bazli Ab Karim
Dear Ceph-Users, Ceph-Devel, Apologies if you get a double post of this email. I am running a ceph cluster version 0.72.2 and one MDS (in fact, it's 3; 2 down and only 1 up) at the moment. Plus I have one CephFS client mounted to it. Now, the MDS always gets aborted after recovery and active fo

Re: [ceph-users] got "XmlParseFailure" when libs3 client accessing radosgw object gateway

2015-01-16 Thread Yehuda Sadeh
On Wed, Jan 14, 2015 at 7:27 PM, Liu, Xuezhao wrote: > Thanks for the replying. > > After disable the default site (a2dissite 000-default), I can use libs3's > commander s3 to create/list bucket, get object also works. > > But put object failed: > > root@xuezhaoUbuntu74:~# s3 -u put bucket11/seqd

[ceph-users] CEPH Expansion

2015-01-16 Thread Georgios Dimitrakakis
Hi all! I would like to expand our CEPH Cluster and add a second OSD node. In this node I will have ten 4TB disks dedicated to CEPH. What is the proper way of putting them in the already available CEPH node? I guess that the first thing to do is to prepare them with ceph-deploy and mark the
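
The usual ceph-deploy sequence for each new disk looks roughly like this (hostname and device names are hypothetical, and you may want to throttle backfill while data rebalances):

  ceph-deploy disk zap node2:sdb
  ceph-deploy osd prepare node2:sdb
  ceph-deploy osd activate node2:sdb1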

Re: [ceph-users] got "XmlParseFailure" when libs3 client accessing radosgw object gateway

2015-01-16 Thread Liu, Xuezhao
Thanks for the reply. After disabling the default site (a2dissite 000-default), I can use libs3's commander s3 to create/list buckets, and get object also works. But put object failed: root@xuezhaoUbuntu74:~# s3 -u put bucket11/seqdata filename=seqdata it hangs forever, and on the object gateway

[ceph-users] rbd cp vs rbd snap flatten

2015-01-16 Thread Fabian Zimmermann
Hi, if I want to clone a running vm-hdd, would it be enough to "cp", or do I have to "snap, protect, flatten, unprotect, rm" the snapshot to get an as-consistent-as-possible clone? Or: does cp use an internal snapshot while copying the blocks? Thanks, Fabian
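
For reference, the snapshot-based route would look like this (pool and image names are made up). As far as I know, rbd cp copies the image data as-is without taking a snapshot first, so copying a running VM's disk that way is not crash-consistent; the snapshot pins a single point in time:

  rbd snap create rbd/vmdisk@base
  rbd snap protect rbd/vmdisk@base
  rbd clone rbd/vmdisk@base rbd/vmdisk-copy
  rbd flatten rbd/vmdisk-copy
  rbd snap unprotect rbd/vmdisk@base
  rbd snap rm rbd/vmdisk@base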

Re: [ceph-users] Problem with Rados gateway

2015-01-16 Thread Yehuda Sadeh
2015-01-15 1:08 GMT-08:00 Walter Valenti : > - Original message - >> From: Yehuda Sadeh >> To: Walter Valenti >> Cc: "ceph-users@lists.ceph.com" >> Sent: Tuesday 13 January 2015 1:13 >> Subject: Re: [ceph-users] Problem with Rados gateway >> >> Try setting 'rgw print conti

Re: [ceph-users] MDS aborted after recovery and active, FAILED assert (r >=0)

2015-01-16 Thread Wido den Hollander
On 01/16/2015 08:37 AM, Mohd Bazli Ab Karim wrote: > Dear Ceph-Users, Ceph-Devel, > > Apologize me if you get double post of this email. > > I am running a ceph cluster version 0.72.2 and one MDS (in fact, it's 3, 2 > down and only 1 up) at the moment. > Plus I have one CephFS client mounted to

Re: [ceph-users] Problem with Rados gateway

2015-01-16 Thread Walter Valenti
- Original message - > From: Yehuda Sadeh > To: Walter Valenti > Cc: "ceph-users@lists.ceph.com" > Sent: Tuesday 13 January 2015 1:13 > Subject: Re: [ceph-users] Problem with Rados gateway > > Try setting 'rgw print continue = false' in your ceph.conf. > > Yehuda Thanks, but
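
For reference, a sketch of where that setting lives (the section name is an assumption; yours may differ):

  [client.radosgw.gateway]
  rgw print continue = false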

[ceph-users] Is it possible to compile and use ceph with Raspberry Pi single-board computers?

2015-01-16 Thread Prof. Dr. Christian Baun
Hi all, I am trying to compile and use Ceph on a cluster of Raspberry Pi single-board computers with Raspbian as the operating system. I tried it this way:

  wget http://ceph.com/download/ceph-0.91.tar.bz2
  tar -xvjf ceph-0.91.tar.bz2
  cd ceph-0.91
  ./autogen.sh
  ./configure --without-tcmalloc
  make -j2

But res

Re: [ceph-users] Part 2: ssd osd fails often with "FAILED assert(soid < scrubber.start || soid >= scrubber.end)"

2015-01-16 Thread Loic Dachary
On 14/01/2015 18:33, Udo Lembke wrote: > Hi Loic, > thanks for the answer. I hope it's not like in > http://tracker.ceph.com/issues/8747 where the issue happens with an > patched version if understand right. http://tracker.ceph.com/issues/8747 is a duplicate of http://tracker.ceph.com/issues/80

Re: [ceph-users] How to tell a VM to write more local ceph nodes than to the network.

2015-01-16 Thread Roland Giesler
So you can see my server names and their osd's too...

  # id    weight  type name       up/down reweight
  -1      11.13   root default
  -2      8.14            host h1
  1       0.9                     osd.1   up      1
  3       0.9                     osd.3   up      1
  4       0.9                     osd.4   up      1
  5       0.68

Re: [ceph-users] How to tell a VM to write more local ceph nodes than to the network.

2015-01-16 Thread JM
  # Get the compiled crushmap
  root@server01:~# ceph osd getcrushmap -o /tmp/myfirstcrushmap
  # Decompile the compiled crushmap above
  root@server01:~# crushtool -d /tmp/myfirstcrushmap -o /tmp/myfirstcrushmap.txt

then give us your /tmp/myfirstcrushmap.txt file.. :) 2015-01-14 17:36 GMT+01:00 Roland
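
The counterpart commands to push an edited map back, for completeness (standard crushtool/ceph usage; file names follow the example above):

  crushtool -c /tmp/myfirstcrushmap.txt -o /tmp/mynewcrushmap
  ceph osd setcrushmap -i /tmp/mynewcrushmap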

Re: [ceph-users] Part 2: ssd osd fails often with "FAILED assert(soid < scrubber.start || soid >= scrubber.end)"

2015-01-16 Thread Udo Lembke
Hi Loic, thanks for the answer. I hope it's not like http://tracker.ceph.com/issues/8747, where the issue happens with a patched version, if I understand right. So I must only wait a few months ;-) for a backport... Udo On 14.01.2015 09:40, Loic Dachary wrote: > Hi, > > This is http://tracker.c

Re: [ceph-users] Ceph, LIO, VMWARE anyone?

2015-01-16 Thread Jake Young
Yes, it's active/active and I found that VMWare can switch from path to path with no issues or service impact. I posted some config files here: github.com/jak3kaj/misc One set is from my LIO nodes, both the primary and secondary configs so you can see what I needed to make unique. The other set

[ceph-users] cold-storage tuning Ceph

2015-01-16 Thread Martin Millnert
Hello list, I'm currently trying to understand what I can do with Ceph to optimize it for a cold-storage (write-once, read-very-rarely) scenario, trying to compare its cost against LTO-6 tape. There is a single main objective: minimal cost/GB/month of operations (including power, DC). To achi
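
One knob often discussed for exactly this objective is erasure coding, which trades CPU for raw-capacity efficiency compared to 3x replication; a sketch with made-up profile values and pool names:

  ceph osd erasure-code-profile set coldstore k=8 m=3
  ceph osd pool create coldpool 512 512 erasure coldstore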

[ceph-users] subscribe

2015-01-16 Thread wireless
subscribe

Re: [ceph-users] cephfs modification time

2015-01-16 Thread 严正
I tracked down the bug. Please try the attached patch. Regards Yan, Zheng [attachment: patch] > On 13 January 2015 at 07:40, Gregory Farnum wrote: > > Zheng, this looks like a kernel client issue to me, or else something > funny is going on with the cap flushing and the timestamps (note how >

Re: [ceph-users] Spark/Mesos on top of Ceph/Btrfs

2015-01-16 Thread wireless
On 01/14/2015 07:37 AM, John Spray wrote: On Tue, Jan 13, 2015 at 1:25 PM, James wrote: I was wondering if anyone has Mesos running on top of Ceph? I want to test/use Ceph in lieu of HDFS. You might be interested in http://ceph.com/docs/master/cephfs/hadoop/ It allows you to expose CephFS to

Re: [ceph-users] How to tell a VM to write more local ceph nodes than to the network.

2015-01-16 Thread JM
Hi Roland, You should tune your Ceph Crushmap with a custom rule in order to do that (write first on s3 and then to others). This custom rule will be applied then to your proxmox pool. (what you want to do is only interesting if you run VM from host s3) Can you give us your crushmap ? 2015-01-

[ceph-users] help,ceph stuck in pg creating and never end

2015-01-16 Thread wrong
Hi all, My CEPH cluster is stuck when re-creating PGs; the following is the information. Version 0.9, OS centos 7.0. How can I do further analysis? Thanks.

  [ceph@dev140 ~]$ ceph -s
      cluster 84d08382-9f31-4d1b-9870-75e6975f69fe
       health HEALTH_WARN 9 pgs stuck inactive
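
Commands commonly used to dig further into stuck PGs (the pgid in the last line is a placeholder; take real ids from the dump output):

  ceph health detail
  ceph pg dump_stuck inactive
  ceph pg 0.1f query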

[ceph-users] Adding monitors to osd nodes failed

2015-01-16 Thread Hoc Phan
Hi, I am adding two monitors on osd0 (node2) and osd1 (node3) in the "ADD MONITORS" step of http://ceph.com/docs/master/start/quick-ceph-deploy/ but it failed to create the 2 monitors: http://pastebin.com/aSPwKs0H Can you help figure out why? ___ ceph-users mailing list ce