[ceph-users] goofy results for df

2014-02-21 Thread Markus Goldberg
Hi, this is ceph 0.77, Ubuntu 13.04 (ceph server and ceph client). The df command gives goofy results: root@bd-a:/mnt/myceph/Backup/bs3/tapes# df -h . Dateisystem Größe Benutzt Verf. Verw% Eingehängt auf xxx.xxx.xxx.xxx:6789:/ 60T

[ceph-users] Lisbon Ceph Meetup

2014-02-21 Thread Joao Eduardo Luis
Dear all, In the spirit of the Ceph User Committee's objectives, a long overdue meetup for Lisbon has been created [1]. There's no date for the first get-together just yet, as it should be scheduled to maximize the number of participants. If any of the list members are in Lisbon,

Re: [ceph-users] goofy results for df

2014-02-21 Thread Yan, Zheng
I think the result reported by df is correct. It's likely you have lots of sparse files in CephFS. For sparse files, CephFS increases the used space by the full file size. See http://ceph.com/docs/next/dev/differences-from-posix/ Yan, Zheng On Fri, Feb 21, 2014 at 6:13 PM, Markus Goldberg
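A quick way to reproduce the effect described here: create a sparse file inside the mount and compare its apparent size with the space df charges for it. The mount point and file name below are only examples.

# Create a 1 GB sparse file inside the CephFS mount (path is an example)
dd if=/dev/zero of=/mnt/ceph/sparse.img bs=1 count=0 seek=1G

# Apparent size: ~1.0G, although no data was actually written
du -h --apparent-size /mnt/ceph/sparse.img
ls -lh /mnt/ceph/sparse.img

# df on CephFS counts the full 1G against used space
df -h /mnt/ceph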

[ceph-users] CephFS to provide distributed access read/write

2014-02-21 Thread Listas@Adminlinux
Hi ! I have failover clusters (IMAP service) with 2 members configured with Ubuntu + DRBD + ext4. My IMAP clusters work fine with ~ 50k email accounts. See the design here: http://adminlinux.com.br/my_imap_cluster_design.txt I would like to use a distributed filesystem architecture to
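For shared read/write access from both cluster members, the same CephFS tree could be mounted on each node with the kernel client. A minimal sketch; the monitor address, mount point and secret file are placeholders:

# Mount the same CephFS tree on both IMAP nodes
# (monitor address, mount point and secret file are placeholders)
mount -t ceph 192.0.2.1:6789:/ /var/mail -o name=cephfs,secretfile=/etc/ceph/cephfs.secret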

Re: [ceph-users] Ceph GET latency

2014-02-21 Thread GuangYang
Thanks Greg for the response, my comments are inline… Thanks, Guang On Feb 20, 2014, at 11:16 PM, Gregory Farnum g...@inktank.com wrote: On Tue, Feb 18, 2014 at 7:24 AM, Guang Yang yguan...@yahoo.com wrote: Hi ceph-users, We are using Ceph (radosgw) to store user-generated images, as GET latency

Re: [ceph-users] [Ceph] Failure in osd creation

2014-02-21 Thread eric mourgaya
Hi Ghislain, Try erasing all keyring files and then running ceph-deploy gatherkeys mon_host before trying to create your new OSD! :-) 2014-02-19 18:26 GMT+01:00 ghislain.cheval...@orange.com: Hi all, I'd like to report some strange behavior... Context : lab platform CEPH emperor
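Spelled out as commands, the suggested recovery might look like the following, run from the ceph-deploy working directory; the hostnames and disk are placeholders:

# Remove stale keyrings, re-fetch the cluster keys from a monitor,
# then retry the OSD creation (hostnames and disk are placeholders)
rm -f ceph.client.admin.keyring ceph.bootstrap-osd.keyring ceph.bootstrap-mds.keyring
ceph-deploy gatherkeys mon-host
ceph-deploy osd create osd-host:/dev/sdb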

Re: [ceph-users] CephFS and slow requests

2014-02-21 Thread Dan van der Ster
Hi Greg, Yes, this still happens after the updatedb fix. [root@xxx dan]# mount ... zzz:6789:/ on /mnt/ceph type ceph (name=cephfs,key=client.cephfs) [root@xxx dan]# pwd /mnt/ceph/dan [root@xxx dan]# dd if=/dev/zero of=yyy bs=4M count=2000 2000+0 records in 2000+0 records out 8388608000 bytes
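When the slow-request warnings appear, the affected OSDs and the operations stuck on them can be inspected directly; the OSD id and admin socket path below are examples:

# List current health issues, including slow requests and the OSDs reporting them
ceph health detail

# On the affected OSD host, dump the operations currently in flight
# (OSD id and socket path are examples)
ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok dump_ops_in_flight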

[ceph-users] How does Ceph deal with OSDs that have been away for a while?

2014-02-21 Thread Tim Bishop
I'm wondering how Ceph deals with OSDs that have been away for a while. Do they need to be completely rebuilt, or does Ceph know which objects are good and which need to go? I know Ceph handles the situation of an OSD going away well, rebalancing etc. to maintain the required redundancy levels.

Re: [ceph-users] goofy results for df

2014-02-21 Thread Markus Goldberg
Hi, no, it's certain that the backup files really are that big. The output of the du command is correct. The files were rsynced from another system, which is not CephFS. Markus On 21.02.2014 13:34, Yan, Zheng wrote: I think the result reported by df is correct. It's likely you have lots of sparse
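Whether the files actually arrived sparse is worth checking on the source system, where the local filesystem tracks real allocation; rsync only recreates holes in the destination when asked. The paths and file name below are placeholders:

# On the source system: compare apparent size with allocated size
# (paths are placeholders)
du -h --apparent-size /backup/tapes/tape001
du -h /backup/tapes/tape001

# rsync writes fully-allocated copies unless --sparse (-S) is given
rsync -aS /backup/tapes/ /mnt/myceph/Backup/bs3/tapes/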

Re: [ceph-users] How does Ceph deal with OSDs that have been away for a while?

2014-02-21 Thread Gregory Farnum
It depends on how long ago (in terms of data writes) it disappeared. Each PG has a log of the changes that have been made (by default I think it's 3000? Maybe just 1k), and if an OSD goes away and comes back while the logs still overlap it will just sync up the changed objects. Otherwise it has to
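The log length Greg mentions is governed by the osd_min_pg_log_entries and osd_max_pg_log_entries options, which can be read off a running OSD via its admin socket; the OSD id and socket path below are examples:

# Inspect the PG log limits on a running OSD
# (OSD id and socket path are examples)
ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show | grep pg_log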

Re: [ceph-users] goofy results for df

2014-02-21 Thread Gregory Farnum
I haven't done the math, but it's probably a result of how the df command interprets the output of the statfs syscall. We recently changed the f_frsize and f_bsize units we report to make things work more consistently across different systems; I don't know if that change was before or after the
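The raw numbers df works from can be printed directly, which makes it easy to check which unit interpretation is off; the mount point below is an example:

# Show the raw statfs values df multiplies together
# (mount point is an example)
stat -f /mnt/ceph

# Byte-exact sizes, bypassing human-readable rounding
df -B1 /mnt/ceph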

Re: [ceph-users] How does Ceph deal with OSDs that have been away for a while?

2014-02-21 Thread Tim Bishop
Thanks Greg. Can I just confirm: does it do a full backfill automatically in the case where the log no longer overlaps? I guess the key question is: do I have to worry about it, or will it always do the right thing? Tim. On Fri, Feb 21, 2014 at 11:57:09AM -0800, Gregory Farnum wrote: It

Re: [ceph-users] Swift APIs not authenticating Rados gateway !!!

2014-02-21 Thread Liu, Larry
Srinivasa, I suspect your problem is that your Fedora systems are missing some of the right library files. I just got S3 working on my Ubuntu raring setup by following exactly what is written at http://ceph.com/docs/master/install/install-ceph-gateway/ . Still a question for everyone else: for
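For the Swift side specifically, authentication can be tested end to end once a subuser with a Swift key exists. A sketch; the uid, gateway URL and secret are placeholders:

# On the gateway host: create a Swift subuser and key, then test auth
# (uid, gateway URL and secret are placeholders)
radosgw-admin subuser create --uid=testuser --subuser=testuser:swift --access=full
radosgw-admin key create --subuser=testuser:swift --key-type=swift --gen-secret
swift -A http://gateway.example.com/auth/1.0 -U testuser:swift -K 'the-generated-secret' list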

Re: [ceph-users] How does Ceph deal with OSDs that have been away for a while?

2014-02-21 Thread Gregory Farnum
You don't have to worry about it; the OSDs will always just do the right thing. :) -Greg Software Engineer #42 @ http://inktank.com | http://ceph.com On Fri, Feb 21, 2014 at 12:40 PM, Tim Bishop tim-li...@bishnet.net wrote: Thanks Greg. Can I just confirm, does it do a full backfill

Re: [ceph-users] CephFS and slow requests

2014-02-21 Thread Yan, Zheng
On Fri, Jan 31, 2014 at 9:52 PM, Arne Wiebalck arne.wieba...@cern.ch wrote: Hi, We observe that we can easily create slow requests with a simple dd on CephFS: -- [root@p05153026953834 dd]# dd if=/dev/zero of=xxx bs=4M count=1000 1000+0 records in 1000+0 records out 4194304000 bytes (4.2

Re: [ceph-users] CephFS and slow requests

2014-02-21 Thread Yan, Zheng
On Sat, Feb 22, 2014 at 12:04 AM, Dan van der Ster daniel.vanders...@cern.ch wrote: Hi Greg, Yes, this still happens after the updatedb fix. [root@xxx dan]# mount ... zzz:6789:/ on /mnt/ceph type ceph (name=cephfs,key=client.cephfs) [root@xxx dan]# pwd /mnt/ceph/dan [root@xxx dan]# dd