Re: [ceph-users] librados Java support for rados_lock_exclusive()

2016-08-24 Thread Wido den Hollander
Hi Dan, Not on my list currently. I think it's not that difficult, but I never got around to maintaining rados-java and keeping up with librados. You are more than welcome to send a Pull Request, though! https://github.com/ceph/rados-java/pulls Wido > On 24 August 2016 at 21:58, Dan
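
In the meantime, the advisory locks that rados_lock_exclusive() exposes can be exercised from the rados CLI; a minimal sketch, assuming the lock subcommands of the Jewel-era rados tool, with pool, object and lock names as placeholders:

    # take, inspect and list an advisory lock on an object
    rados -p testpool lock get myobject backup-lock --lock-cookie cookie1
    rados -p testpool lock info myobject backup-lock
    rados -p testpool lock list myobject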

Re: [ceph-users] CephFS Big Size File Problem

2016-08-24 Thread Lazuardi Nasution
Hi, My setup is default. How can I know whether there is a path restriction? If it is a path restriction, why does the operation succeed when the file size is small? Best regards, On Aug 25, 2016 08:12, "Yan, Zheng" wrote: > have you enabled path restriction on cephfs? > > On Thu, Aug 25, 2016 at

Re: [ceph-users] CephFS Big Size File Problem

2016-08-24 Thread Lazuardi Nasution
Hi Gregory, Since I have mounted it via /etc/fstab, it is of course the kernel client. Which log do you mean? I cannot find anything related in dmesg. Best regards, On Aug 25, 2016 00:46, "Gregory Farnum" wrote: > On Wed, Aug 24, 2016 at 10:25 AM, Lazuardi Nasution >

Re: [ceph-users] CephFS Big Size File Problem

2016-08-24 Thread Yan, Zheng
Have you enabled path restriction on CephFS? On Thu, Aug 25, 2016 at 1:25 AM, Lazuardi Nasution wrote: > Hi, > > I have a problem with CephFS when writing big files. I have found that my > OpenStack Nova backup was not working after I changed the rbd based mount of >
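
For reference, path restriction is expressed through the MDS caps on the client key; a minimal sketch of what a restricted key looks like, assuming example pool and path names:

    # key that may read the whole tree but only write below /backups
    ceph auth get-or-create client.nova-backup \
        mon 'allow r' \
        mds 'allow r, allow rw path=/backups' \
        osd 'allow rw pool=cephfs_data'
    # an unrestricted default key simply carries: mds 'allow rw'

Checking the existing key with ceph auth get client.<name> shows whether any path= clause is present.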

Re: [ceph-users] ceph-fuse "Transport endpoint is not connected" on Jewel 10.2.2

2016-08-24 Thread Goncalo Borges
Hi Dennis... We use ceph-fuse 10.2.2 and we saw two main issues with it immediately after upgrading from Infernalis to Jewel. In our case, we run ceph-fuse on a heavily used Linux cluster, and our users complained about the mount points becoming unavailable some time after their

Re: [ceph-users] Cephfs quota implement

2016-08-24 Thread Gregory Farnum
On Wed, Aug 3, 2016 at 4:37 AM, Daleep Singh Bais wrote: > Dear all, > > Further to my CephFS testing, I am trying to put a quota on the mount I have > made on the client end. I am getting an error message when querying it. > > ceph-fuse fuse.ceph-fuse 2.8T 5.5G 2.8T
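
For reference, CephFS quotas are set and read as extended attributes on a directory inside the mount; a minimal sketch with an example path and size, noting that in this era quotas are only enforced by ceph-fuse/libcephfs clients, not by the kernel client:

    setfattr -n ceph.quota.max_bytes -v 107374182400 /mnt/cephfs/project   # 100 GiB
    setfattr -n ceph.quota.max_files -v 100000 /mnt/cephfs/project
    getfattr -n ceph.quota.max_bytes /mnt/cephfs/project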

Re: [ceph-users] ceph rbd and pool quotas

2016-08-24 Thread Ilya Dryomov
On Wed, Aug 24, 2016 at 11:27 PM, Thomas wrote: > Hi Ilya, > > Thanks for the speedy reply - unfortunately increasing the quota doesn't > help, the process stays stuck forever. Or do you mean that with kernel 4.7 > this would work after upping the quota? Correct. Thanks,

Re: [ceph-users] ceph rbd and pool quotas

2016-08-24 Thread Thomas
Hi Ilya, Thanks for the speedy reply - unfortunately increasing the quota doesn't help, the process stays stuck forever. Or do you mean that with kernel 4.7 this would work after upping the quota? Cheers, Thomas On 25/08/16 09:16, Ilya Dryomov wrote: On Wed, Aug 24, 2016 at 11:13 PM,

Re: [ceph-users] ceph rbd and pool quotas

2016-08-24 Thread Ilya Dryomov
On Wed, Aug 24, 2016 at 11:13 PM, Thomas wrote: > Hi guys, > > a quick question regarding ceph -> rbd -> quotas per pool. I'd like to set > a quota with max_bytes on a pool so that I can limit the amount a ceph > client can use, like so: > > ceph osd pool set-quota pool1

[ceph-users] ceph rbd and pool quotas

2016-08-24 Thread Thomas
Hi guys, a quick question regarding ceph -> rbd -> quotas per pool. I'd like to set a quota with max_bytes on a pool so that I can limit the amount a ceph client can use, like so: ceph osd pool set-quota pool1 max_bytes $(( 1024 * 1024 * 100 * 5 )) This is all working, e.g., if data gets
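
A minimal sketch of setting, inspecting and clearing such a quota (pool name and size taken from the example above):

    ceph osd pool set-quota pool1 max_bytes $(( 1024 * 1024 * 100 * 5 ))   # ~500 MB
    ceph osd pool get-quota pool1                # show the current quota and usage
    ceph osd pool set-quota pool1 max_bytes 0    # a value of 0 removes the quota again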

Re: [ceph-users] CephFS: Future Internetworking File System?

2016-08-24 Thread Gregory Farnum
On Fri, Aug 12, 2016 at 9:35 PM, Matthew Walster wrote: > I've been following Ceph (and in particular CephFS) for some time now, and > am glad to see it coming on in leaps and bounds! > > I've been running a small OpenAFS Cell for a while now, and it's really > starting to show

Re: [ceph-users] Ceph auth key generation algorithm documentation

2016-08-24 Thread Gregory Farnum
On Tue, Aug 23, 2016 at 7:24 AM, Heller, Chris wrote: > I’d like to generate keys for Ceph external to any system that has > ceph-authtool. > > Looking over the Ceph website and googling have turned up nothing. > > > > Is the ceph auth key generation algorithm
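
For what it's worth, a cephx secret appears to be the base64 encoding of a small header (key type, creation time, length) followed by 16 random bytes; a hedged sketch of generating one without ceph-authtool follows, but the struct layout is an assumption taken from reading the Ceph source, so verify it before relying on it:

    python -c 'import os, struct, time, base64; key = os.urandom(16); print(base64.b64encode(struct.pack("<hiih", 1, int(time.time()), 0, len(key)) + key).decode())'
    # header fields: type=1 (AES), created seconds, created nanoseconds, secret length (16)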

Re: [ceph-users] CephFS + cache tiering in Jewel

2016-08-24 Thread Gregory Farnum
On Tue, Aug 23, 2016 at 7:50 AM, Burkhard Linke wrote: > Hi, > > the Firefly and Hammer releases did not support transparent usage of cache > tiering in CephFS. The cache tier itself had to be specified as the data pool, > thus preventing on-the-fly
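
For context, the usual way to attach a cache tier is to overlay it on the backing data pool rather than pointing CephFS at the cache pool itself; a minimal sketch with example pool names (in Jewel the cache pool also needs hit_set and target size settings configured):

    ceph osd tier add cephfs_data cephfs_cache           # attach the cache pool to the backing pool
    ceph osd tier cache-mode cephfs_cache writeback
    ceph osd tier set-overlay cephfs_data cephfs_cache   # clients now hit the cache transparently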

Re: [ceph-users] how to debug pg inconsistent state - no ioerrors seen

2016-08-24 Thread Gregory Farnum
On Tue, Aug 9, 2016 at 11:15 PM, Goncalo Borges wrote: > Hi Greg... > > Thanks for replying. You seem omnipresent in all ceph/cephfs issues! > > Can you please confirm that, in Jewel, 'ceph pg repair' simply copies the pg > contents of the primary osd to the others?
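
For reference, Jewel also grew tooling to inspect what is inconsistent before deciding whether a repair is safe; a minimal sketch, with <pgid> as a placeholder:

    ceph health detail                                       # lists the inconsistent pg ids
    rados list-inconsistent-obj <pgid> --format=json-pretty  # shows which copies/shards mismatch
    ceph pg repair <pgid>                                    # only after confirming the primary copy is good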

[ceph-users] librados Java support for rados_lock_exclusive()

2016-08-24 Thread Dan Jakubiec
Hello, Is anyone planning to implement support for Rados locks in the Java API anytime soon? Thanks, -- Dan J

[ceph-users] Ceph Tech Talk - Tomorrow -- Unified CI: Transitioning Away from Gitbuilders

2016-08-24 Thread Gregory Meno
Just a reminder that the August Ceph Tech Talk is on for tomorrow @ 1p EDT. http://ceph.com/ceph-tech-talks/ Unified CI: Transitioning Away from Gitbuilders. While Ceph development has relied on "gitbuilders", the release process has always been

Re: [ceph-users] CephFS Big Size File Problem

2016-08-24 Thread Gregory Farnum
On Wed, Aug 24, 2016 at 10:25 AM, Lazuardi Nasution wrote: > Hi, > > I have a problem with CephFS when writing big files. I have found that my > OpenStack Nova backup was not working after I changed the rbd based mount of > /var/lib/nova/instances/snapshots to CephFS based

Re: [ceph-users] ceph-fuse "Transport endpoint is not connected" on Jewel 10.2.2

2016-08-24 Thread Gregory Farnum
On Wed, Aug 24, 2016 at 5:28 AM, Dennis Kramer (DT) wrote: > Hi all, > > Running ceph version 10.2.2 (45107e21c568dd033c2f0a3107dec8f0b0e58374) on > Ubuntu 16.04 LTS. > > Currently I have the weirdest thing: I have a bunch of Linux clients, mostly > Debian-based (Ubuntu/Mint).

[ceph-users] CephFS Big Size File Problem

2016-08-24 Thread Lazuardi Nasution
Hi, I have a problem with CephFS when writing big files. I have found that my OpenStack Nova backup was not working after I changed the rbd based mount of /var/lib/nova/instances/snapshots to CephFS based (mounted via /etc/fstab on all Nova compute nodes). I couldn't figure out the cause until I tried
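
For reference, a kernel CephFS mount in /etc/fstab typically looks something like the line below; this is only a sketch, with monitor address, client name, secret file and options as placeholders:

    10.0.0.1:6789:/  /var/lib/nova/instances/snapshots  ceph  name=nova,secretfile=/etc/ceph/nova.secret,noatime,_netdev  0  0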

Re: [ceph-users] phantom osd.0 in osd tree

2016-08-24 Thread Reed Dier
As someone else mentioned, ‘ceph osd rm 0’ took it out of the osd tree. Crush map attached. Odd to see deviceN entries in the devices block for the osd numbering holes in my cluster. I assume that is just a placeholder until it gets backfilled with an osd upon expansion. Thanks, Reed # begin crush map tunable
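
For reference, those placeholders appear in the decompiled crush map roughly like this (a sketch; the osd numbers are examples):

    # devices
    device 0 device0   # hole left by the removed osd.0
    device 1 osd.1
    device 2 osd.2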

Re: [ceph-users] Very slow S3 sync with big number of object.

2016-08-24 Thread jan hugo prins
Below is a part of a sync transfer. First it is fast, and then suddenly it gets really slow. The stats for the bucket are: { "bucket": "testbucket", "pool": "nl.demo.rgw.buckets.data", "index_pool": "nl.demo.rgw.buckets.index", "id":
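
The stats quoted above come from radosgw-admin; for anyone trying to reproduce, a minimal sketch (bucket name taken from the thread):

    radosgw-admin bucket stats --bucket=testbucket
    radosgw-admin bucket list --bucket=testbucket | head   # sanity-check how fast the listing responds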

Re: [ceph-users] Corrupt full osdmap on RBD Kernel Image mount (Jewel 10.2.2)

2016-08-24 Thread Ivan Grcic
Hi Ilya, there you go, and thank you for your time. BTW, should one get a crush map from an osdmap by doing something like this: osdmaptool --export-crush /tmp/crushmap /tmp/osdmap; crushtool -c crushmap -o crushmap.3518? Until now I was just creating/compiling crush maps; I haven't played with osd maps
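
For comparison, a sketch of a typical sequence for getting a readable crush map out of a specific osdmap epoch (the epoch number is only an example):

    ceph osd getmap 3518 -o /tmp/osdmap.3518                       # fetch one osdmap epoch
    osdmaptool /tmp/osdmap.3518 --export-crush /tmp/crushmap.3518  # extract the compiled crush map from it
    crushtool -d /tmp/crushmap.3518 -o /tmp/crushmap.3518.txt      # -d decompiles to text; -c compiles text back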

Re: [ceph-users] Corrupt full osdmap on RBD Kernel Image mount (Jewel 10.2.2)

2016-08-24 Thread Ilya Dryomov
On Wed, Aug 24, 2016 at 4:56 PM, Ivan Grcic wrote: > Dear Cephers, > > For some time now I have been running a small Ceph cluster made of 4 OSD + > 1 MON servers, and evaluating possible Ceph usages in our storage > infrastructure. Until a few weeks ago I was running the Hammer release, >

[ceph-users] Corrupt full osdmap on RBD Kernel Image mount (Jewel 10.2.2)

2016-08-24 Thread Ivan Grcic
Dear Cephers, For some time now I have been running a small Ceph cluster made of 4 OSD + 1 MON servers, and evaluating possible Ceph usages in our storage infrastructure. Until a few weeks ago I was running the Hammer release, using mostly RBD clients mounting replicated pool images. Everything was running

Re: [ceph-users] RBD Watch Notify for snapshots

2016-08-24 Thread Jason Dillaman
I guess I missed the fact that you were using the "rbd_id" object -- notifications are sent against the image header object "rbd_header.". A notification is only sent prior to creating a snapshot when the exclusive-lock feature is used and an active client owns the lock. Otherwise, you'll just receive an
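
For reference, a quick way to find the header object of a format 2 image and to see who is watching it; a minimal sketch, with pool and image names as placeholders:

    rbd info testpool/myimage | grep block_name_prefix   # e.g. rbd_data.<id>; the header object is rbd_header.<id>
    rados -p testpool listwatchers rbd_header.<id>        # these watchers receive the snapshot notifications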

Re: [ceph-users] RBD Watch Notify for snapshots

2016-08-24 Thread Nick Fisk
> -Original Message- > From: Jason Dillaman [mailto:jdill...@redhat.com] > Sent: 23 August 2016 13:23 > To: Nick Fisk > Cc: ceph-users > Subject: Re: [ceph-users] RBD Watch Notify for snapshots > > Looks good. Since you are re-using the RBD

Re: [ceph-users] latest ceph build questions

2016-08-24 Thread Ilya Dryomov
On Fri, Aug 19, 2016 at 1:21 PM, Dzianis Kahanovich wrote: > Related to fresh ceph build troubles, the main question: > Is cmake now preferred? Or is legacy GNU make still supported too? No, autotools files are about to be removed from the master branch. Older releases will continue

Re: [ceph-users] ONE pg deep-scrub blocks cluster

2016-08-24 Thread Mehmet
Hello guys, the issue still exists :( If we run "ceph pg deep-scrub 0.223", nearly all VMs stop for a while (blocked requests). - We already replaced the OSDs (SAS disks - journal on NVMe) - Removed OSDs so that the acting set for pg 0.223 has changed - Checked the filesystem on the acting OSDs
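
While such a deep-scrub runs, it can help to see where the requests pile up and to throttle the scrub; a hedged sketch (the option values are examples, not recommendations):

    ceph health detail                        # shows which OSDs report blocked requests
    ceph daemon osd.<id> dump_ops_in_flight   # run on the node hosting an acting OSD of pg 0.223
    ceph tell osd.* injectargs '--osd_scrub_sleep 0.1 --osd_scrub_chunk_max 5'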

[ceph-users] Main reason to use Ceph object store compared to filesystem?

2016-08-24 Thread Jasmine Lognnes
Dear all =) If the premise is that there is no filesystem on top of a Ceph cluster, what is the killer argument, from a developer's point of view, for storing your data as objects through a REST API compared to a classic filesystem? Is it any different from one of the many NoSQL / key-value databases?

[ceph-users] ceph-fuse "Transport endpoint is not connected" on Jewel 10.2.2

2016-08-24 Thread Dennis Kramer (DT)
Hi all, Running ceph version 10.2.2 (45107e21c568dd033c2f0a3107dec8f0b0e58374) on Ubuntu 16.04 LTS. Currently I have the weirdest thing: I have a bunch of Linux clients, mostly Debian-based (Ubuntu/Mint). They all use version 10.2.2 of ceph-fuse. I've been running CephFS since Hammer without any

Re: [ceph-users] udev rule to set readahead on Ceph RBD's

2016-08-24 Thread Nick Fisk
> -Original Message- > From: Wido den Hollander [mailto:w...@42on.com] > Sent: 24 August 2016 07:08 > To: Ilya Dryomov ; n...@fisk.me.uk > Cc: ceph-users > Subject: RE: [ceph-users] udev rule to set readahead on Ceph RBD's > > > > Op 23
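
For context, the rule under discussion is along these lines; a sketch of a udev rule that raises readahead on rbd block devices (the file name, match expression and value are assumptions, adjust as needed):

    # /etc/udev/rules.d/99-rbd-readahead.rules
    KERNEL=="rbd*", ACTION=="add|change", ATTR{bdi/read_ahead_kb}="4096"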

[ceph-users] Finding Monitors using SRV DNS record

2016-08-24 Thread Wido den Hollander
Hi Ricardo (and rest), I see that http://tracker.ceph.com/issues/14527 / https://github.com/ceph/ceph/pull/7741 has been merged, which would allow clients and daemons to find their Monitors through DNS. mon_dns_srv_name is set to ceph-mon by default, so if I'm correct, this would work? Let's
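
For reference, with mon_dns_srv_name = ceph-mon the clients would look up SRV records shaped roughly like this; a sketch of the zone entries, with domain and host names as placeholders:

    _ceph-mon._tcp.example.com.  60  IN  SRV  10 60 6789  mon1.example.com.
    _ceph-mon._tcp.example.com.  60  IN  SRV  10 60 6789  mon2.example.com.
    mon1.example.com.            60  IN  A    192.168.1.11
    mon2.example.com.            60  IN  A    192.168.1.12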

Re: [ceph-users] udev rule to set readahead on Ceph RBD's

2016-08-24 Thread Wido den Hollander
> On 23 August 2016 at 22:24, Nick Fisk wrote: > > > > > > -Original Message- > > From: Wido den Hollander [mailto:w...@42on.com] > > Sent: 23 August 2016 19:45 > > To: Ilya Dryomov ; Nick Fisk > > Cc: ceph-users

Re: [ceph-users] phantom osd.0 in osd tree

2016-08-24 Thread Burkhard Linke
Hi, On 08/23/2016 08:19 PM, Reed Dier wrote: Trying to hunt down a mystery osd that appears in the osd tree. The cluster was deployed using ceph-deploy on an admin node, originally 10.2.1 at time of deployment, but since upgraded to 10.2.2. For reference, mons and mds do not live on the osd nodes,