Re: [ceph-users] Question regarding rbd cache

2015-03-03 Thread Jason Dillaman
. The whole object would not be written to the OSDs unless you wrote data to the whole object. -- Jason Dillaman Red Hat dilla...@redhat.com http://www.redhat.com - Original Message - From: "Xu (Simon) Chen" To: ceph-users@lists.ceph.com Sent: Wednesday, February 25,

Re: [ceph-users] qemu-kvm and cloned rbd image

2015-03-03 Thread Jason Dillaman
/projects/rbd/issues? Thanks, -- Jason Dillaman Red Hat dilla...@redhat.com http://www.redhat.com - Original Message - From: "koukou73gr" To: ceph-users@lists.ceph.com Sent: Monday, March 2, 2015 7:16:08 AM Subject: [ceph-users] qemu-kvm and cloned rbd image Hello

Re: [ceph-users] import-diff requires snapshot exists?

2015-03-03 Thread Jason Dillaman
** rbd/small and backup/small are now consistent through snap2. import-diff automatically created backup/small@snap2 after importing all changes. -- Jason Dillaman Red Hat dilla...@redhat.com http://www.redhat.com - Original Message - From: "Steve Anthony" To:
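
For reference, a minimal sketch of one incremental round built on that behaviour, using the image names from this thread:

  # take a new snapshot on the source, then ship only the delta since the
  # previous snapshot; import-diff recreates small@snap2 on the backup image
  $ rbd snap create rbd/small@snap2
  $ rbd export-diff --from-snap snap1 rbd/small@snap2 - | rbd import-diff - backup/small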

Re: [ceph-users] Rbd image's data deletion

2015-03-04 Thread Jason Dillaman
An RBD image is split up into (by default 4MB) objects within the OSDs. When you delete an RBD image, all the objects associated with the image are removed from the OSDs. The objects are not securely erased from the OSDs if that is what you are asking. -- Jason Dillaman Red Hat dilla
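
That mapping is easy to inspect from the CLI; a small sketch, assuming a pool "rbd" and an image "myimage" (both hypothetical names):

  # block_name_prefix identifies the RADOS objects that back the image
  $ prefix=$(rbd info rbd/myimage | awk -F': ' '/block_name_prefix/ {print $2}')
  # only extents that have actually been written have a backing object
  $ rados -p rbd ls | grep -c "^$prefix"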

Re: [ceph-users] rbd: incorrect metadata

2015-04-13 Thread Jason Dillaman
s rbd_directory/rbd_children" to see the data within the files. -- Jason Dillaman Red Hat dilla...@redhat.com http://www.redhat.com - Original Message - From: "Matthew Monaco" To: ceph-users@lists.ceph.com Sent: Sunday, April 12, 2015 10:57:46 PM Subject: [ceph-use

Re: [ceph-users] rbd: incorrect metadata

2015-04-13 Thread Jason Dillaman
Yes, when you flatten an image, the snapshots will remain associated to the original parent. This is a side-effect from how librbd handles CoW with clones. There is an open RBD feature request to add support for flattening snapshots as well. -- Jason Dillaman Red Hat dilla

Re: [ceph-users] rbd: incorrect metadata

2015-04-14 Thread Jason Dillaman
ldren object so that librbd no longer thinks any image is a child of another. -- Jason Dillaman Red Hat dilla...@redhat.com http://www.redhat.com - Original Message - From: "Matthew Monaco" To: "Jason Dillaman" Cc: ceph-users@lists.ceph.com Sent: Monday, Apri

Re: [ceph-users] hammer (0.94.1) - "image must support layering(38) Function not implemented" on v2 image

2015-04-20 Thread Jason Dillaman
Can you add "debug rbd = 20" to your ceph.conf, re-run the command, and provide a link to the generated librbd log messages? Thanks, -- Jason Dillaman Red Hat dilla...@redhat.com http://www.redhat.com - Original Message - From: "Nikola Ciprich" To: ceph-users
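
For reference, a minimal sketch of that setting on the client side of ceph.conf (the log path is an example):

  [client]
      debug rbd = 20
      log file = /var/log/ceph/client.$name.$pid.log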

Re: [ceph-users] hammer (0.94.1) - "image must support layering(38) Function not implemented" on v2 image

2015-04-20 Thread Jason Dillaman
'--image-features' when creating the image? -- Jason Dillaman Red Hat dilla...@redhat.com http://www.redhat.com - Original Message - From: "Nikola Ciprich" To: "Jason Dillaman" Cc: ceph-users@lists.ceph.com Sent: Monday, April 20, 2015 12:41:26 PM

Re: [ceph-users] Use object-map Feature on existing rbd images ?

2015-04-29 Thread Jason Dillaman
into Hammer at some point in the future. Therefore, I would recommend waiting for the full toolset to become available. -- Jason Dillaman Red Hat dilla...@redhat.com http://www.redhat.com - Original Message - From: "Christoph Adomeit" To: ceph-users@lists.ceph.com Sent: Tuesda

Re: [ceph-users] RBD storage pool support in Libvirt not enabled on CentOS

2015-04-30 Thread Jason Dillaman
The issue appears to be tracked with the following BZ for RHEL 7: https://bugzilla.redhat.com/show_bug.cgi?id=1187533 -- Jason Dillaman Red Hat dilla...@redhat.com http://www.redhat.com - Original Message - From: "Wido den Hollander" To: "Somnath Ro

Re: [ceph-users] wrong diff-export format description

2015-05-07 Thread Jason Dillaman
You are correct -- it is little endian like the other values. I'll open a ticket to correct the document. -- Jason Dillaman Red Hat dilla...@redhat.com http://www.redhat.com - Original Message - From: "Ultral" To: ceph-us...@ceph.com Sent: Thursday, May 7,

Re: [ceph-users] export-diff exported only 4kb instead of 200-600gb

2015-05-08 Thread Jason Dillaman
two snapshots and no trim operations released your changes back? If you diff from move2db24-20150428 to HEAD, do you see all your changes? -- Jason Dillaman Red Hat dilla...@redhat.com http://www.redhat.com - Original Message - From: "Ultral" To: "ceph-users&qu

Re: [ceph-users] export-diff exported only 4kb instead of 200-600gb

2015-05-12 Thread Jason Dillaman
a few kilobytes of deltas)? Also, would it be possible for you to create a new test image in the same pool, snapshot it, use 'rbd bench-write' to generate some data, and then verify that export-diff is working properly against the new image? -- Jason Dillaman Red Hat dilla..
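
A sketch of that verification on a throwaway image (names and sizes are examples):

  $ rbd create --size 1024 rbd/difftest
  $ rbd snap create rbd/difftest@before
  $ rbd bench-write rbd/difftest --io-size 4096 --io-total 104857600
  # the diff from 'before' to HEAD should now be roughly 100 MB, not a few KB
  $ rbd export-diff --from-snap before rbd/difftest - | wc -c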

Re: [ceph-users] RBD images -- parent snapshot missing (help!)

2015-05-13 Thread Jason Dillaman
/master/install/get-packages/#add-ceph-development -- Jason Dillaman Red Hat dilla...@redhat.com http://www.redhat.com - Original Message - From: "Pavel V. Kaygorodov" To: "Tuomas Juntunen" Cc: "ceph-users" Sent: Tuesday, May 12, 2015 3:55:21 PM Subjec

Re: [ceph-users] export-diff exported only 4kb instead of 200-600gb

2015-05-14 Thread Jason Dillaman
e your issues on Giant and was unable to recreate it. I would normally ask for a log dump with 'debug rbd = 20', but given the size of your image, that log will be astronomically large. -- Jason Dillaman Red Hat dilla...@redhat.com http://www.redhat.com - Original Message ---

Re: [ceph-users] rbd cache + libvirt

2015-06-08 Thread Jason Dillaman
th/to/my/new/ceph.conf" QEMU parameter where the RBD cache is explicitly disabled [2]. [1] http://git.qemu.org/?p=qemu.git;a=blob;f=block/rbd.c;h=fbe87e035b12aab2e96093922a83a3545738b68f;hb=HEAD#l478 [2] http://ceph.com/docs/master/rbd/qemu-rbd/#usage -- Jason Dillaman Red

Re: [ceph-users] rbd cache + libvirt

2015-06-08 Thread Jason Dillaman
> On Mon, Jun 8, 2015 at 10:43 PM, Josh Durgin wrote: > > On 06/08/2015 11:19 AM, Alexandre DERUMIER wrote: > >> > >> Hi, > > looking at the latest version of QEMU, > >> > >> > >> It's seem that it's was already this behaviour since the add of rbd_cache > >> parsing in rbd.c by josh in 2

Re: [ceph-users] rbd_cache, limiting read on high iops around 40k

2015-06-09 Thread Jason Dillaman
> In the past we've hit some performance issues with RBD cache that we've > fixed, but we've never really tried pushing a single VM beyond 40+K read > IOPS in testing (or at least I never have). I suspect there's a couple > of possibilities as to why it might be slower, but perhaps joshd can > chi

Re: [ceph-users] Best method to limit snapshot/clone space overhead

2015-07-24 Thread Jason Dillaman
cs (or can you gather any statistics) that indicate the percentage of block-size, zeroed extents within the clone images' RADOS objects? If there is a large amount of waste, it might be possible / worthwhile to optimize how RBD handles copy-on-write operations against the clone. -- Jas

Re: [ceph-users] Best method to limit snapshot/clone space overhead

2015-07-27 Thread Jason Dillaman
will locate all associated RADOS objects, download the objects one at a time, and perform a scan for fully zeroed blocks. It's not the most CPU efficient script, but it should get the job done. [1] http://fpaste.org/248755/43803526/ -- Jason Dillaman Red Hat Ceph Storage Engineering dilla
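
The pasted script is the reference version; below is a much cruder shell sketch of the same idea that only flags objects which are zero from end to end (pool and image names are examples):

  $ prefix=$(rbd info rbd/clone1 | awk -F': ' '/block_name_prefix/ {print $2}')
  $ for obj in $(rados -p rbd ls | grep "^$prefix"); do
        rados -p rbd get "$obj" /tmp/obj.bin
        # an object identical to an equal-length run of zeros is pure waste
        if cmp -s /tmp/obj.bin <(head -c "$(stat -c%s /tmp/obj.bin)" /dev/zero); then
            echo "$obj is fully zeroed"
        fi
    done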

Re: [ceph-users] rbd rename snaps?

2015-08-12 Thread Jason Dillaman
There currently is no mechanism to rename snapshots without hex editing the RBD image header data structure. I created a new Ceph feature request [1] to add this ability in the future. [1] http://tracker.ceph.com/issues/12678 -- Jason Dillaman Red Hat Ceph Storage Engineering dilla

Re: [ceph-users] Rados: Undefined symbol error

2015-08-21 Thread Jason Dillaman
It sounds like you have rados CLI tool from an earlier Ceph release (< Hammer) installed and it is attempting to use the librados shared library from a newer (>= Hammer) version of Ceph. Jason - Original Message - > From: "Aakanksha Pudipeddi-SSI" > To: ceph-us...@ceph.com > Sent:

Re: [ceph-users] rbd du

2015-08-24 Thread Jason Dillaman
That rbd CLI command is a new feature that will be included with the upcoming infernalis release. In the meantime, you can use this approach [1] to estimate your RBD image usage. [1] http://ceph.com/planet/real-size-of-a-ceph-rbd-image/ -- Jason Dillaman Red Hat Ceph Storage Engineering
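
The approach in that article boils down to summing the extent lengths reported by 'rbd diff'; a sketch, assuming an image rbd/myimage (hypothetical name):

  $ rbd diff rbd/myimage | awk '{ SUM += $2 } END { print SUM/1024/1024 " MB" }'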

Re: [ceph-users] How to disable object-map and exclusive features ?

2015-08-31 Thread Jason Dillaman
Unfortunately, the tool to dynamically enable/disable image features (rbd feature disable) was added during the Infernalis development cycle. Therefore, in the short term you would need to recreate the images via export/import or clone/flatten. There are several object map / exclusive loc
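
A sketch of the export/import route (image names are examples; do it while the image is not in use, and note it needs scratch space for the full export). On hammer, a freshly imported format 2 image gets the default feature set, which does not include object-map or exclusive-lock:

  $ rbd export rbd/vm01 vm01.img
  $ rbd import --image-format 2 vm01.img rbd/vm01.new
  $ rbd rm rbd/vm01
  $ rbd rename rbd/vm01.new rbd/vm01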

Re: [ceph-users] RBD command crash & can't delete volume!

2014-11-07 Thread Jason Dillaman
that issue? -- Jason Dillaman Red Hat dilla...@redhat.com http://www.redhat.com - Original Message - From: "Chu Duc Minh" To: ceph-de...@vger.kernel.org, "ceph-users@lists.ceph.com >> ceph-users@lists.ceph.com" Sent: Friday, November 7, 2014 7:05:5

Re: [ceph-users] RBD - possible to query "used space" of images/clones ?

2014-11-07 Thread Jason Dillaman
In the longer term, there is an in-progress RBD feature request to add a new RBD command to see image disk usage: http://tracker.ceph.com/issues/7746 -- Jason Dillaman Red Hat dilla...@redhat.com http://www.redhat.com - Original Message - From: "Sébastien Han" T

Re: [ceph-users] How to disable object-map and exclusive features ?

2015-09-04 Thread Jason Dillaman
> I have a coredump with the size of 1200M compressed . > > Where shall i put the dump ? > I believe you can use the ceph-post-file utility [1] to upload the core and your current package list to ceph.com. Jason [1] http://ceph.com/docs/master/man/8/ceph-post-file/ __

Re: [ceph-users] crash on rbd bench-write

2015-09-04 Thread Jason Dillaman
Any particular reason why you have the image mounted via the kernel client while performing a benchmark? Not to say this is the reason for the crash, but it is strange, since 'rbd bench-write' uses the user-mode library and so will not test kernel IO speed. Are you able to test bench-write with

Re: [ceph-users] crash on rbd bench-write

2015-09-08 Thread Jason Dillaman
> The client version is what was installed by the ceph-deploy install > ceph-client command. Via the debian-hammer repo. Per the quickstart doc. > Are you saying I need to install a different client version somehow? You listed the version as 0.80.10 which is a Ceph Firefly release -- Hammer is 0.

Re: [ceph-users] lttng duplicate registration problem when using librados2 and libradosstriper

2015-09-21 Thread Jason Dillaman
This is usually indicative of the same tracepoint event being included by both a static and dynamic library. See the following thread regarding this issue within Ceph when LTTng-ust was first integrated [1]. Since I don't have any insight into your application, are you somehow linking against

Re: [ceph-users] rbd and exclusive lock feature

2015-09-22 Thread Jason Dillaman
ifying the image while at the same time not crippling other use cases. librbd also supports cooperative exclusive lock transfer, which is used in the case of qemu VM migrations where the image needs to be opened R/W by two clients at the same time. -- Jason Dillaman - Original Mes

Re: [ceph-users] lttng duplicate registration problem when using librados2 and libradosstriper

2015-09-22 Thread Jason Dillaman
You can run the program under 'gdb' with a breakpoint on the 'abort' function to catch the program's abnormal exit. Assuming you have debug symbols installed, you should hopefully be able to see which probe is being re-registered. -- Jason Dillaman - Orig
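
A sketch of such a gdb session (the program name is an example):

  $ gdb --args ./my_rados_app
  (gdb) break abort
  (gdb) run
  ... LTTng-ust prints the duplicate registration message, then hits the breakpoint ...
  (gdb) backtrace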

Re: [ceph-users] lttng duplicate registration problem when using librados2 and libradosstriper

2015-09-22 Thread Jason Dillaman
As a background, I believe LTTng-UST is disabled for RHEL7 in the Ceph project only due to the fact that EPEL 7 doesn't provide the required packages [1]. [1] https://bugzilla.redhat.com/show_bug.cgi?id=1235461 -- Jason Dillaman - Original Message - > From: "Paul Man

Re: [ceph-users] lttng duplicate registration problem when using librados2 and libradosstriper

2015-09-22 Thread Jason Dillaman
> On 22/09/15 17:46, Jason Dillaman wrote: > > As a background, I believe LTTng-UST is disabled for RHEL7 in the Ceph > > project only due to the fact that EPEL 7 doesn't provide the required > > packages [1]. > > interesting. so basically our program migh

Re: [ceph-users] rbd and exclusive lock feature

2015-09-22 Thread Jason Dillaman
ourself. The new exclusive-lock feature is managed via 'rbd feature enable/disable' commands and does ensure that only the current lock owner can manipulate the RBD image. It was introduced to support the RBD object map feature (which can track which backing RADOS objects are in-use in order

Re: [ceph-users] lttng duplicate registration problem when using librados2 and libradosstriper

2015-09-23 Thread Jason Dillaman
It looks like the issue you are experiencing was fixed in the Infernalis/master branches [1]. I've opened a new tracker ticket to backport the fix to Hammer [2]. -- Jason Dillaman [1] https://github.com/sponce/ceph/commit/e4c27d804834b4a8bc495095ccf5103f8ffbcc1e [2]

Re: [ceph-users] rbd map failing for image with exclusive-lock feature

2015-09-24 Thread Jason Dillaman
approach via "rbd lock add/remove" to verify that no other client has the image mounted before attempting to mount it locally. -- Jason Dillaman - Original Message - > From: "Allen Liao" > To: ceph-users@lists.ceph.com > Sent: Wednesday, September 23, 201
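
A sketch of that advisory-lock pattern around a kernel map (image, lock id, and locker are examples; the lock is purely cooperative, so every client must follow the same convention):

  $ rbd lock add rbd/vm01 $(hostname)
  $ rbd lock list rbd/vm01              # bail out if someone else already holds a lock
  $ rbd map rbd/vm01
  ... use /dev/rbd0 ...
  $ rbd unmap /dev/rbd0
  $ rbd lock remove rbd/vm01 $(hostname) client.4174   # locker id comes from 'rbd lock list'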

Re: [ceph-users] possibility to delete all zeros

2015-10-02 Thread Jason Dillaman
The following advice assumes these images don't have associated snapshots (since keeping the non-sparse snapshots will keep utilizing the storage space): Depending on how you have your images set up, you could snapshot and clone the images, flatten the newly created clone, and delete the origina
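
A sketch of that sequence for one image (names are examples; it assumes a format 2 image that is not currently in use):

  $ rbd snap create rbd/vm01@sparsify
  $ rbd snap protect rbd/vm01@sparsify
  $ rbd clone rbd/vm01@sparsify rbd/vm01.sparse
  $ rbd flatten rbd/vm01.sparse          # detaches the clone from the parent
  $ rbd snap unprotect rbd/vm01@sparsify
  $ rbd snap rm rbd/vm01@sparsify
  $ rbd rm rbd/vm01
  $ rbd rename rbd/vm01.sparse rbd/vm01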

Re: [ceph-users] Annoying libust warning on ceph reload

2015-10-08 Thread Jason Dillaman
isn't enabled. [1] https://github.com/ceph/ceph/pull/6135 -- Jason Dillaman - Original Message - > From: "Ken Dreyer" > To: "Goncalo Borges" > Cc: ceph-users@lists.ceph.com > Sent: Thursday, October 8, 2015 11:58:27 AM > Subject: Re: [ceph-users] A

Re: [ceph-users] how to get cow usage of a clone

2015-10-09 Thread Jason Dillaman
mental, you could install the infernalis-based rbd tools from the Ceph gitbuilder [1] into a sandbox environment and use the tool against your pre-infernalis cluster. [1] http://ceph.com/gitbuilder.cgi -- Jason Dillaman - Original Message - > From: "Corin Langosch" >

Re: [ceph-users] How expensive are 'rbd ls' and 'rbd snap ls' calls?

2015-10-12 Thread Jason Dillaman
o the object, so they will be read via LevelDB or RocksDB (depending on your configuration) within the object's PG's OSD. -- Jason Dillaman - Original Message - > From: "Allen Liao" > To: ceph-users@lists.ceph.com > Sent: Monday, October 12, 2015 2:52

Re: [ceph-users] Ceph journal - isn't it a bit redundant sometimes?

2015-10-14 Thread Jason Dillaman
ite operations by decoupling objects from the underlying filesystem's actual storage path. [1] https://github.com/ceph/ceph/blob/master/doc/rados/configuration/journal-ref.rst -- Jason Dillaman ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] Ceph journal - isn't it a bit redundant sometimes?

2015-10-19 Thread Jason Dillaman
uncate, overwrite, etc). -- Jason Dillaman ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] How ceph client abort IO

2015-10-20 Thread Jason Dillaman
There is no such interface currently on the librados / OSD side to abort IO operations. Can you provide some background on your use-case for aborting in-flight IOs? -- Jason Dillaman - Original Message - > From: "min fang" > To: ceph-users@lists.ceph.com > Se

Re: [ceph-users] rbd export hangs / does nothing without regular drop_cache

2015-10-20 Thread Jason Dillaman
Can you provide more details on your setup and how you are running the rbd export? If clearing the pagecache, dentries, and inodes solves the issue, it sounds like it's outside of Ceph (unless you are exporting to a CephFS or krbd mount point). -- Jason Dillaman - Original Me

Re: [ceph-users] How ceph client abort IO

2015-10-21 Thread Jason Dillaman
> On Tue, 20 Oct 2015, Jason Dillaman wrote: > > There is no such interface currently on the librados / OSD side to abort > > IO operations. Can you provide some background on your use-case for > > aborting in-flight IOs? > > The internal Objecter has a cancel interf

Re: [ceph-users] [urgent] KVM issues after upgrade to 0.94.4

2015-10-21 Thread Jason Dillaman
] http://tracker.ceph.com/issues/13559 -- Jason Dillaman - Original Message - > From: "Andrei Mikhailovsky" > To: ceph-us...@ceph.com > Sent: Wednesday, October 21, 2015 8:17:39 AM > Subject: [ceph-users] [urgent] KVM issues after upgrade to 0.94.4 > Hello

Re: [ceph-users] [urgent] KVM issues after upgrade to 0.94.4

2015-10-21 Thread Jason Dillaman
command-line properties [1]. If you have "rbd cache = true" in your ceph.conf, it would override "cache=none" in your qemu command-line. [1] https://lists.nongnu.org/archive/html/qemu-devel/2015-06/msg03078.html -- Jason Dillaman
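
One way to keep the two layers in agreement is to point QEMU at a conf file whose cache setting matches the drive's cache mode, as mentioned earlier in this archive; a sketch (paths and image names are examples):

  # /etc/ceph/ceph-nocache.conf contains:
  #   [client]
  #       rbd cache = false
  -drive format=rbd,file=rbd:rbd/vm01:conf=/etc/ceph/ceph-nocache.conf,cache=none,if=virtio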

Re: [ceph-users] how to understand deep flatten implementation

2015-10-22 Thread Jason Dillaman
afe to detach a clone from a parent image even if snapshots exist due to the changes to copyup. -- Jason Dillaman - Original Message - > From: "Zhongyan Gu" > To: dilla...@redhat.com > Sent: Thursday, October 22, 2015 5:11:56 AM > Subject: how to understand deep

Re: [ceph-users] how to understand deep flatten implementation

2015-10-23 Thread Jason Dillaman
ter flatten, child > snapshot still has parent snap info? > overlap: 1024 MB Because deep-flatten wasn't enabled on the clone. > Another question is since deep-flatten operations are applied to cloned > image, why we need to create p

Re: [ceph-users] Not possible to remove cache tier with RBDs open?

2015-10-26 Thread Jason Dillaman
would immediately race to re-establish the lost watch/notify connection before you could disassociate the cache tier. -- Jason Dillaman - Original Message - > From: "Robert LeBlanc" > To: ceph-users@lists.ceph.com > Sent: Monday, October 26, 2015 12:22:06 PM > Subject

Re: [ceph-users] Question about rbd flag(RBD_FLAG_OBJECT_MAP_INVALID)

2015-10-27 Thread Jason Dillaman
> Hi Jason Dillaman > Recently I worked on the feature http://tracker.ceph.com/issues/13500 ; when > I read the librbd code, I was confused by the RBD_FLAG_OBJECT_MAP_INVALID > flag. > When I create an rbd image with "--image-features = 13", we enable the object-map > featu

Re: [ceph-users] Question about rbd flag(RBD_FLAG_OBJECT_MAP_INVALID)

2015-10-28 Thread Jason Dillaman
r its been enabled. -- Jason Dillaman ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] Core dump while getting a volume real size with a python script

2015-10-29 Thread Jason Dillaman
It sounds like you ran into this issue [1]. It's been fixed in upstream master and infernalis branches, but the backport is still awaiting release on hammer. [1] http://tracker.ceph.com/issues/12885 -- Jason Dillaman - Original Message - > From: "Giuseppe Civitella&

Re: [ceph-users] rbd hang

2015-10-29 Thread Jason Dillaman
I don't see the read request hitting the wire, so I am thinking your client cannot talk to the primary PG for the 'rb.0.16cf.238e1f29.' object. Try adding "debug objecter = 20" to your configuration to get more details. -- Jason Dillaman - Orig

Re: [ceph-users] segmentation fault when using librbd interface

2015-11-02 Thread Jason Dillaman
I'd recommend running your program through valgrind first to see if something pops out immediately. -- Jason Dillaman - Original Message - > From: "min fang" > To: ceph-users@lists.ceph.com > Sent: Saturday, October 31, 2015 10:43:22 PM > Subject: Re

Re: [ceph-users] Cloudstack agent crashed JVM with exception in librbd

2015-11-02 Thread Jason Dillaman
Most likely not going to be related to 13045 since you aren't actively exporting an image diff. The most likely problem is that the RADOS IO context is being closed prior to closing the RBD image. -- Jason Dillaman - Original Message - > From: "Voloshanenko Igor&

Re: [ceph-users] Cloudstack agent crashed JVM with exception in librbd

2015-11-02 Thread Jason Dillaman
I can't say that I know too much about Cloudstack's integration with RBD to offer much assistance. Perhaps if Cloudstack is receiving an exception for something, it is not properly handling object lifetimes in this case. -- Jason Dillaman - Original Message

Re: [ceph-users] Can snapshot of image still be used while flattening the image?

2015-11-04 Thread Jason Dillaman
-- Jason Dillaman - Original Message - > From: "Jackie" > To: ceph-users@lists.ceph.com > Sent: Tuesday, November 3, 2015 8:47:19 PM > Subject: [ceph-users] Can snapshot of image still be used while flattening > the image? > > Hi experts, >

Re: [ceph-users] Can snapshot of image still be used while flattening the image?

2015-11-04 Thread Jason Dillaman
all users of image2 (and its descendants) that its parent has been removed. If you had a clone of image2 open at the time, the clone of image2 would then know it would no longer need to access image1 since the link from image1 to image2 was removed. -- Jason Dillaman > - Origi

Re: [ceph-users] iSCSI over RDB is a good idea ?

2015-11-04 Thread Jason Dillaman
/wiki/Clustered_SCSI_target_using_RBD -- Jason Dillaman - Original Message - > From: "Gaetan SLONGO" > To: ceph-users@lists.ceph.com > Sent: Tuesday, November 3, 2015 10:00:59 AM > Subject: [ceph-users] iSCSI over RDB is a good idea ? > > Dear Ceph

Re: [ceph-users] iSCSI over RDB is a good idea ?

2015-11-05 Thread Jason Dillaman
, it appears that oVirt might even have some development to containerize a small Cinder/Glance OpenStack setup [2]. [1] https://www.youtube.com/watch?v=elEkGfjLITs [2] http://www.ovirt.org/CinderGlance_Docker_Integration -- Jason Dillaman Red Hat Ceph Storage Engineering dilla...@redhat.com

Re: [ceph-users] rbd hang

2015-11-05 Thread Jason Dillaman
Can you retry with 'rbd --rbd-cache=false -p images export joe /root/joe.raw'? -- Jason Dillaman - Original Message - > From: "Joe Ryner" > To: "Jason Dillaman" > Cc: ceph-us...@ceph.com > Sent: Thursday, November 5, 2015 4:14:28 PM &g

Re: [ceph-users] rbd hang

2015-11-05 Thread Jason Dillaman
request than your cache can allocate. [1] http://tracker.ceph.com/issues/13388 -- Jason Dillaman - Original Message - > From: "Joe Ryner" > To: "Jason Dillaman" > Cc: ceph-us...@ceph.com > Sent: Thursday, November 5, 2015 4:24:29 PM > Subject: Re: [

Re: [ceph-users] rbd hang

2015-11-05 Thread Jason Dillaman
On the bright side, at least your week of export-related pain should result in a decent speed boost when your clients get 64MB of cache instead of 64B. -- Jason Dillaman - Original Message - > From: "Joe Ryner" > To: "Jason Dillaman" > Cc: ceph-us
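
The client cache options are specified in bytes, which is how a "64" intended as megabytes ends up as 64 bytes; a sketch with explicit values (the numbers are examples, not recommendations):

  [client]
      rbd cache = true
      rbd cache size = 67108864          # 64 MB
      rbd cache max dirty = 50331648     # 48 MB
      rbd cache target dirty = 33554432  # 32 MB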

Re: [ceph-users] Multiple Cache Pool with Single Storage Pool

2015-11-09 Thread Jason Dillaman
volume internal to the VM. -- Jason Dillaman - Original Message - > From: "Lazuardi Nasution" > To: ceph-users@lists.ceph.com > Sent: Sunday, November 8, 2015 12:34:16 PM > Subject: [ceph-users] Multiple Cache Pool with Single Storage Pool > Hi, > I&#

Re: [ceph-users] python binding - snap rollback - progress reporting

2015-11-09 Thread Jason Dillaman
. This is actually what librbd does internally for the C interface. -- Jason Dillaman - Original Message - > From: "Nikola Ciprich" > To: "ceph-users" > Sent: Sunday, November 8, 2015 4:27:13 AM > Subject: [ceph-users] python binding - snap rollback

Re: [ceph-users] Multiple Cache Pool with Single Storage Pool

2015-11-09 Thread Jason Dillaman
H map. You are correct that by using a local (host) persistent cache, you have effectively removed the ability to safely live-migrate. -- Jason Dillaman - Original Message - > From: "Lazuardi Nasution" > To: "Jason Dillaman" > Cc: ceph-users@lists.ceph.co

Re: [ceph-users] rbd create => seg fault

2015-11-12 Thread Jason Dillaman
I've seen this issue before when you (somehow) mix-and-match librbd, librados, and rbd builds on the same machine. The packaging should prevent you from mixing versions, but perhaps somehow you have different package versions installed. -- Jason Dillaman - Original Me

Re: [ceph-users] After flattening the children image, snapshot still can not be unprotected

2015-11-19 Thread Jason Dillaman
Does child image "images/0a38b10d-2184-40fc-82b8-8bbd459d62d2" have snapshots? -- Jason Dillaman - Original Message - > From: "Jackie" > To: ceph-users@lists.ceph.com > Sent: Thursday, November 19, 2015 12:05:12 AM > Subject: [ceph-users] Aft

Re: [ceph-users] librbd - threads grow with each Image object

2015-11-23 Thread Jason Dillaman
Couldn't hurt to open a feature request for this on the tracker. -- Jason Dillaman - Original Message - > From: "Haomai Wang" > To: "Allen Liao" > Cc: ceph-users@lists.ceph.com > Sent: Saturday, November 21, 2015 11:57:11 AM > Subject: Re:

Re: [ceph-users] rbd_inst.create

2015-11-30 Thread Jason Dillaman
RBD_FEATURE_STRIPINGV2 = 2 RBD_FEATURE_EXCLUSIVE_LOCK = 4 RBD_FEATURE_OBJECT_MAP = 8 RBD_FEATURE_FAST_DIFF = 16 RBD_FEATURE_DEEP_FLATTEN = 32 RBD_FEATURE_JOURNALING = 64 -- Jason Dillaman - Original Message - > From: "Gregory Farnum" > To: "NEVEU Stephane"
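
These are bit flags, so the value handed to image creation is simply their sum; a sketch (image name and size are examples):

  # layering (1) + exclusive-lock (4) + object-map (8) = 13
  $ rbd create --size 10240 --image-format 2 --image-features 13 rbd/objmap-test
  $ rbd info rbd/objmap-test | grep features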

Re: [ceph-users] rbd_inst.create

2015-12-07 Thread Jason Dillaman
lock_exclusive() / lock_shared() methods are not related to image watchers. Instead, it is tied to the advisory locking mechanism -- and list_lockers() can be used to query who has a lock. -- Jason Dillaman - Original Message - > From: "NEVEU Stephane" > To:

Re: [ceph-users] "rbd diff" disparity vs mounted usage

2016-04-27 Thread Jason Dillaman
On Wed, Apr 27, 2016 at 2:07 PM, Tyler Wilson wrote: > $ rbd diff backup/cd4e5d37-3023-4640-be5a-5577d3f9307e | awk '{ SUM += $2 } > END { print SUM/1024/1024 " MB" }' > 49345.4 MB Is this a cloned image? That awk trick doesn't account for discarded regions (i.e. when column three says "zero" in
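
A corrected form of the one-liner that counts only written extents, skipping rows whose third column is "zero":

  $ rbd diff backup/cd4e5d37-3023-4640-be5a-5577d3f9307e | \
        awk '$3 == "data" { SUM += $2 } END { print SUM/1024/1024 " MB" }'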

Re: [ceph-users] "rbd diff" disparity vs mounted usage

2016-04-27 Thread Jason Dillaman
/1024 " MB" }' > 49345.4 MB > > Thanks for the help. > > On Wed, Apr 27, 2016 at 12:22 PM, Jason Dillaman > wrote: >> >> On Wed, Apr 27, 2016 at 2:07 PM, Tyler Wilson >> wrote: >> > $ rbd diff backup/cd4e5d37-3023-4640-be5a-5577d3f9

Re: [ceph-users] snaps & consistency group

2016-05-02 Thread Jason Dillaman
There is currently no support for snapshot consistency groups within RBD; however, that support is being developed for the Ceph Kraken release. On Sun, May 1, 2016 at 11:04 AM, Yair Magnezi wrote: > Hello Guys . > > I'm a little bit confused about ceph

Re: [ceph-users] snaps & consistency group

2016-05-03 Thread Jason Dillaman
On Tue, May 3, 2016 at 3:20 AM, Yair Magnezi wrote: > Are RBD volume consistency groups supported in Jewel? Can we take > consistent snapshots for a volume consistency group? No, this feature is being actively worked on for the Kraken release of Ceph (the next major release after Jewel). -- J

Re: [ceph-users] Hammer vs Jewel librbd performance testing and git bisection results

2016-05-11 Thread Jason Dillaman
Awesome work Mark! Comments / questions inline below: On Wed, May 11, 2016 at 9:21 AM, Mark Nelson wrote: > There are several commits of interest that have a noticeable effect on 128K > sequential read performance: > > > 1) https://github.com/ceph/ceph/commit/3a7b5e3 > > This commit was the firs

Re: [ceph-users] Hammer vs Jewel librbd performance testing and git bisection results

2016-05-11 Thread Jason Dillaman
On Wed, May 11, 2016 at 10:07 AM, Mark Nelson wrote: > Perhaps 0024677 or 3ad19ae introduced another regression that was being > masked by c474e4 and when 66e7464 improved the situation, the other > regression appeared? 0024677 is in Hammer as 7004149 and 3ad19ae is in Hammer as b38da480. I opene

Re: [ceph-users] Try to find the right way to enable rbd-mirror.

2016-05-12 Thread Jason Dillaman
On Thu, May 12, 2016 at 6:33 AM, Mika c wrote: > 4.) Both sites installed "rbd-mirror". > Start daemon "rbd-mirror" . > On site1:$sudo rbd-mirror -m 192.168.168.21:6789 > On site2:$sudo rbd-mirror -m 192.168.168.22:6789 Assuming you keep "ceph" as the local cluster name and u

Re: [ceph-users] Try to find the right way to enable rbd-mirror.

2016-05-13 Thread Jason Dillaman
On Fri, May 13, 2016 at 6:39 AM, Mika c wrote: > Hi Dillaman, > Thank you for getting back to me. > My system is Ubuntu, so I am using "sudo rbd-mirror --cluster=local > --log-file=mirror.log --debug-rbd-mirror=20/5" instead. I read your > reply but am still confused. For upstart systems, you can r
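
For reference, a rough sketch of a one-way Jewel setup, with the daemon running against the destination cluster (cluster and pool names are examples):

  # enable pool-level mirroring on both clusters, then register site1 as a peer of site2
  $ rbd --cluster site1 mirror pool enable rbd pool
  $ rbd --cluster site2 mirror pool enable rbd pool
  $ rbd --cluster site2 mirror pool peer add rbd client.admin@site1
  # the daemon connects to its local cluster and pulls from the peer
  $ sudo rbd-mirror --cluster=site2 --log-file=/var/log/ceph/rbd-mirror.log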

[ceph-users] Maximum RBD image name length

2016-05-19 Thread Jason Dillaman
As of today, neither the rbd CLI nor librbd imposes any limit on the maximum length of an RBD image name, whereas krbd has roughly a 100 character limit and the OSDs have a default object name limit of roughly 2000 characters. While there is a patch under review to increase the krbd limit, it would

Re: [ceph-users] Enabling hammer rbd features on cluster with a few dumpling clients

2016-05-19 Thread Jason Dillaman
On Thu, May 19, 2016 at 12:15 PM, Dan van der Ster wrote: > I hope it will just refuse to > attach, rather than attach but allow bad stuff to happen. You are correct -- older librbd/krbd clients will refuse to open images that have unsupported features enabled. -- Jason

Re: [ceph-users] help removing an rbd image?

2016-05-24 Thread Jason Dillaman
Any chance you are using cache tiering? It's odd that you can see the objects through "rados ls" but cannot delete them with "rados rm". On Tue, May 24, 2016 at 4:34 PM, Kevan Rehm wrote: > Greetings, > > I have a small Ceph 10.2.1 test cluster using a 3-replicate pool based on 24 > SSDs configu

Re: [ceph-users] rbd mirror : space and io requirements ?

2016-06-02 Thread Jason Dillaman
On Wed, Jun 1, 2016 at 8:32 AM, Alexandre DERUMIER wrote: > Hi, > > I'm begin to look at rbd mirror features. > > How much space does it take ? Is it only a journal with some kind of list of > block changes ? There is a per-image journal which is a log of all modifications to the image. The log

Re: [ceph-users] what does the 'rbd watch ' mean?

2016-06-03 Thread Jason Dillaman
That command is used for debugging to show the notifications sent by librbd whenever image properties change. These notifications are used by other librbd clients with the same image open to synchronize state (e.g. a snapshot was created so instruct the other librbd client to refresh the image's h

Re: [ceph-users] Jewel upgrade - rbd errors after upgrade

2016-06-05 Thread Jason Dillaman
Are you able to run the following command successfully? rados -p glebe-sata get rbd_id.hypervtst-lun04 On Sun, Jun 5, 2016 at 8:49 PM, Adrian Saul wrote: > > I upgraded my Infernalis semi-production cluster to Jewel on Friday. While > the upgrade went through smoothly (aside fro

Re: [ceph-users] Jewel upgrade - rbd errors after upgrade

2016-06-05 Thread Jason Dillaman
eph]# rados ls -p glebe-sata|grep rbd_id >> rbd_id.cloud2sql-lun01 >> rbd_id.glbcluster3-vm17 >> rbd_id.holder <<< a create that said it failed while I was debugging this >> rbd_id.pvtcloud-nfs01 >> rbd_id.hypervtst-lun05 >> rbd_id.test02 >> rbd_id.c

Re: [ceph-users] Jewel upgrade - rbd errors after upgrade

2016-06-05 Thread Jason Dillaman
; rbd_id.holder <<< a create that said it failed while I was debugging this >> rbd_id.pvtcloud-nfs01 >> rbd_id.hypervtst-lun05 >> rbd_id.test02 >> rbd_id.cloud2sql-lun02 >> rbd_id.fiotest2 >> rbd_id.radmast02-lun04 >> rbd_id.hypervtst-lun04 >> rbd_i

Re: [ceph-users] Jewel upgrade - rbd errors after upgrade

2016-06-06 Thread Jason Dillaman
I suspect it > might be related to the OSDs being restarted during the package upgrade > process before all libraries are upgraded. > > >> -Original Message- >> From: Jason Dillaman [mailto:jdill...@redhat.com] >> Sent: Monday, 6 June 2016 12:37 PM >>

Re: [ceph-users] RBD rollback error mesage

2016-06-07 Thread Jason Dillaman
Can you run "rbd info" against that image? I suspect it is a harmless but alarming error message. I actually just opened a tracker ticket this morning for a similar issue for rbd-mirror [1] when it bootstraps an image to a peer cluster. In that case, it was a harmless error message that we will

Re: [ceph-users] RBD rollback error mesage

2016-06-07 Thread Jason Dillaman
ect-map, fast-diff, > deep-flatten > flags: > parent: rbd/xenial-base@gold-copy > overlap: 8192 MB > > > Brendan > > > From: Jason Dillaman [jdill...@redhat.com] > Sent: Tuesday, June 07, 2016 6:56 P

Re: [ceph-users] which CentOS 7 kernel is compatible with jewel?

2016-06-13 Thread Jason Dillaman
Alternatively, if you are using RBD format 2 images, you can run "rados -p <pool> listomapvals rbd_directory" to ensure it has a bunch of key/value pairs for your images. There was an issue noted [1] after upgrading to Jewel where the omap values were all missing on several v2 RBD image headers -- resul

Re: [ceph-users] Ceph and Openstack

2016-06-14 Thread Jason Dillaman
On Tue, Jun 14, 2016 at 8:15 AM, Fran Barrera wrote: > 2016-06-14 14:02:54.634 2256 DEBUG glance_store.capabilities [-] Store > glance_store._drivers.rbd.Store doesn't support updating dynamic storage > capabilities. Please overwrite 'update_capabilities' method of the store to > implement updatin

Re: [ceph-users] librados and multithreading

2016-06-14 Thread Jason Dillaman
On Fri, Jun 10, 2016 at 12:37 PM, Юрий Соколов wrote: > Good day, all. > > I found this issue: https://github.com/ceph/ceph/pull/5991 > > Did this issue affect librados? No -- this affected the start-up and shut-down of librbd as described in the associated tracker ticket. > Were it safe to u

Re: [ceph-users] rbd ioengine for fio

2016-06-16 Thread Jason Dillaman
On Thu, Jun 16, 2016 at 8:14 PM, Mavis Xiang wrote: > clientname=client.admin Try "clientname=admin" -- I think it's treating the client "name" as the "id", so specifying "client.admin" is really treated as "client.client.admin". -- Jason ___ ceph-use
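
For reference, a minimal fio job file using the rbd engine with that corrected option (pool and image names are examples; the image must already exist):

  [global]
  ioengine=rbd
  clientname=admin
  pool=rbd
  rbdname=fio-test
  rw=randwrite
  bs=4k
  iodepth=32

  [rbd-randwrite]
  time_based=1
  runtime=60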

Re: [ceph-users] librbd compatibility

2016-06-21 Thread Jason Dillaman
The librbd API is stable between releases. While new API methods might be added, the older API methods are kept for backwards compatibility. For example, qemu-kvm under RHEL 7 is built against a librbd from Firefly but can function using a librbd from Jewel. On Tue, Jun 21, 2016 at 1:47 AM, min

Re: [ceph-users] Ceph RBD object-map and discard in VM

2016-06-22 Thread Jason Dillaman
I'm not sure why I never received the original list email, so I apologize for the delay. Is /dev/sda1, from your example, fresh with no data to actually discard or does it actually have lots of data to discard? Thanks, On Wed, Jun 22, 2016 at 1:56 PM, Brian Andrus wrote: > I've created a downstr

Re: [ceph-users] image map failed

2016-06-23 Thread Jason Dillaman
On Thu, Jun 23, 2016 at 10:16 AM, Ishmael Tsoaela wrote: > cluster_master@nodeC:~$ rbd --image data_01 -p data info > rbd image 'data_01': > size 102400 MB in 25600 objects > order 22 (4096 kB objects) > block_name_prefix: rbd_data.105f2ae8944a > format: 2 > features: layering, exclusive-lock, obj
