The whole object would not be written to the OSDs unless you
wrote data to the whole object.
--
Jason Dillaman
Red Hat
dilla...@redhat.com
http://www.redhat.com
- Original Message -
From: "Xu (Simon) Chen"
To: ceph-users@lists.ceph.com
Sent: Wednesday, February 25,
/projects/rbd/issues?
Thanks,
--
Jason Dillaman
Red Hat
dilla...@redhat.com
http://www.redhat.com
- Original Message -
From: "koukou73gr"
To: ceph-users@lists.ceph.com
Sent: Monday, March 2, 2015 7:16:08 AM
Subject: [ceph-users] qemu-kvm and cloned rbd image
Hello
** rbd/small and backup/small are now consistent through snap2. import-diff
automatically created backup/small@snap2 after importing all changes.
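For reference, the incremental sequence being described boils down to two
commands (a sketch using the image and snapshot names from this thread):

  rbd snap create rbd/small@snap2
  rbd export-diff --from-snap snap1 rbd/small@snap2 - | rbd import-diff - backup/small

The export writes only the extents that changed between snap1 and snap2 to
stdout, and import-diff replays them against backup/small, creating
backup/small@snap2 at the end.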
--
Jason Dillaman
Red Hat
dilla...@redhat.com
http://www.redhat.com
- Original Message -
From: "Steve Anthony"
To:
An RBD image is split up into objects (by default 4MB each) within the OSDs. When
you delete an RBD image, all the objects associated with the image are removed
from the OSDs. The objects are not securely erased from the OSDs, if that is
what you are asking.
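If you want to see those backing objects for yourself, something like the
following should work (a sketch; the pool, image, and object prefix are
hypothetical):

  rbd info mypool/myimage | grep block_name_prefix
  rados -p mypool ls | grep '^rbd_data.1234abcd'

Only objects that have actually been written appear in the listing, which is
why a freshly created image consumes almost no space.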
--
Jason Dillaman
Red Hat
dilla
s rbd_directory/rbd_children" to see the data within the files.
--
Jason Dillaman
Red Hat
dilla...@redhat.com
http://www.redhat.com
- Original Message -
From: "Matthew Monaco"
To: ceph-users@lists.ceph.com
Sent: Sunday, April 12, 2015 10:57:46 PM
Subject: [ceph-use
Yes, when you flatten an image, the snapshots will remain associated with the
original parent. This is a side-effect of how librbd handles CoW with
clones. There is an open RBD feature request to add support for flattening
snapshots as well.
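For reference, the flatten itself is a single command (image spec
hypothetical):

  rbd flatten mypool/myclone

Afterwards the clone's current data no longer depends on the parent, but, as
noted above, any snapshots taken of the clone before the flatten still do.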
--
Jason Dillaman
Red Hat
dilla
ldren object so that librbd no longer thinks
any image is a child of another.
--
Jason Dillaman
Red Hat
dilla...@redhat.com
http://www.redhat.com
- Original Message -
From: "Matthew Monaco"
To: "Jason Dillaman"
Cc: ceph-users@lists.ceph.com
Sent: Monday, Apri
Can you add "debug rbd = 20" to your ceph.conf, re-run the command, and provide a
link to the generated librbd log messages?
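For example, a minimal ceph.conf snippet to capture that logging might look
like this (the log path is a placeholder):

  [client]
      debug rbd = 20
      log file = /var/log/ceph/client.$pid.log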
Thanks,
--
Jason Dillaman
Red Hat
dilla...@redhat.com
http://www.redhat.com
- Original Message -
From: "Nikola Ciprich"
To: ceph-users
'--image-features' when creating the image?
--
Jason Dillaman
Red Hat
dilla...@redhat.com
http://www.redhat.com
- Original Message -
From: "Nikola Ciprich"
To: "Jason Dillaman"
Cc: ceph-users@lists.ceph.com
Sent: Monday, April 20, 2015 12:41:26 PM
into Hammer at
some point in the future. Therefore, I would recommend waiting for the full
toolset to become available.
--
Jason Dillaman
Red Hat
dilla...@redhat.com
http://www.redhat.com
- Original Message -
From: "Christoph Adomeit"
To: ceph-users@lists.ceph.com
Sent: Tuesda
The issue appears to be tracked with the following BZ for RHEL 7:
https://bugzilla.redhat.com/show_bug.cgi?id=1187533
--
Jason Dillaman
Red Hat
dilla...@redhat.com
http://www.redhat.com
- Original Message -
From: "Wido den Hollander"
To: "Somnath Ro
You are correct -- it is little endian like the other values. I'll open a
ticket to correct the document.
--
Jason Dillaman
Red Hat
dilla...@redhat.com
http://www.redhat.com
- Original Message -
From: "Ultral"
To: ceph-us...@ceph.com
Sent: Thursday, May 7,
two
snapshots and no trim operations released your changes back? If you diff from
move2db24-20150428 to HEAD, do you see all your changes?
--
Jason Dillaman
Red Hat
dilla...@redhat.com
http://www.redhat.com
- Original Message -
From: "Ultral"
To: "ceph-users&qu
a few kilobytes of
deltas)? Also, would it be possible for you to create a new test image in the
same pool, snapshot it, use 'rbd bench-write' to generate some data, and then
verify if export-diff is properly working against the new image?
--
Jason Dillaman
Red Hat
dilla..
/master/install/get-packages/#add-ceph-development
--
Jason Dillaman
Red Hat
dilla...@redhat.com
http://www.redhat.com
- Original Message -
From: "Pavel V. Kaygorodov"
To: "Tuomas Juntunen"
Cc: "ceph-users"
Sent: Tuesday, May 12, 2015 3:55:21 PM
Subjec
e your issues on Giant and was unable to recreate
it. I would normally ask for a log dump with 'debug rbd = 20', but given the
size of your image, that log will be astronomically large.
--
Jason Dillaman
Red Hat
dilla...@redhat.com
http://www.redhat.com
- Original Message ---
th/to/my/new/ceph.conf" QEMU parameter where the RBD cache is
explicitly disabled [2].
[1]
http://git.qemu.org/?p=qemu.git;a=blob;f=block/rbd.c;h=fbe87e035b12aab2e96093922a83a3545738b68f;hb=HEAD#l478
[2] http://ceph.com/docs/master/rbd/qemu-rbd/#usage
--
Jason Dillaman
Red
> On Mon, Jun 8, 2015 at 10:43 PM, Josh Durgin wrote:
> > On 06/08/2015 11:19 AM, Alexandre DERUMIER wrote:
> >>
> >> Hi,
>
> looking at the latest version of QEMU,
> >>
> >>
> >> It seems that it was already this behaviour since the addition of rbd_cache
> >> parsing in rbd.c by Josh in 2
> In the past we've hit some performance issues with RBD cache that we've
> fixed, but we've never really tried pushing a single VM beyond 40+K read
> IOPS in testing (or at least I never have). I suspect there's a couple
> of possibilities as to why it might be slower, but perhaps joshd can
> chi
cs (or can you gather any statistics) that indicate the
percentage of block-size, zeroed extents within the clone images' RADOS
objects? If there is a large amount of waste, it might be possible /
worthwhile to optimize how RBD handles copy-on-write operations against the
clone.
--
Jas
will locate all associated RADOS objects, download the
objects one at a time, and perform a scan for fully zeroed blocks. It's not
the most CPU efficient script, but it should get the job done.
[1] http://fpaste.org/248755/43803526/
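If the paste is ever unavailable, the same idea can be roughly approximated
from the shell (a sketch, not the referenced script; the pool and object
prefix are hypothetical):

  for obj in $(rados -p mypool ls | grep '^rbd_data.1234abcd'); do
      rados -p mypool get "$obj" /tmp/chunk
      [ -z "$(tr -d '\0' < /tmp/chunk | head -c1)" ] && echo "$obj is fully zeroed"
  done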
--
Jason Dillaman
Red Hat Ceph Storage Engineering
dilla
There currently is no mechanism to rename snapshots without hex editing the RBD
image header data structure. I created a new Ceph feature request [1] to add
this ability in the future.
[1] http://tracker.ceph.com/issues/12678
--
Jason Dillaman
Red Hat Ceph Storage Engineering
dilla
It sounds like you have the rados CLI tool from an earlier Ceph release (< Hammer)
installed and it is attempting to use the librados shared library from a newer
(>= Hammer) version of Ceph.
Jason
- Original Message -
> From: "Aakanksha Pudipeddi-SSI"
> To: ceph-us...@ceph.com
> Sent:
That rbd CLI command is a new feature that will be included with the upcoming
infernalis release. In the meantime, you can use this approach [1] to estimate
your RBD image usage.
[1] http://ceph.com/planet/real-size-of-a-ceph-rbd-image/
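The approach in [1] boils down to summing the extent lengths reported by
'rbd diff' (a sketch; the image spec is hypothetical):

  rbd diff mypool/myimage | awk '{ SUM += $2 } END { print SUM/1024/1024 " MB" }'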
--
Jason Dillaman
Red Hat Ceph Storage Engineering
Unfortunately, the tool to dynamically enable/disable image features (rbd
feature disable <image> <features>) was added during the Infernalis
development cycle. Therefore, in the short term you would need to recreate the
images via export/import or clone/flatten.
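As a sketch, either migration path might look like this (image names
hypothetical):

  # export/import
  rbd export mypool/image - | rbd import --image-format 2 - mypool/image-new

  # clone/flatten
  rbd snap create mypool/image@migrate
  rbd snap protect mypool/image@migrate
  rbd clone mypool/image@migrate mypool/image-new
  rbd flatten mypool/image-new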
There are several object map / exclusive loc
that issue?
--
Jason Dillaman
Red Hat
dilla...@redhat.com
http://www.redhat.com
- Original Message -
From: "Chu Duc Minh"
To: ceph-de...@vger.kernel.org, "ceph-users@lists.ceph.com >>
ceph-users@lists.ceph.com"
Sent: Friday, November 7, 2014 7:05:5
In the longer term, there is an in-progress RBD feature request to add a new
RBD command to see image disk usage: http://tracker.ceph.com/issues/7746
--
Jason Dillaman
Red Hat
dilla...@redhat.com
http://www.redhat.com
- Original Message -
From: "Sébastien Han"
T
> I have a coredump with the size of 1200M compressed.
>
> Where shall I put the dump?
>
I believe you can use the ceph-post-file utility [1] to upload the core and
your current package list to ceph.com.
Jason
[1] http://ceph.com/docs/master/man/8/ceph-post-file/
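Usage is a one-liner; the description text is free-form:

  ceph-post-file -d 'librbd coredump, see ceph-users thread' /path/to/coredump.gz

The tool prints an upload tag that you can then share on the list so the file
can be located.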
Any particular reason why you have the image mounted via the kernel client
while performing a benchmark? Not to say this is the reason for the crash, but
strange since 'rbd bench-write' will not test the kernel IO speed since it uses
the user-mode library. Are you able to test bench-write with
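For reference, a typical bench-write invocation looks something like this
(image spec and sizes hypothetical):

  rbd bench-write mypool/myimage --io-size 4096 --io-threads 16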
> The client version is what was installed by the ceph-deploy install
> ceph-client command. Via the debian-hammer repo. Per the quickstart doc.
> Are you saying I need to install a different client version somehow?
You listed the version as 0.80.10 which is a Ceph Firefly release -- Hammer is
0.
This is usually indicative of the same tracepoint event being included by both
a static and dynamic library. See the following thread regarding this issue
within Ceph when LTTng-ust was first integrated [1]. Since I don't have any
insight into your application, are you somehow linking against
ifying the image while at the same time not
crippling other use cases. librbd also supports cooperative exclusive lock
transfer, which is used in the case of qemu VM migrations where the image needs
to be opened R/W by two clients at the same time.
--
Jason Dillaman
- Original Mes
You can run the program under 'gdb' with a breakpoint on the 'abort' function
to catch the program's abnormal exit. Assuming you have debug symbols
installed, you should hopefully be able to see which probe is being
re-registered.
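A minimal session along those lines (binary name hypothetical):

  gdb --args ./my_rbd_app
  (gdb) break abort
  (gdb) run
  (gdb) backtrace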
--
Jason Dillaman
- Orig
As a background, I believe LTTng-UST is disabled for RHEL7 in the Ceph project
only due to the fact that EPEL 7 doesn't provide the required packages [1].
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1235461
--
Jason Dillaman
- Original Message -
> From: "Paul Man
> On 22/09/15 17:46, Jason Dillaman wrote:
> > As a background, I believe LTTng-UST is disabled for RHEL7 in the Ceph
> > project only due to the fact that EPEL 7 doesn't provide the required
> > packages [1].
>
> interesting. so basically our program migh
ourself.
The new exclusive-lock feature is managed via 'rbd feature enable/disable'
commands and does ensure that only the current lock owner can manipulate the
RBD image. It was introduced to support the RBD object map feature (which can
track which backing RADOS objects are in-use in order
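For example (image spec hypothetical):

  rbd feature enable mypool/myimage exclusive-lock
  rbd feature disable mypool/myimage exclusive-lock

Note that features such as object-map depend on exclusive-lock being enabled
first.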
It looks like the issue you are experiencing was fixed in the Infernalis/master
branches [1]. I've opened a new tracker ticket to backport the fix to Hammer
[2].
--
Jason Dillaman
[1]
https://github.com/sponce/ceph/commit/e4c27d804834b4a8bc495095ccf5103f8ffbcc1e
[2]
approach via "rbd lock
add/remove" to verify that no other client has the image mounted before
attempting to mount it locally.
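As a sketch of that advisory-lock fencing (image and lock names hypothetical):

  rbd lock add mypool/myimage my-host-id          # fails if already locked
  rbd lock list mypool/myimage                    # shows the locker, e.g. client.4567
  rbd lock remove mypool/myimage my-host-id client.4567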
--
Jason Dillaman
- Original Message -
> From: "Allen Liao"
> To: ceph-users@lists.ceph.com
> Sent: Wednesday, September 23, 201
The following advice assumes these images don't have associated snapshots
(since keeping the non-sparse snapshots will keep utilizing the storage space):
Depending on how you have your images set up, you could snapshot and clone the
images, flatten the newly created clone, and delete the original
isn't enabled.
[1] https://github.com/ceph/ceph/pull/6135
--
Jason Dillaman
- Original Message -
> From: "Ken Dreyer"
> To: "Goncalo Borges"
> Cc: ceph-users@lists.ceph.com
> Sent: Thursday, October 8, 2015 11:58:27 AM
> Subject: Re: [ceph-users] A
mental, you could
install the infernalis-based rbd tools from the Ceph gitbuilder [1] into a
sandbox environment and use the tool against your pre-infernalis cluster.
[1] http://ceph.com/gitbuilder.cgi
--
Jason Dillaman
- Original Message -
> From: "Corin Langosch"
>
o the object, so they
will be read via LevelDB or RocksDB (depending on your configuration) within
the object's PG's OSD.
--
Jason Dillaman
- Original Message -
> From: "Allen Liao"
> To: ceph-users@lists.ceph.com
> Sent: Monday, October 12, 2015 2:52
ite
operations by decoupling objects from the underlying filesystem's actual
storage path.
[1]
https://github.com/ceph/ceph/blob/master/doc/rados/configuration/journal-ref.rst
--
Jason Dillaman
uncate, overwrite, etc).
--
Jason Dillaman
There is no such interface currently on the librados / OSD side to abort IO
operations. Can you provide some background on your use-case for aborting
in-flight IOs?
--
Jason Dillaman
- Original Message -
> From: "min fang"
> To: ceph-users@lists.ceph.com
> Se
Can you provide more details on your setup and how you are running the rbd
export? If clearing the pagecache, dentries, and inodes solves the issue, it
sounds like it's outside of Ceph (unless you are exporting to a CephFS or krbd
mount point).
--
Jason Dillaman
- Original Me
> On Tue, 20 Oct 2015, Jason Dillaman wrote:
> > There is no such interface currently on the librados / OSD side to abort
> > IO operations. Can you provide some background on your use-case for
> > aborting in-flight IOs?
>
> The internal Objecter has a cancel interf
[1] http://tracker.ceph.com/issues/13559
--
Jason Dillaman
- Original Message -
> From: "Andrei Mikhailovsky"
> To: ceph-us...@ceph.com
> Sent: Wednesday, October 21, 2015 8:17:39 AM
> Subject: [ceph-users] [urgent] KVM issues after upgrade to 0.94.4
> Hello
command-line properties [1]. If you have "rbd cache =
true" in your ceph.conf, it would override "cache=none" in your qemu
command-line.
[1] https://lists.nongnu.org/archive/html/qemu-devel/2015-06/msg03078.html
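One way to make the QEMU setting authoritative is to pin the option on the
drive definition itself (a sketch; pool and image names hypothetical):

  qemu-system-x86_64 ... \
    -drive format=rbd,file=rbd:mypool/myimage:rbd_cache=false,cache=none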
--
Jason Dillaman
afe to detach a clone from a parent image even if snapshots exist due to the
changes to copyup.
--
Jason Dillaman
- Original Message -
> From: "Zhongyan Gu"
> To: dilla...@redhat.com
> Sent: Thursday, October 22, 2015 5:11:56 AM
> Subject: how to understand deep
ter flatten, child
> snapshot still has parent snap info?
> overlap: 1024 MB
Because deep-flatten wasn't enabled on the clone.
> Another question is since deep-flatten operations are applied to cloned
> image, why we need to create p
would immediately race to
re-establish the lost watch/notify connection before you could disassociate the
cache tier.
--
Jason Dillaman
- Original Message -
> From: "Robert LeBlanc"
> To: ceph-users@lists.ceph.com
> Sent: Monday, October 26, 2015 12:22:06 PM
> Subject
> Hi Jason dillaman
> Recently I worked on the feature http://tracker.ceph.com/issues/13500 , when
> I read the code about librbd, I was confused by RBD_FLAG_OBJECT_MAP_INVALID
> flag.
> When I create an rbd with "--image-features = 13", we enable object-map
> featu
r it's been enabled.
--
Jason Dillaman
It sounds like you ran into this issue [1]. It's been fixed in upstream master
and infernalis branches, but the backport is still awaiting release on hammer.
[1] http://tracker.ceph.com/issues/12885
--
Jason Dillaman
- Original Message -
> From: "Giuseppe Civitella&
I don't see the read request hitting the wire, so I am thinking your client
cannot talk to the primary PG for the 'rb.0.16cf.238e1f29.' object.
Try adding "debug objecter = 20" to your configuration to get more details.
--
Jason Dillaman
- Orig
I'd recommend running your program through valgrind first to see if something
pops out immediately.
--
Jason Dillaman
- Original Message -
> From: "min fang"
> To: ceph-users@lists.ceph.com
> Sent: Saturday, October 31, 2015 10:43:22 PM
> Subject: Re
Most likely not going to be related to 13045 since you aren't actively
exporting an image diff. The most likely problem is that the RADOS IO context
is being closed prior to closing the RBD image.
--
Jason Dillaman
- Original Message -
> From: "Voloshanenko Igor&
I can't say that I know too much about Cloudstack's integration with RBD to
offer much assistance. Perhaps if Cloudstack is receiving an exception for
something, it is not properly handling object lifetimes in this case.
--
Jason Dillaman
- Original Message
--
Jason Dillaman
- Original Message -
> From: "Jackie"
> To: ceph-users@lists.ceph.com
> Sent: Tuesday, November 3, 2015 8:47:19 PM
> Subject: [ceph-users] Can snapshot of image still be used while flattening
> the image?
>
> Hi experts,
>
all users of
image2 (and its descendants) that its parent has been removed. If you had a
clone of image2 open at the time, the clone of image2 would then know it would
no longer need to access image1 since the link from image1 to image2 was
removed.
--
Jason Dillaman
> - Origi
/wiki/Clustered_SCSI_target_using_RBD
--
Jason Dillaman
- Original Message -
> From: "Gaetan SLONGO"
> To: ceph-users@lists.ceph.com
> Sent: Tuesday, November 3, 2015 10:00:59 AM
> Subject: [ceph-users] iSCSI over RDB is a good idea ?
>
> Dear Ceph
, it appears that
oVirt might even have some development to containerize a small Cinder/Glance
OpenStack setup [2].
[1] https://www.youtube.com/watch?v=elEkGfjLITs
[2] http://www.ovirt.org/CinderGlance_Docker_Integration
--
Jason Dillaman
Red Hat Ceph Storage Engineering
dilla...@redhat.com
Can you retry with 'rbd --rbd-cache=false -p images export joe /root/joe.raw'?
--
Jason Dillaman
- Original Message -
> From: "Joe Ryner"
> To: "Jason Dillaman"
> Cc: ceph-us...@ceph.com
> Sent: Thursday, November 5, 2015 4:14:28 PM
&g
request than your cache can allocate.
[1] http://tracker.ceph.com/issues/13388
--
Jason Dillaman
- Original Message -
> From: "Joe Ryner"
> To: "Jason Dillaman"
> Cc: ceph-us...@ceph.com
> Sent: Thursday, November 5, 2015 4:24:29 PM
> Subject: Re: [
On the bright side, at least your week of export-related pain should result in
a decent speed boost when your clients get 64MB of cache instead of 64B.
--
Jason Dillaman
- Original Message -
> From: "Joe Ryner"
> To: "Jason Dillaman"
> Cc: ceph-us
volume
internal to the VM.
--
Jason Dillaman
- Original Message -
> From: "Lazuardi Nasution"
> To: ceph-users@lists.ceph.com
> Sent: Sunday, November 8, 2015 12:34:16 PM
> Subject: [ceph-users] Multiple Cache Pool with Single Storage Pool
> Hi,
> I
This is actually what librbd does internally for the C interface.
--
Jason Dillaman
- Original Message -
> From: "Nikola Ciprich"
> To: "ceph-users"
> Sent: Sunday, November 8, 2015 4:27:13 AM
> Subject: [ceph-users] python binding - snap rollback
H map.
You are correct that by using a local (host) persistent cache, you have
effectively removed the ability to safely live-migrate.
--
Jason Dillaman
- Original Message -
> From: "Lazuardi Nasution"
> To: "Jason Dillaman"
> Cc: ceph-users@lists.ceph.co
I've seen this issue before when you (somehow) mix-and-match librbd, librados,
and rbd builds on the same machine. The packaging should prevent you from
mixing versions, but perhaps somehow you have different package versions
installed.
--
Jason Dillaman
- Original Me
Does child image "images/0a38b10d-2184-40fc-82b8-8bbd459d62d2" have snapshots?
--
Jason Dillaman
- Original Message -
> From: "Jackie"
> To: ceph-users@lists.ceph.com
> Sent: Thursday, November 19, 2015 12:05:12 AM
> Subject: [ceph-users] Aft
Couldn't hurt to open a feature request for this on the tracker.
--
Jason Dillaman
- Original Message -
> From: "Haomai Wang"
> To: "Allen Liao"
> Cc: ceph-users@lists.ceph.com
> Sent: Saturday, November 21, 2015 11:57:11 AM
> Subject: Re:
RBD_FEATURE_STRIPINGV2 = 2
RBD_FEATURE_EXCLUSIVE_LOCK = 4
RBD_FEATURE_OBJECT_MAP = 8
RBD_FEATURE_FAST_DIFF = 16
RBD_FEATURE_DEEP_FLATTEN = 32
RBD_FEATURE_JOURNALING = 64
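Those values are bit flags (layering is bit 1), so the numeric
'--image-features' value is simply their sum. For example (image spec
hypothetical):

  # layering (1) + exclusive-lock (4) + object-map (8) = 13
  rbd create --size 1024 --image-format 2 --image-features 13 mypool/myimage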
--
Jason Dillaman
- Original Message -
> From: "Gregory Farnum"
> To: "NEVEU Stephane"
lock_exclusive() / lock_shared() methods are not related to image watchers.
Instead, they are tied to the advisory locking mechanism -- and list_lockers() can
be used to query who has a lock.
--
Jason Dillaman
- Original Message -
> From: "NEVEU Stephane"
> To:
On Wed, Apr 27, 2016 at 2:07 PM, Tyler Wilson wrote:
> $ rbd diff backup/cd4e5d37-3023-4640-be5a-5577d3f9307e | awk '{ SUM += $2 }
> END { print SUM/1024/1024 " MB" }'
> 49345.4 MB
Is this a cloned image? That awk trick doesn't account for discarded
regions (i.e. when column three says "zero" in the output).
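A zero-aware variant of the same trick might look like this (a sketch):

  rbd diff backup/cd4e5d37-3023-4640-be5a-5577d3f9307e | awk '$3 != "zero" { SUM += $2 } END { print SUM/1024/1024 " MB" }'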
/1024 " MB" }'
> 49345.4 MB
>
> Thanks for the help.
>
> On Wed, Apr 27, 2016 at 12:22 PM, Jason Dillaman
> wrote:
>>
>> On Wed, Apr 27, 2016 at 2:07 PM, Tyler Wilson
>> wrote:
>> > $ rbd diff backup/cd4e5d37-3023-4640-be5a-5577d3f9
There is no current capability to support snapshot consistency groups
within RBD; however, support for snapshot consistency groups is
currently being developed for the Ceph kraken release.
On Sun, May 1, 2016 at 11:04 AM, Yair Magnezi wrote:
> Hello Guys .
>
> I'm a little bit confused about ceph
On Tue, May 3, 2016 at 3:20 AM, Yair Magnezi wrote:
> Are RBD volume consistency groups supported in Jewel? Can we take
> consistent snapshots for a volume consistency group?
No, this feature is being actively worked on for the Kraken release of
Ceph (the next major release after Jewel).
--
J
Awesome work Mark! Comments / questions inline below:
On Wed, May 11, 2016 at 9:21 AM, Mark Nelson wrote:
> There are several commits of interest that have a noticeable effect on 128K
> sequential read performance:
>
>
> 1) https://github.com/ceph/ceph/commit/3a7b5e3
>
> This commit was the firs
On Wed, May 11, 2016 at 10:07 AM, Mark Nelson wrote:
> Perhaps 0024677 or 3ad19ae introduced another regression that was being
> masked by c474e4 and when 66e7464 improved the situation, the other
> regression appeared?
0024677 is in Hammer as 7004149 and 3ad19ae is in Hammer as b38da480.
I opene
On Thu, May 12, 2016 at 6:33 AM, Mika c wrote:
> 4.) Both sites installed "rbd-mirror".
> Start daemon "rbd-mirror" .
> On site1:$sudo rbd-mirror -m 192.168.168.21:6789
> On site2:$sudo rbd-mirror -m 192.168.168.22:6789
Assuming you keep "ceph" as the local cluster name and u
On Fri, May 13, 2016 at 6:39 AM, Mika c wrote:
> Hi Dillaman,
> Thank you for getting back to me.
> My system is ubuntu, so I am using "sudo rbd-mirror --cluster=local
> --log-file=mirror.log --debug-rbd-mirror=20/5" instead. I read your
> reply but am still confused.
For upstart systems, you can r
As of today, neither the rbd CLI nor librbd imposes any limit on the
maximum length of an RBD image name, whereas krbd has roughly a 100
character limit and the OSDs have a default object name limit of roughly
2000 characters. While there is a patch under review to increase the krbd
limit, it would
On Thu, May 19, 2016 at 12:15 PM, Dan van der Ster wrote:
> I hope it will just refuse to
> attach, rather than attach but allow bad stuff to happen.
You are correct -- older librbd/krbd clients will refuse to open
images that have unsupported features enabled.
--
Jason
Any chance you are using cache tiering? It's odd that you can see the
objects through "rados ls" but cannot delete them with "rados rm".
On Tue, May 24, 2016 at 4:34 PM, Kevan Rehm wrote:
> Greetings,
>
> I have a small Ceph 10.2.1 test cluster using a 3-replicate pool based on 24
> SSDs configu
On Wed, Jun 1, 2016 at 8:32 AM, Alexandre DERUMIER wrote:
> Hi,
>
> I'm beginning to look at rbd mirror features.
>
> How much space does it take? Is it only a journal with some kind of list of
> block changes?
There is a per-image journal which is a log of all modifications to
the image. The log
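Journaling is itself a per-image feature, so enabling it looks like any other
feature toggle (a sketch; image spec hypothetical):

  rbd feature enable mypool/myimage journaling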
That command is used for debugging to show the notifications sent by librbd
whenever image properties change. These notifications are used by other
librbd clients with the same image open to synchronize state (e.g. a
snapshot was created so instruct the other librbd client to refresh the
image's h
Are you able to run the following command successfully?
rados -p glebe-sata get rbd_id.hypervtst-lun04 -
On Sun, Jun 5, 2016 at 8:49 PM, Adrian Saul
wrote:
>
> I upgraded my Infernalis semi-production cluster to Jewel on Friday. While
> the upgrade went through smoothly (aside fro
eph]# rados ls -p glebe-sata|grep rbd_id
>> rbd_id.cloud2sql-lun01
>> rbd_id.glbcluster3-vm17
>> rbd_id.holder <<< a create that said it failed while I was debugging this
>> rbd_id.pvtcloud-nfs01
>> rbd_id.hypervtst-lun05
>> rbd_id.test02
>> rbd_id.cloud2sql-lun02
>> rbd_id.fiotest2
>> rbd_id.radmast02-lun04
>> rbd_id.hypervtst-lun04
>> rbd_i
I suspect it
> might be related to the OSDs being restarted during the package upgrade
> process before all libraries are upgraded.
>
>
>> -Original Message-
>> From: Jason Dillaman [mailto:jdill...@redhat.com]
>> Sent: Monday, 6 June 2016 12:37 PM
>>
Can you run "rbd info" against that image? I suspect it is a harmless
but alarming error message. I actually just opened a tracker ticket
this morning for a similar issue for rbd-mirror [1] when it bootstraps
an image to a peer cluster. In that case, it was a harmless error
message that we will
ect-map, fast-diff,
> deep-flatten
> flags:
> parent: rbd/xenial-base@gold-copy
> overlap: 8192 MB
>
>
> Brendan
>
>
> From: Jason Dillaman [jdill...@redhat.com]
> Sent: Tuesday, June 07, 2016 6:56 P
Alternatively, if you are using RBD format 2 images, you can run
"rados -p <pool> listomapvals rbd_directory" to ensure it has
a bunch of key/value pairs for your images. There was an issue noted
[1] after upgrading to Jewel where the omap values were all missing on
several v2 RBD image headers -- resul
On Tue, Jun 14, 2016 at 8:15 AM, Fran Barrera wrote:
> 2016-06-14 14:02:54.634 2256 DEBUG glance_store.capabilities [-] Store
> glance_store._drivers.rbd.Store doesn't support updating dynamic storage
> capabilities. Please overwrite 'update_capabilities' method of the store to
> implement updatin
On Fri, Jun 10, 2016 at 12:37 PM, Юрий Соколов wrote:
> Good day, all.
>
> I found this issue: https://github.com/ceph/ceph/pull/5991
>
> Did this issue affect librados?
No -- this affected the start-up and shut-down of librbd as described
in the associated tracker ticket.
> Were it safe to u
On Thu, Jun 16, 2016 at 8:14 PM, Mavis Xiang wrote:
> clientname=client.admin
Try "clientname=admin" -- I think it's treating the client "name" as
the "id", so specifying "client.admin" is really treated as
"client.client.admin".
--
Jason
The librbd API is stable between releases. While new API methods
might be added, the older API methods are kept for backwards
compatibility. For example, qemu-kvm under RHEL 7 is built against a
librbd from Firefly but can function using a librbd from Jewel.
On Tue, Jun 21, 2016 at 1:47 AM, min
I'm not sure why I never received the original list email, so I
apologize for the delay. Is /dev/sda1, from your example, fresh with
no data to actually discard or does it actually have lots of data to
discard?
Thanks,
On Wed, Jun 22, 2016 at 1:56 PM, Brian Andrus wrote:
> I've created a downstr
On Thu, Jun 23, 2016 at 10:16 AM, Ishmael Tsoaela wrote:
> cluster_master@nodeC:~$ rbd --image data_01 -p data info
> rbd image 'data_01':
> size 102400 MB in 25600 objects
> order 22 (4096 kB objects)
> block_name_prefix: rbd_data.105f2ae8944a
> format: 2
> features: layering, exclusive-lock, obj