Thank you for the reply.
This cluster is still on hammer 0.94.11. I do have both kernel clients
and libvirt based systems using it. Unfortunately that puts the balancer
module out of the question (I am aware of it though).
I have a couple of those (jewel / hammer) in production that are coming
Which version of Ceph are you running? Do you have any kernel clients? If
yes, which kernel version? These questions are all leading to see if
you can enable the Luminous/Mimic mgr module balancer with upmap. If you
can, it is hands down the best way to balance your cluster.
On Sat, Oct 27, 20
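For reference, a minimal sketch of turning on the upmap balancer described
above (commands from the stock Luminous/Mimic mgr balancer; the min-compat
step is exactly what pre-Luminous kernel clients block):

ceph osd set-require-min-compat-client luminous   # refuses if old clients are connected
ceph mgr module enable balancer
ceph balancer mode upmap
ceph balancer on
ceph balancer status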
If you had a specific location for the wal it would show up there. If
there is no entry for the wal, then it is using the same setting as the db.
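A quick sketch of what to look for (key names as commonly seen in BlueStore
OSD metadata; they can differ between releases, and osd.0 is only an example
id):

ceph osd metadata 0 | grep -E 'bluefs|bluestore_bdev'
# a dedicated wal shows up as e.g. "bluefs_dedicated_wal" /
# "bluefs_wal_partition_path"; if only the db entries
# ("bluefs_db_partition_path", ...) are present, the wal lives on the same
# device as the db.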
On Sun, Oct 28, 2018, 9:26 PM Robert Stanford wrote:
>
> Mehmet: it doesn't look like wal is mentioned in the osd metadata. I see
> bluefs slow, bluestore bdev, and bluefs db mentioned only.
Mehmet: it doesn't look like wal is mentioned in the osd metadata. I see
bluefs slow, bluestore bdev, and bluefs db mentioned only.
On Sun, Oct 28, 2018 at 1:48 PM wrote:
> IIRC there is a command like
>
> ceph osd metadata
>
> Where you should be able to find information like this
>
> Hab
> -
We accidentally found ourselves upgraded from 12.2.8 to 13.2.2 after a
ceph-deploy install went awry (we were expecting it to upgrade to 12.2.9 and
not jump a major release without warning).
Anyway ... as a result, we ended up with an mds journal error and 1 daemon
reporting as damaged.
Having g
As a little "heads-up":
If you are running Ubuntu Bionic 18.04, or Xenial 16.04 with "HWE"
kernels, and have systems running under 4.15.0-36 - which was the
default between 2018-10-01 and 2018-10-22 - please consider upgrading to
the latest 4.15.0-38 ASAP (or downgrade to 4.15.0-34).
4.15.0-36 ha
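A hedged sketch of checking and moving off the affected kernel (the versioned
linux-image package name is an assumption based on Ubuntu's usual naming;
verify what your mirror actually provides):

uname -r                                        # confirm whether you are on 4.15.0-36
apt-get update
apt-get install linux-image-4.15.0-38-generic   # or keep/boot 4.15.0-34 instead
reboot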
This feature is forthcoming with the Nautilus release of Ceph:
$ rbd info image1
rbd image 'image1':
        size 1 GiB in 256 objects
        order 22 (4 MiB objects)
        snapshot_count: 0
        id: 101f86439b20
        block_name_prefix: rbd_data.101f86439b20
        format: 2
        features: layering, exclusive-lock, object-map, fast-diff, d
You can also use "rbd disk-usage <image-spec>" to compute the
usage of a snapshot.
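A hypothetical example of what that looks like (pool, image, snapshot names
and the sizes are made up; "rbd du" is the short form of "rbd disk-usage"):

$ rbd du rbd/image1
NAME          PROVISIONED USED
image1@snap1        1 GiB 256 MiB
image1              1 GiB  64 MiB
<TOTAL>             1 GiB 320 MiB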
On Sun, Oct 28, 2018 at 4:39 PM Paul Emmerich wrote:
>
> "rbd diff" tells you what changed in an image since a snapshot:
>
> rbd diff --from-snap <snap> <pool>/<image>
>
>
> Paul
>
> --
> Paul Emmerich
>
> Looking for help with your Ceph cluster
"rbd diff" tells you what changed in an image since a snapshot:
rbd diff --from-snap <snap> <pool>/<image>
Paul
--
Paul Emmerich
Looking for help with your Ceph cluster? Contact us at https://croit.io
croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90
On Sun., 28 Oct. 2018 at 20:38
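A minimal sketch of turning that diff into a single changed-bytes figure
(snapshot and image names are placeholders, and the JSON output format with
per-extent "length" fields is assumed):

rbd diff --from-snap snap1 rbd/image1 --format json \
  | python3 -c 'import json,sys; print(sum(e["length"] for e in json.load(sys.stdin)))'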
Hi,
with Filestore, to estimate the weight of snapshots we use a simple find
script on each OSD:
nice find "$OSDROOT/$OSDDIR/current/" \
    -type f -not -name '*_head_*' -not -name '*_snapdir_*' \
    -printf '%P\n'
Then we aggregate by image prefix, and obtain an estimation of each
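A rough sketch of that aggregation step (it assumes the default filestore
file naming, where "_" in object names is escaped so rbd data files contain
"udata.<prefix>"; it only counts clone objects per image prefix, it does not
weigh their actual sizes):

nice find "$OSDROOT/$OSDDIR/current/" \
    -type f -not -name '*_head_*' -not -name '*_snapdir_*' \
    -printf '%f\n' \
  | grep -oE 'udata\.[0-9a-f]+' \
  | sort | uniq -c | sort -rn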
IIRC there is a command like
ceph osd metadata
Where you should be able to find information like this
Hab
- Mehmet
On 21 October 2018 19:39:58 CEST, Robert Stanford wrote:
> I did exactly this when creating my osds, and found that my total
> utilization is about the same as the sum of the
The only way to check this is to examine each individual object the RBD
consists of:
rbd info <pool>/<image>
--> block_name_prefix: rbd_data.XXXX
rados -p rbd stat rbd_data.XXXX.0000
rados -p rbd stat rbd_data.XXXX.0001
rados -p rbd stat rbd_data.XXXX.0002
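A minimal sketch that automates this (pool and image names are placeholders;
the exact "rados stat" output format varies between releases, so treat the
sort keys as approximate):

PREFIX=$(rbd info rbd/image1 | awk -F': ' '/block_name_prefix/ {print $2}')
rados -p rbd ls | grep "^${PREFIX}" | while read -r obj; do
    rados -p rbd stat "$obj"        # prints one mtime per object
done | sort -k3,4 | tail -1         # newest mtime ~= last modification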
Hi!
Is there an easy way to check when an image was last modified?
I want to make sure, that the images I want to clean up, were not used for
a long time.
Kind regards
Kevin
Hello Mike, Jason,
Assuming we adapt the current LIO configuration scripts and put QLogic HBAs in
our SCSI targets, could we use FC instead of iSCSI as a SCSI transport protocol
with LIO? Would this still work with multipathing and ALUA?
Do you see any issues coming from this type of configura